Disney and Charter Team Up on Piracy Mitigation

Post Syndicated from Ernesto original https://torrentfreak.com/disney-and-charter-team-up-on-piracy-mitigation/

With roughly 22 million subscribers, Charter Communications is one of the largest Internet providers in the US.

The company operates under the Spectrum brand and offers a wide variety of services including TV and Internet access.

In an effort to provide more engaging content to its customers, this week Charter signed a major new distribution agreement with The Walt Disney Company.

The new partnership will provide the telco’s customers with access to popular titles in Disney’s services, including Hulu, ESPN+ and the yet-to-be-launched streaming service Disney+.

The fact that these giant companies have teamed up is a big deal, business-wise and for consumers. Most Spectrum subscribers will likely be pleased to have more options, but there may also be a subgroup that has concerns.

Away from the major headline, both companies also state that they have agreed to partner up on piracy mitigation.

“This agreement will allow Spectrum to continue delivering to its customers popular Disney content […] and will begin an important collaborative effort to address the significant issue of piracy mitigation,” says Tom Montemagno, EVP, Programming Acquisition for Charter.

The public press releases give no concrete details of what this “piracy mitigation” will entail. They do mention that the two companies will work together to “implement business rules” and address issues such as “unauthorized access and password sharing.”

TorrentFreak reached out to Charter for further details, but the company said that it’s not elaborating beyond the press release at this time.

The term “mitigating” suggests that both companies will actively work together to reduce piracy. This is interesting because Charter is currently caught up in a major piracy liability lawsuit in a US federal court in Colorado.

Earlier this year, the Internet provider was sued by several music companies, which argued that it turned a blind eye to piracy by failing to terminate the accounts of repeat infringers. In addition, Charter stands accused of willingly profiting from these alleged copyright infringements.

Charter’s new agreement with Disney suggests that a more proactive anti-piracy stance could be on the way. One possibility might be a stricter repeat infringer policy but, without further details, it remains unclear precisely what the “piracy mitigation” entails.

In any case, it will be interesting to see how the two companies plan to put a dent in current piracy levels, and what that means for Charter customers.


Friday Squid Blogging: Robot Squid Propulsion

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/friday_squid_bl_690.html

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we’re told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There’s also plenty of work to do with using the fins for dynamic control, which the researchers say will “reveal the superiority of the natural flying squid movement.”

I can’t find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Discover metadata with AWS Lake Formation: Part 2

Post Syndicated from Julia Soscia original https://aws.amazon.com/blogs/big-data/discover-metadata-with-aws-lake-formation-part-2/

Data lakes are an increasingly popular way to aggregate, store, and analyze both structured and unstructured data. AWS Lake Formation makes it easy for you to set up, secure, and manage your data lakes.

In Part 1 of this series, you learned how to create and explore a data lake using Lake Formation. This post walks you through data discovery using the metadata search capabilities of Lake Formation in the console, and shows how search results are restricted by column-level permissions.

Prerequisites

For this post, you need the following:

  • The data lake, databases, and tables that you created in Part 1 of this series.

Metadata search in the console

In this post, we demonstrate the catalog search capabilities offered by the Lake Formation console:

  • Search by classification
  • Search by keyword
  • Search by tag: attribute
  • Multiple filter searches

Search by classification

Using the metadata catalog search capabilities, search across all tables within your data lake. Your data lake contains three tables: two share the name amazon_reviews but belong to your simulated “prod” and “test” databases, and the third is trip_data.

  1. In the Lake Formation console, under Data catalog, choose Tables.
  2. In the search bar, under Resource Attributes, choose Classification, type CSV, and press Enter. You should see only the trip_data table, which you formatted as CSV in your data lake. The amazon_reviews tables do not appear because they are in Parquet format.
  3. In the Name column, choose trip_data. Under Table details, you can see that the classification CSV is correctly identified by the metadata search filter.

Search by keyword

Next, search across your entire data lake filtering metadata by keyword.

  1. To refresh the list of tables, under Data catalog, choose Tables again.
  2. From the search bar, type star_rating, and press Enter. Now that you have applied the filter, you should see only the amazon_reviews tables because they both contain a column named star_rating.
  3. By choosing either of the two tables, you can scroll down to the Schema section and confirm that they contain a star_rating column. (A programmatic version of this keyword search is sketched after these steps.)
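The catalog that Lake Formation searches is the AWS Glue Data Catalog, so you can run a similar keyword search programmatically. The boto3 sketch below is an illustration rather than part of this walkthrough; it assumes that the Glue SearchTables API surfaces the same tables your Lake Formation permissions allow you to see, and that you replace the Region with your own.

import boto3

# Region is an assumption; use the Region that hosts your data lake.
glue = boto3.client("glue", region_name="us-east-1")

# Search the Data Catalog for tables whose metadata mentions "star_rating".
response = glue.search_tables(SearchText="star_rating")

for table in response["TableList"]:
    print(table["DatabaseName"], table["Name"])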

Search by tag: attribute

Next, search across your data lake and filter results by metadata tags and their attribute value.

  1. To refresh the list of tables, under Data catalog, choose Tables.
  2. From the search bar, type department: research, and press Enter. Now that you have applied the filter, you should see only the trip_data table, because it is the only table with the value ‘research’ in its ‘department’ table property.
  3. Select the trip_data table. Under Table details, you can see the tag: attribute of department | research listed under Table properties.

Multiple filter searches

Finally, try searching across your entire data lake using multiple filters at one time.

  1. To refresh the list of tables, under Data catalog, choose Tables.
  2. In the search bar, choose Location, type S3, and press Enter. For this post, all of your catalog tables are in S3, so all three tables display.
  3. In the search bar, choose Classification, type parquet, and press Enter. You should see only the amazon_reviews tables because they are the only tables stored in S3 in Parquet format.
  4. Choose either of the displayed amazon_reviews tables. Under Table details, you can see that the following is true:
  • Location: S3
  • Classification: parquet

Metadata search results restricted by column permissions

The metadata search capabilities return results based on the permissions specified within Lake Formation. If a user or a role does not have permission to a particular database, table, or column, that element doesn’t appear in that user’s search results.

To demonstrate this, first create an IAM user, dataResearcher, with AWS Management Console access. Make sure to store the password somewhere safe.

To simplify this post, attach the AdministratorAccess policy to the user. This policy grants full access to your AWS account, which is overly permissive. I recommend that you either remove this user after completing the post, or remove this policy, and enable multi-factor authentication (MFA). For more information, see Creating an IAM user in the console.

In Part 1 of this series, you allowed Everyone to view the tables that the AWS Glue crawlers created. Now, revoke those permissions for the ny-taxi database.

  1. In the Lake Formation console, under Permissions, choose Data permissions.
  2. Scroll down or search until you see the Everyone record for the trip_data table.
  3. Select the record, choose Revoke, and then choose Revoke again to confirm.

Now, your dataResearcher IAM user cannot see the ny-taxi database or the trip_data table. Resolve this issue by setting up Lake Formation permissions.

  1. Under Permissions, choose Data permissions, Grant.
  2. Select the dataResearcher user, the ny-taxi database, and the trip_data table.
  3. Under Table permissions, check Select and choose Grant.
  4. Log out of the console and sign back in using the dataResearcher IAM user that you created earlier.
  5. In the Lake Formation console, choose Tables, select the trip_data table, and review its columns under the Schema section.

The dataResearcher user currently has visibility across all of these columns. However, you don’t want to allow this user to see the pickup or drop off locations, as those are potential privacy risks. Remove these columns from the dataResearcher user’s permissions.

  1. Log out of the dataResearcher user and log back in with your administrative account.
  2. In the Lake Formation console, under Permissions, choose Data Permissions.
  3. Select the dataResearcher record and choose Revoke.
  4. On the Revoke page, under Column, choose All columns except the exclude columns and then choose the vendor_id, passenger_count, trip_distance, and total_amount columns.
  5. Under Table permissions, check Select. These settings revoke all of the dataResearcher user’s permissions on the trip_data table except those selected in the window. In other words, the dataResearcher user can only Select (view) the four selected columns. (A programmatic sketch of this grant-and-revoke flow appears after these steps.)
  6. Choose Revoke.
  7. Log back in as the dataResearcher user.
  8. In the Lake Formation console, choose Data catalog, Tables. Search for vendor_id and press Enter. The trip_data table appears in the search results.
  9. Search for pu_location_id. This returns no results because you revoked permissions to this column.
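If you prefer to script these permission changes rather than use the console, Lake Formation exposes the same operations through its API. The boto3 sketch below mirrors the grant and the column-restricted revoke performed above; the account ID in the principal ARN is a placeholder, and the snippet is meant to illustrate the API shape rather than serve as a drop-in script.

import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

# Placeholder ARN; replace with the ARN of your dataResearcher user.
PRINCIPAL = {"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:user/dataResearcher"}

# Grant SELECT on the whole trip_data table (the earlier grant step).
lakeformation.grant_permissions(
    Principal=PRINCIPAL,
    Resource={"Table": {"DatabaseName": "ny-taxi", "Name": "trip_data"}},
    Permissions=["SELECT"],
)

# Revoke SELECT on every column except the four the user may query,
# which leaves the user able to select only those columns.
lakeformation.revoke_permissions(
    Principal=PRINCIPAL,
    Resource={
        "TableWithColumns": {
            "DatabaseName": "ny-taxi",
            "Name": "trip_data",
            "ColumnWildcard": {
                "ExcludedColumnNames": [
                    "vendor_id",
                    "passenger_count",
                    "trip_distance",
                    "total_amount",
                ]
            },
        }
    },
    Permissions=["SELECT"],
)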

Conclusion

Congratulations: You have learned how to use the metadata search capabilities of Lake Formation. By defining specific user permissions, Lake Formation allowed you to grant and revoke access to metadata in the Data Catalog as well as the underlying data stored in S3. Therefore, you can discover your data sources across your entire AWS environment using a single pane of glass. To learn more, see AWS Lake Formation.

About the Authors

Julia Soscia is a solutions architect at Amazon Web Services based out of New York City. Her main focus is to help customers create well-architected environments on the AWS cloud platform. She is an experienced data analyst with a focus in Big Data and Analytics.

Eric Weinberg is a systems development engineer on the AWS Envision Engineering team. He has 15 years of experience building and designing software applications.

Francesco Marelli is a senior solutions architect at Amazon Web Services. He has more than twenty years experience in Analytics and Data Management.

Mat Werber is a solutions architect on the AWS Community SA Team. He is responsible for providing architectural guidance across the full AWS stack with a focus on Serverless, Redshift, DynamoDB, and RDS. He also has an audit background in IT governance, risk, and controls.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 1

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-1/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


Amazon Pinpoint helps you create customer-centric engagement experiences across the mobile, web, and other messaging channels. It also provides a variety of Key Performance Indicators (KPIs) that you can use to track the performance of your messaging programs.

You can access these KPIs through the console, or by using the Amazon Pinpoint API. In some cases, you might want to create custom dashboards that aren’t included by default, or even combine these metrics with other data. Over the next few days, we’ll discuss several different methods that you can use to create your own custom dashboards.

In this post, you’ll learn how to use the Amazon Pinpoint API to retrieve metrics, and then display them in visualizations that you create in Amazon QuickSight. This option is ideal for creating custom dashboards that highlight a specific set of metrics, or for embedding these metrics in your existing application or website.

In the next post (which we’ll post on Monday, August 19), you’ll learn how to export raw event data to an S3 bucket, and use that data to create dashboards by using QuickSight’s Super-fast, Parallel, In-memory Calculation Engine (SPICE). This option enables you to perform in-depth analyses and quickly update visualizations. It’s also cost-effective, because all of the event data is stored in an S3 bucket.

The final post (which we’ll post on Wednesday, August 21) will also discuss the process of creating visualizations from event stream data. However, in this solution, the data will be sent from Amazon Kinesis to a Redshift cluster. This option is ideal if you need to process very large volumes of event data.

Creating a QuickSight dashboard that uses specific metrics

You can use the Amazon Pinpoint API to programmatically access many of the metrics that are shown on the Analytics pages of the Amazon Pinpoint console. You can learn more about using the API to obtain specific KPIs in our recent blog post, Tracking Campaign Performance Using the Metrics APIs.

The following sections show you how to parse and store those results in Amazon S3, and then create custom dashboards by using Amazon QuickSight. The steps below are meant to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry: we’ve included links to the documentation to make things easier.

Step 1: Package the Dependencies

Lambda currently uses a version of the AWS SDK that is a few versions behind the current version. However, the ability to retrieve Pinpoint metrics programmatically is a relatively new feature. For this reason, you have to download the latest version of the SDK libraries to your computer, create a .zip archive, and then upload that archive to Lambda.

To package the dependencies

    1. Paste the following code into a text editor:
      from datetime import datetime
      import boto3
      import json
      
      AWS_REGION = "<us-east-1>"
      PROJECT_ID = "<projectId>"
      BUCKET_NAME = "<bucketName>"
      BUCKET_PREFIX = "quicksight-data"
      DATE = datetime.now()
      
      # Get today's values for the requested KPI.
      def get_kpi(kpi_name):
      
          client = boto3.client('pinpoint',region_name=AWS_REGION)
      
          response = client.get_application_date_range_kpi(
              ApplicationId=PROJECT_ID,
              EndTime=DATE.strftime("%Y-%m-%d"),
              KpiName=kpi_name,
              StartTime=DATE.strftime("%Y-%m-%d")
          )
          rows = response['ApplicationDateRangeKpiResponse']['KpiResult']['Rows'][0]['Values']
      
          # Create a JSON object that contains the values we'll use to build QuickSight visualizations.
          data = construct_json_object(rows[0]['Key'], rows[0]['Value'])
      
          # Send the data to the S3 bucket.
          write_results_to_s3(kpi_name, json.dumps(data).encode('UTF-8'))
      
      # Create the JSON object that we'll send to S3.
      def construct_json_object(kpi_name, value):
          data = {
              "applicationId": PROJECT_ID,
              "kpiName": kpi_name,
              "date": str(DATE),
              "value": value
          }
      
          return data
      
      # Send the data to the designated S3 bucket.
      def write_results_to_s3(kpi_name, data):
          # Create a file path with folders for year, month, date, and hour.
          path = (
              BUCKET_PREFIX + "/"
              + DATE.strftime("%Y") + "/"
              + DATE.strftime("%m") + "/"
              + DATE.strftime("%d") + "/"
              + DATE.strftime("%H") + "/"
              + kpi_name
          )
      
          client = boto3.client('s3')
      
          # Send the data to the S3 bucket.
          response = client.put_object(
              Bucket=BUCKET_NAME,
              Key=path,
              Body=bytes(data)
          )
      
      def lambda_handler(event, context):
          get_kpi('email-open-rate')
          get_kpi('successful-delivery-rate')
          get_kpi('unique-deliveries')

      In the preceding code, make the following changes:

      • Replace <us-east-1> with the name of the AWS Region that you use Amazon Pinpoint in.
      • Replace <projectId> with the ID of the Amazon Pinpoint project that the metrics are associated with.
      • Replace <bucketName> with the name of the Amazon S3 bucket that you want to use to store the data. For more information about creating S3 buckets, see Create a Bucket in the Amazon S3 Getting Started Guide.
      • Optionally, modify the lambda_handler function so that it calls the get_kpi function for the specific metrics that you want to retrieve.

      When you finish, save the file as retrieve_pinpoint_kpis.py.

  2. Use pip to download the latest versions of the boto3 and botocore libraries. Add these libraries to a .zip file. Also add retrieve_pinpoint_kpis.py to the .zip file. You can learn more about all of these tasks in Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide. (A minimal command sketch follows this step.)
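If you want a starting point for the packaging step, the commands below are a minimal sketch for a Linux or macOS shell. They assume that retrieve_pinpoint_kpis.py is in your current directory and that bundling the libraries at the top level of the archive is acceptable for your setup.

# Install the latest SDK libraries into a local folder.
pip install boto3 botocore --target ./package

# Zip the libraries, then add the function code to the same archive.
cd package && zip -r ../function.zip . && cd ..
zip -g function.zip retrieve_pinpoint_kpis.py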

Step 2: Set up the Lambda function

In this section, you upload the package that you created in the previous section to Lambda.

To set up the Lambda function

  1. In the Lambda console, create a new function from scratch. Choose the Python 3.7 runtime.
  2. Choose a Lambda execution role that contains the following permissions:
    • Allows the action mobiletargeting:GetApplicationDateRangeKpi for the resource arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*, where <awsRegion> is the Region where you use Amazon Pinpoint, and <yourAwsAccountId> is your AWS account number.
    • Allows the action s3:PutObject for the resource arn:aws:s3:::<my_bucket>/*, where <my_bucket> is the name of the S3 bucket where you want to store the metrics. (An example policy sketch follows these steps.)
  3. Upload the .zip file that you created in the previous section.
  4. Change the Handler value to retrieve_pinpoint_kpis.lambda_handler.
  5. Save your changes.
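If you’d rather attach those permissions with code than through the IAM console, the boto3 sketch below puts an equivalent inline policy on the execution role. The role name and policy name are hypothetical, and the placeholder ARNs mirror the ones described in step 2; treat this as an illustration, not a prescribed setup.

import json
import boto3

iam = boto3.client("iam")

# Hypothetical role name; use the execution role you chose for the function.
ROLE_NAME = "retrieve-pinpoint-kpis-role"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Lets the function read Pinpoint KPIs.
            "Effect": "Allow",
            "Action": "mobiletargeting:GetApplicationDateRangeKpi",
            "Resource": "arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*",
        },
        {
            # Lets the function write results to your S3 bucket.
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<my_bucket>/*",
        },
    ],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="pinpoint-kpis-to-s3",
    PolicyDocument=json.dumps(policy_document),
)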

Step 3: Schedule the execution of the function

At this point, the Lambda function is ready to run. The next step is to set up the trigger that will cause it to run. In this case, since we’re retrieving an entire day’s worth of data, we’ll set up a scheduled trigger that runs every day at 11:59 PM.

To set up the trigger

  1. In the Lambda console, in the Designer section, choose Add trigger.
  2. Create a new CloudWatch Events rule that uses the Schedule expression rule type.
  3. For the schedule expression, enter cron(59 23 ? * * *). (A programmatic sketch of this schedule appears after these steps.)
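You can also create this trigger programmatically. The boto3 sketch below creates the scheduled rule, permits CloudWatch Events to invoke the function, and attaches the function as the rule’s target; the rule name, statement ID, and function ARN are placeholders you would replace with your own values.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

RULE_NAME = "retrieve-pinpoint-kpis-daily"
# Placeholder ARN; replace with the ARN of the function you created in Step 2.
FUNCTION_ARN = "arn:aws:lambda:<awsRegion>:<yourAwsAccountId>:function:retrieve_pinpoint_kpis"

# Run every day at 11:59 PM UTC, matching the schedule expression above.
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(59 23 ? * * *)",
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-scheduled-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "retrieve-pinpoint-kpis", "Arn": FUNCTION_ARN}],
)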

Step 4: Create QuickSight Analyses

Once the data is populated in S3, you can start creating analyses in Amazon QuickSight. The process of creating new analyses involves a couple of tasks: creating a new data set, and creating your visualizations.

To create analyses in QuickSight
1.    In a text editor, create a new file. Paste the following code:

{
    "fileLocations": [
        {
            "URIPrefixes": [ 
                "s3://<bucketName>/quicksight-data/"          
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "JSON"
    }
}

In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the metrics data. Save the file as manifest.json.
2.    Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
3.    Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
4.    Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Amazon Prime Day 2019 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2019-powered-by-aws/

What did you buy for Prime Day? I bought a 34″ Alienware Gaming Monitor and used it to replace a pair of 25″ monitors that had served me well for the past six years.

As I have done in years past, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. You can read How AWS Powered Amazon’s Biggest Day Ever and Prime Day 2017 – Powered by AWS to learn more about how we evaluate the results of each Prime Day and use what we learn to drive improvements to our systems and processes.

This year I would like to focus on three ways that AWS helped to support record-breaking amounts of traffic and sales on Prime Day: Amazon Prime Video Infrastructure, AWS Database Infrastructure, and Amazon Compute Infrastructure. Let’s take a closer look at each one…

Amazon Prime Video Infrastructure
Amazon Prime members were able to enjoy the second Prime Day Concert (presented by Amazon Music) on July 10, 2019. Headlined by 10-time Grammy winner Taylor Swift, this live-streamed event also included performances from Dua Lipa, SZA, and Becky G.

Live-streaming an event of this magnitude and complexity to an audience in over 200 countries required a considerable amount of planning and infrastructure. Our colleagues at Amazon Prime Video used multiple AWS Media Services including AWS Elemental MediaPackage and AWS Elemental live encoders to encode and package the video stream.

The streaming setup made use of two AWS Regions, with a redundant pair of processing pipelines in each region. The pipelines delivered 1080p video at 30 fps to multiple content distribution networks (including Amazon CloudFront), and worked smoothly.

AWS Database Infrastructure
A combination of NoSQL and relational databases were used to deliver high availability and consistent performance at extreme scale during Prime Day:

Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

Amazon Aurora also supports the network of Amazon fulfillment centers. On Prime Day, 1,900 database instances processed 148 billion transactions, stored 609 terabytes of data, and transferred 306 terabytes of data.

Amazon Compute Infrastructure
Prime Day 2019 also relied on a massive, diverse collection of EC2 instances. The internal scaling metric for these instances is known as a server equivalent; Prime Day started off with 372K server equivalents and scaled up to 426K at peak.

Those EC2 instances made great use of a massive fleet of Elastic Block Store (EBS) volumes. The team added an additional 63 petabytes of storage ahead of Prime Day; the resulting fleet handled 2.1 trillion requests per day and transferred 185 petabytes of data per day.

And That’s a Wrap
These are some impressive numbers, and they show you the kind of scale that you can achieve with AWS. As you can see, scaling up for one-time (or periodic) events and then scaling back down afterward is easy and straightforward, even at world scale!

If you want to run your own world-scale event, I’d advise you to check out the blog posts that I linked above, and also be sure to read about AWS Infrastructure Event Management. My colleagues are ready (and eager) to help you to plan for your large-scale product or application launch, infrastructure migration, or marketing event.

Jeff;

Cloudflare Flags Copyright Lawsuits as Potential Liabilities Ahead of IPO

Post Syndicated from Andy original https://torrentfreak.com/cloudflare-flags-copyright-lawsuits-as-potential-liabilities-ahead-of-ipo-190816/

As a CDN and security company, Cloudflare currently serves around 20 million “Internet properties”, ranging from domains and websites through to application programming interfaces (APIs) and mobile applications.

Hundreds of those properties, potentially many more, are considered ‘pirate’ platforms by copyright groups, which has resulted in Cloudflare being sucked into copyright infringement lawsuits due to the activities of its customers.

On Thursday, Cloudflare filed to go public by submitting the required S-1 registration statement. It contains numerous warnings that copyright infringement lawsuits, both current and those that may appear in the future, could present significant issues of liability for the company.

Noting that some of Cloudflare’s customers may use its services in violation of the law, the company states that existing laws relating to the liability of service providers are “highly unsettled and in flux”, both in the United States and further afield.

“For example, we have been named as a defendant in a number of lawsuits, both in the United States and abroad, alleging copyright infringement based on content that is made available through our customers’ websites,” the filing reads.

“There can be no assurance that we will not face similar litigation in the future or that we will prevail in any litigation we may face. An adverse decision in one or more of these lawsuits could materially and adversely affect our business, results of operations, and financial condition.”

Cloudflare goes on to reference the safe harbor provisions of the DMCA, noting that they may not offer “complete protection” for the company or could even be amended in the future to its detriment.

“If we are found not to be protected by the safe harbor provisions of the DMCA, CDA [Communications Decency Act] or other similar laws, or if we are deemed subject to laws in other countries that may not have the same protections or that may impose more onerous obligations on us, we may face claims for substantial damages and our brand, reputation, and financial results may be harmed. Such claims may result in liability that exceeds our ability to pay or our insurance coverage,” Cloudflare warns.

As a global company, it’s not only US law that Cloudflare has to consider. The company references the recently-approved Copyright Directive in the EU, noting that it also has the potential to expose Cloudflare and other online platforms to liability.

As recently as last month and in advance of any claims under that particular legislation, Cloudflare experienced an adverse ruling in an Italian court. Local broadcaster RTI successfully argued that Cloudflare can be held liable if it willingly fails to act in response to copyright infringement notices. In addition, Cloudflare was ordered to terminate the accounts of several pirate sites.

Of course, it’s not uncommon for S-1 filings to contain statements that can be interpreted as impending doom, since companies are required to be frank about their business’s prospects. However, with single copyright cases often dealing with millions of dollars worth of alleged infringement, Cloudflare’s appraisal of the risks seems entirely warranted.

Cloudflare’s S-1 filing can be viewed here


[$] Reconsidering unprivileged BPF

Post Syndicated from corbet original https://lwn.net/Articles/796328/rss

The BPF virtual machine within the kernel has seen a great deal of work over the last few years; as that has happened, its use has expanded to many different kernel subsystems. One of the objectives of that work in the past has been to make it safe to allow unprivileged users to load at least some types of BPF programs into the kernel. A recent discussion has made it clear, though, that the goal of opening up BPF to unprivileged users has been abandoned as unachievable, and that further work in that direction will not be accepted by the BPF maintainer.

kdevops: a devops framework for Linux kernel development

Post Syndicated from corbet original https://lwn.net/Articles/796466/rss

Luis Chamberlain has announced the “kdevops” kernel-development framework. “I’m announcing the release of kdevops which aims at making setting up and testing the Linux kernel for any project as easy as possible. Note that setting up testing for a subsystem and testing a subsystem are two separate operations, however we strive for both. This is not a new test framework, it allows you to use existing frameworks, and set those frameworks up as easily can humanly be possible. It relies on a series of modern hip devops frameworks, it relies on ansible, vagrant and terraform, ansible roles through the Ansible Galaxy, and terraform modules.”

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/796455/rss

Security updates have been issued by Debian (freetype, libreoffice, and openjdk-7), Fedora (edk2, mariadb, mariadb-connector-c, mariadb-connector-odbc, python-django, and squirrelmail), Gentoo (chromium, cups, firefox, glibc, kconfig, libarchive, libreoffice, oracle-jdk-bin, polkit, proftpd, sqlite, wget, zeromq, and znc), openSUSE (bzip2, chromium, dosbox, evince, gpg2, icedtea-web, java-11-openjdk, java-1_8_0-openjdk, kconfig, kdelibs4, mariadb, mariadb-connector-c, nodejs8, pdns, polkit, python, subversion, and vlc), Oracle (ghostscript and kernel), Red Hat (mysql:8.0 and subversion:1.10), SUSE (389-ds, libvirt and libvirt-python, and openjpeg2), and Ubuntu (nginx).

Welcome Kim — Senior Engineering Manager

Post Syndicated from Nicole Perry original https://www.backblaze.com/blog/welcome-kim-senior-engineering-manager/

Joining to help manage our amazing, ever-growing team of engineers is Kim. Kim’s background with companies like Microsoft and Salesforce.com makes her a great addition to the team. Let’s learn a little more about Kim, shall we?

What is your Backblaze title?
Senior engineering manager.

Where are you originally from?
I was born in Vietnam and came to the United States when I was 11 years old.

What attracted you to Backblaze?
I worked with Tina Cessna back at Salesforce.com, and she told me this is a great company to work for, surrounded by smart and friendly people.

What do you expect to learn while being at Backblaze?
I expect to learn more about the backup and restore business and its technologies.

Where else have you worked?
I worked for Salesforce.com for 7 years and prior to that at Microsoft for 10 years.

Where did you go to school?
I graduated from San Jose State University with a B.S in Business Accounting.

What achievements are you most proud of?
Having two beautiful kids.

Favorite place you’ve traveled?
I have too many places. To name a few: Italy, Croatia, and Japan.

Favorite hobby?
Racquetball.

Favorite food?
Good food! I work for food.

Star Trek or Star Wars?
Star Wars.

Coke or Pepsi?
Water and coffee.

In an office full of foodies, we feel like you will fit in great here at Backblaze, Kim! Welcome aboard!

The post Welcome Kim — Senior Engineering Manager appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Software Vulnerabilities in the Boeing 787

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/software_vulner.html

Boeing left its software unprotected, and researchers have analyzed it for vulnerabilities:

At the Black Hat security conference today in Las Vegas, Santamarta, a researcher for security firm IOActive, plans to present his findings, including the details of multiple serious security flaws in the code for a component of the 787 known as a Crew Information Service/Maintenance System. The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots. Santamarta says he found a slew of memory corruption vulnerabilities in that CIS/MS, and he claims that a hacker could use those flaws as a foothold inside a restricted part of a plane’s network. An attacker could potentially pivot, Santamarta says, from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane’s safety-critical systems, including its engine, brakes, and sensors. Boeing maintains that other security barriers in the 787’s network architecture would make that progression impossible.

Santamarta admits that he doesn’t have enough visibility into the 787’s internals to know if those security barriers are circumventable. But he says his research nonetheless represents a significant step toward showing the possibility of an actual plane-hacking technique. “We don’t have a 787 to test, so we can’t assess the impact,” Santamarta says. “We’re not saying it’s doomsday, or that we can take a plane down. But we can say: This shouldn’t happen.”

Boeing denies that there’s any problem:

In a statement, Boeing said it had investigated IOActive’s claims and concluded that they don’t represent any real threat of a cyberattack. “IOActive’s scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system,” the company’s statement reads. “IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we’re disappointed in IOActive’s irresponsible presentation.”

This being Black Hat and Las Vegas, I’ll say it this way: I would bet money that Boeing is wrong. I don’t have an opinion about whether or not it’s lying.

Scratch 3 Desktop for Raspbian on Raspberry Pi

Post Syndicated from Martin O'Hanlon original https://www.raspberrypi.org/blog/scratch-3-desktop-for-raspbian-on-raspberry-pi/

You can now install and use Scratch 3 Desktop for Raspbian on your Raspberry Pi!

Scratch 3

Scratch 3 was released in January this year, and since then we and the Scratch team have put lots of work into creating an offline version for Raspberry Pi.

The new version of Scratch has a significantly improved interface and better functionality compared to previous versions. These improvements come at the cost of needing more processing power to run. Luckily, Raspberry Pi 4 has delivered just that, and with the software improvements in the newest version of Raspbian, Buster, we can now deliver a reliable Scratch 3 experience on Raspberry Pi.

Which Raspberry Pi can I use?

Scratch 3 needs at least 1GB of RAM to run, and we recommend a Raspberry Pi 4 with 2GB+ RAM. While you can run Scratch 3 on a Raspberry Pi 2, 3, 3B+, or a Raspberry Pi 4 with 1GB RAM, performance on these models is reduced, and depending on what other software you run at the same time, Scratch 3 may fail to start due to lack of memory.

The Scratch team is working to reduce the memory requirements of Scratch 3, so we will hopefully see improvements to this soon.

How to install Scratch 3

You can only install Scratch 3 on Raspbian Buster.

First, update Raspbian!

  • If you’ve yet to upgrade to Raspbian Buster, we recommend installing a fresh version of Buster onto your SD card instead of upgrading from your current version of Raspbian.
  • If you’re already using Raspbian Buster, but you’re not sure you’re running the latest version, update Buster by following this tutorial:

How to update Raspbian on your Raspberry Pi

How to update to the latest version of Raspbian on your Raspberry Pi.

Once you’re running the latest version of Buster, you can install Scratch 3 either using the Recommended Software application or apt on the terminal.

How to install Scratch 3 using the Recommended Software app

Open up the menu, click on Preferences > Recommended Software, and then select Scratch 3 and click on OK.

How to install Scratch 3 using the terminal

Open a terminal window, and type in and run the following commands:

sudo apt-get update
sudo apt-get install scratch3

What can I do with Scratch 3 and Raspberry Pi?

Scratch 3 Desktop for Raspbian comes with new extensions to allow you to control the GPIO pins and Sense HAT with Scratch code!

GPIO extension

The GPIO extension is a replacement for the existing GPIO extension in Scratch 2. Its layout and functionality are very similar, so you can use it as a drop-in replacement.

The GPIO extension gives you the flexibility to connect and control a whole host of electronic devices.

Simple Electronics extension

If you are looking to add something simple, like an LED or button controller for a game, you should find the new Simple Electronics extension easier to use than the GPIO extension. The Simple Electronics extension is the first version of a beginner-friendly extension for interacting with Raspberry Pi’s GPIO pins. Taking lessons from the implementation of gpiozero for Python, this new extension provides a simpler way of using electronic components: currently buttons and LEDs.

In this example, an LED connected to GPIO pin 17 is controlled by a button connected between pin 2 and GND.
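For comparison, and because the extension takes its lessons from gpiozero, here is roughly how the same wiring could be driven from Python with gpiozero. This is just an illustrative sketch of the equivalent behaviour, not part of the Scratch extension itself.

from gpiozero import LED, Button
from signal import pause

# Same wiring as the Scratch example: LED on GPIO 17,
# button connected between GPIO 2 and GND.
led = LED(17)
button = Button(2)

# Light the LED while the button is held down.
button.when_pressed = led.on
button.when_released = led.off

pause()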

Sense HAT extension

We’ve improved the Sense HAT extension to take advantage of new features in Scratch 3, and the updated version of the extension also introduces a number of new blocks to allow you to:

  • Sense tilting, shaking, and orientation
  • Use the joystick
  • Measure temperature, pressure, and humidity
  • Display text, characters, and patterns on the LED matrix

micro:bit and LEGO extensions

The micro:bit and LEGO extensions will become available later on Scratch 3 Desktop. This is because Scratch Link, the software which allows Scratch to talk to Bluetooth devices, is not yet available for Linux-type operating systems like Raspbian. A version of Scratch Link for Raspbian is part of our plans but, as yet, we don’t have a release date.

A round of thanks

It has been a long ambition of both the Scratch and Raspberry Pi teams to have Scratch 3 running on Raspberry Pi, and it’s amazing to see it released!

A big thank you to Raspberry Pi engineer Simon Long for building and packaging Scratch 3, and to the Scratch team for their support in getting over some of the problems we faced along the way.

The post Scratch 3 Desktop for Raspbian on Raspberry Pi appeared first on Raspberry Pi.

