Enable fine-grained permissions for Amazon QuickSight authors in AWS Lake Formation

Post Syndicated from Adnan Hasan original https://aws.amazon.com/blogs/big-data/enable-fine-grained-permissions-for-amazon-quicksight-authors-in-aws-lake-formation/

We’re excited to announce the integration of Amazon QuickSight with the AWS Lake Formation security model, which provides fine-grained access control for QuickSight authors. Data lake administrators can now use the Lake Formation console to grant QuickSight users and groups permissions to AWS Glue Data Catalog databases, tables, and Amazon Simple Storage Service (Amazon S3) buckets that are registered and managed via Lake Formation.

This new feature enhances the fine-grained access control capability previously introduced in QuickSight, which allows admins to use AWS Identity and Access Management (IAM) policies to scope down QuickSight author access to Amazon S3, Amazon Athena, Amazon Relational Database Service (Amazon RDS), and Amazon Redshift. The scope-down access is enforced by attaching IAM policies to the QuickSight user or a group in the QuickSight portal. For more information, see Introducing Amazon QuickSight fine-grained access control over Amazon S3 and Amazon Athena.

For Athena-based datasets, you’re no longer required to use IAM policies to scope down QuickSight author access to Amazon S3, or Data Catalog databases and tables. You can grant permissions directly in the Lake Formation console. An added benefit is that you can also grant column-level permissions to the QuickSight users and groups. Lake Formation handles all this for you centrally.

This feature is currently available in the QuickSight Enterprise edition in the following Regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (Oregon)

It will soon be available in all Regions where Lake Formation is available as of this post. For more information, see the Region Table.

This post compares the new fine-grained permissions model in Lake Formation to the IAM policy-based access control in QuickSight. It also provides guidance on how to migrate fine-grained permissions for QuickSight users and groups to Lake Formation.

QuickSight fine-grained permissions vs. Lake Formation permissions

In QuickSight, you can limit user or group access to AWS resources by attaching a scope-down IAM policy. If no such policies exist for a user or a group (that the user is a member of), QuickSight service role permissions determine access to the AWS resources. The following diagram illustrates how permissions work for a QuickSight user trying to create an Athena dataset.

With the Lake Formation integration, the permissions model changes slightly. The two important differences while creating an Athena dataset are:

  • Users can view the Data Catalog resources (databases and tables) that meet one of the following conditions:
    1. The IAMAllowedPrincipals group is granted Super permission to the resource in Lake Formation.
    2. The ARN of the QuickSight user or group (that the user is a member of) is explicitly granted permissions to the resource in Lake Formation.
  • If the S3 source bucket for the Data Catalog resource is registered in Lake Formation, Amazon S3 access settings in QuickSight are ignored, including scope-down IAM policies for users and groups.

The following diagram shows the change in permission model when a QuickSight user tries to create an Athena dataset.

The following sections dive into how fine-grained permissions work in QuickSight and how you can migrate the existing permissions to the Lake Formation security model.

Existing fine-grained access control in QuickSight

For this use case, a business analyst on the marketing team, lf-qs-author, created an Athena dataset Monthly Sales in QuickSight. It was built using the month_b2bsalesdata table in AWS Glue and the data in the S3 bucket b2bsalesdata.

The following screenshot shows the table details.

The following screenshot shows the dataset details.

The dataset is also shared with a QuickSight group analystgroup. See the following screenshot of the group details.

A fine-grained IAM policy enforces access to the S3 bucket b2bsalesdata for lf-qs-author and analystgroup. The following code is an example of an Amazon S3 access policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::"
        },
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::b2bsalesdata"
            ]
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::b2bsalesdata/"
            ]
        }
    ]
}

Enabling QuickSight permissions in Lake Formation

To migrate QuickSight permissions to Lake Formation, complete the following steps in the given order:

1.) Capturing the ARN for the QuickSight user and group

First, capture the QuickSight ARN for the business analyst user and marketing team group. You can use the describe-user API and the describe-group API to retrieve the user ARN and the group ARN, respectively. For example, to retrieve the ARN for the QuickSight group analystgroup, enter the following code in the AWS Command Line Interface (AWS CLI):

aws quicksight describe-group --group-name 'analystgroup' --aws-account-id 253914981264 --namespace default

Record the group ARN from the response, similar to the following code:

{
 "Status": 200,
 "Group": {
 "Arn": "arn:aws:quicksight:us-east-1:253914981264:group/default/analystgroup",
 "GroupName": "analystgroup",
 "PrincipalId": "group/d-906706bd27/3095e3ab-e901-479b-88da-92f7629b202d"
 },
 "RequestId": "504ec460-2ceb-46ca-844b-a33a46bc7080"
}

Repeat the same step to retrieve the ARN for the business analyst lf-qs-author.
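
For example, to retrieve the ARN for lf-qs-author, you can run a command like the following (a sketch; it assumes the QuickSight user name matches lf-qs-author):

aws quicksight describe-user --user-name 'lf-qs-author' --aws-account-id 253914981264 --namespace default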

2.) Granting permissions in the data lake

To grant permissions to the month_b2bsalesdata table in salesdb, complete the following steps:

  1. Sign in to the Lake Formation console as the data lake administrator.

A data lake administrator can grant any principal (IAM, QuickSight, or Active Directory) permissions to Data Catalog resources (databases and tables) or data lake locations in Amazon S3. For more information about creating a data lake administrator and the data lake security model, see AWS Lake Formation: How It Works.

  2. Choose Tables.
  3. Select month_b2bsalesdata.
  4. From the Actions drop-down menu, choose View permissions.

You see a list of principals with associated permissions for each resource type.

  5. Choose Grant.
  6. For Active Directory and Amazon QuickSight users and groups, enter the QuickSight user ARN.
  7. For Table permissions, select Select.
  8. Optionally, under Column permissions, you can grant column-level permissions to the user. This is a benefit of using Lake Formation permissions over QuickSight policies.
  9. Choose Grant.

  10. Repeat the preceding steps to grant select table permissions to analystgroup, using the ARN you recorded earlier.
  11. Select month_b2bsalesdata.
  12. From the Actions drop-down menu, choose View permissions.

The following screenshot shows the added permissions for the QuickSight user and group.
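
If you prefer the AWS CLI, you can grant the same table-level permission with a call similar to the following sketch (it assumes the database is named salesdb and reuses the group ARN recorded earlier):

aws lakeformation grant-permissions \
    --principal DataLakePrincipalIdentifier=arn:aws:quicksight:us-east-1:253914981264:group/default/analystgroup \
    --permissions "SELECT" \
    --resource '{"Table": {"DatabaseName": "salesdb", "Name": "month_b2bsalesdata"}}'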

3.) Removing IAMAllowedPrincipal group permissions

For Lake Formation permissions to take effect, you must remove the IAMAllowedPrincipal group from the month_b2bsalesdata table.

  1. Select month_b2bsalesdata.
  2. From the Actions drop-down menu, choose View permissions.
  3. Select IAMAllowedPrincipals.
  4. Choose Revoke.

  5. In the confirmation dialog, choose Revoke again.
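
A roughly equivalent AWS CLI call is sketched below (it assumes the IAMAllowedPrincipals group holds the Super permission, which corresponds to ALL in the API, and that the table lives in the salesdb database):

aws lakeformation revoke-permissions \
    --principal DataLakePrincipalIdentifier=IAM_ALLOWED_PRINCIPALS \
    --permissions "ALL" \
    --resource '{"Table": {"DatabaseName": "salesdb", "Name": "month_b2bsalesdata"}}'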

4.) Registering your S3 bucket in Lake Formation

You can now register the S3 source bucket (b2bsalesdata) in Lake Formation. Registering the S3 bucket switches Amazon S3 authorization from QuickSight scope-down policies to Lake Formation security.

  1. Choose Data lake locations.
  2. Choose Register location.
  3. For Amazon S3 path, enter the path for your source bucket (s3://b2bsalesdata).
  4. For IAM role, choose the role with permissions to that bucket.
  5. Choose Register location.
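
If you prefer the AWS CLI, the registration can be sketched as follows (the role ARN is a placeholder for a role that has access to the bucket):

aws lakeformation register-resource \
    --resource-arn arn:aws:s3:::b2bsalesdata \
    --role-arn arn:aws:iam::253914981264:role/<your-data-access-role>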

5.) Cleaning up the scope-down policies in QuickSight

You can now remove the scope-down policies for the user and group in QuickSight. To find these policies, under Security and Permissions, choose IAM policy assignments.

6.) Creating a dataset in QuickSight

To create a dataset, complete the following steps:

  1. Log in to QuickSight as a user who is a member of analystgroup (someone besides lf-qs-author).
  2. Choose Manage data.
  3. Choose New data set.
  4. Choose Athena.
  5. For the data source name, enter Marketing Data.
  6. Choose Create data source.
  7. In the list of databases, choose salesdb.
  8. Choose month_b2bsalesdata.
  9. Choose Edit/Preview data.

The following screenshot shows the details of the month_b2bsalesdata table.

You can also use custom SQL to query the data.

Conclusion

This post demonstrates how to extend the Lake Formation security model to QuickSight users and groups, which allows data lake administrators to manage Data Catalog resource permissions centrally from one console. As organizations embark on the journey to secure their data lakes with Lake Formation, the ability to centrally manage fine-grained permissions for QuickSight authors extends data governance and the enforcement of security controls to the data consumption (business intelligence) layer. You can enable these fine-grained permissions for QuickSight users and groups at the database, table, or column level, and they’re reflected in the Athena dataset in QuickSight.

Start migrating your fine-grained permissions to Lake Formation today, and leave your thoughts and questions in the comments.

 


About the Author

Adnan Hasan is a Solutions Architect with Amazon QuickSight at Amazon Web Services.

 

Enforce column-level authorization with Amazon QuickSight and AWS Lake Formation

Post Syndicated from Avijit Goswami original https://aws.amazon.com/blogs/big-data/enforce-column-level-authorization-with-amazon-quicksight-and-aws-lake-formation/

Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights and integrates seamlessly with your data lake built on Amazon Simple Storage Service (Amazon S3). QuickSight users in your organization often need access to only a subset of columns for compliance and security reasons. Without a proper way to enforce column-level security, you have to develop additional solutions, such as views, data masking, or encryption.

QuickSight accounts can now take advantage of AWS Lake Formation column-level authorization to enforce granular-level access control for their users.

Overview of solution

In this solution, you build an end-to-end data pipeline using Lake Formation to ingest data from an Amazon Aurora MySQL database to an Amazon S3 data lake and use Lake Formation to enforce column-level access control for QuickSight users.

The following diagram illustrates the architecture of this solution.

Walkthrough overview

The detailed steps in this solution include building a data lake using Lake Formation, with an Aurora MySQL database as the source and Amazon S3 as the target data lake storage. You create a workflow in Lake Formation that imports a single table from the source database to the data lake. You then use Lake Formation security features to enforce column-level security on the imported table for the QuickSight service. Finally, you use QuickSight to connect to this data lake and visualize only the columns that Lake Formation has given the QuickSight user access to.

To implement the solution, you complete the following steps:

  1. Prerequisites
  2. Creating a source database
  3. Importing a single table from the source database
    • Creating a connection to the data source
    • Creating and registering your S3 bucket
    • Creating a database in the Data Catalog and granting permissions
    • Creating and running the workflow
    • Granting Data Catalog permissions
  4. Enforcing column-level security in Lake Formation
  5. Creating visualizations in QuickSight

Prerequisites

For this walkthrough, you should have the following prerequisites:

Creating a source database

In this step, create an Aurora MySQL database cluster and use the DDLs in the following GitHub repo to create an HR schema with associated tables and sample data.

You should then see the schema you created using the MySQL monitor or your preferred SQL client. For this post, I used SQL Workbench. See the following screenshot.

Record the Aurora database JDBC endpoint information; you need it in subsequent steps.

Importing a single table from the source database

Before you complete the following steps, make sure you have set up Lake Formation and met the JDBC prerequisites.

The Lake Formation setup creates a datalake_user IAM user. You need to add the same user as a QuickSight user. For instructions, see Managing User Access Inside Amazon QuickSight. For Role, choose AUTHOR.
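
If you prefer to register the user from the AWS CLI, a call might look like the following sketch (the account ID and email address are placeholders):

aws quicksight register-user \
    --aws-account-id <your-aws-account-id> \
    --namespace default \
    --identity-type IAM \
    --iam-arn arn:aws:iam::<your-aws-account-id>:user/datalake_user \
    --user-role AUTHOR \
    --email datalake_user@example.com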

Creating a connection to the data source

After you complete the Lake Formation prerequisites, which include creating the IAM users datalake_admin and datalake_user, create a connection to your Aurora database. For instructions, see Create a Connection in AWS Glue. Provide the following information:

  • Connection name – <yourPrefix>-blog-datasource
  • Connection type – JDBC
  • Database connection parameters – JDBC URL, user name, password, VPC, subnet, and security group
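
For reference, the JDBC URL for an Aurora MySQL source typically takes the following form (the endpoint is the cluster endpoint you recorded earlier; HR is the schema created from the GitHub DDLs):

jdbc:mysql://<your-aurora-cluster-endpoint>:3306/HR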

Creating and registering your S3 bucket

In this step, you create an S3 bucket named <yourPrefix>-blog-datalake, which you use as the root location of your data lake. After you create the bucket, you need to register the Amazon S3 path. Lastly, grant data location permissions.

Creating a database in the Data Catalog and granting permissions

Create a database in the Lake Formation Data Catalog named <yourPrefix>-blog-database, which stores the metadata tables. For instructions, see Create a Database in the Data Catalog.

After you create the database, you grant data permissions on its metadata tables to the LakeFormationWorkflowRole role, which you use to run the workflows.

Creating and running the workflow

In this step, you copy the EMPLOYEES table from the source database using a Lake Formation blueprint. Provide the following information:

  • Blueprint type – Database snapshot
  • Database connection – <yourPrefix>-blog-datasource
  • Source data path – HR/EMPLOYEES
  • Target database – <yourPrefix>-blog-database
  • Target storage location – <yourPrefix>-blog-datalake
  • Workflow name – <yourPrefix>-datalake-quicksight
  • IAM role – LakeFormationWorkflowRole
  • Table prefix – blog

For instructions, see Use a Blueprint to Create a Workflow.

When the workflow is ready, you can start the workflow and check its status by choosing View graph. When the workflow is complete, you can see the employee table available in your Data Catalog under <yourPrefix>-blog-database. See the following screenshot.

You can also view the imported data using Athena, which is integrated with Lake Formation. To do so, choose View data from the Actions drop-down menu. See the following screenshot.

Granting Data Catalog permissions

In this step, you grant the IAM user datalake_user access to the Lake Formation Data Catalog. This is the same user that you added in QuickSight to create the dashboard. For Database permissions, select Create table and Alter for this use case, but you can change the permission level based on your specific requirements. For instructions, see Granting Data Catalog Permissions.
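
As a sketch, an equivalent AWS CLI call might look like the following (the account ID and prefix are placeholders):

aws lakeformation grant-permissions \
    --principal DataLakePrincipalIdentifier=arn:aws:iam::<your-aws-account-id>:user/datalake_user \
    --permissions "CREATE_TABLE" "ALTER" \
    --resource '{"Database": {"Name": "<yourPrefix>-blog-database"}}'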

When this step is complete, you see the permissions for your database <yourPrefix>-blog-database.

Enforcing column-level security in Lake Formation

Now that your table is imported into the data lake, enforce column-level security to the dataset. For this use case, you want to hide the Salary and Phone_Number columns from business intelligence QuickSight users.

  1. In the Lake Formation Data Catalog, choose Databases.
  2. From the list of databases, choose <yourPrefix>-blog-database.
  3. Choose View tables.
  4. Select blog_hr_employees.
  5. From the Actions drop-down menu, choose Grant.

  6. For Active Directory and Amazon QuickSight users and groups, provide the QuickSight user ARN.

You can find the ARN by running the following command in the AWS Command Line Interface (AWS CLI):

aws quicksight list-users --aws-account-id <your AWS account id> --namespace default --region us-east-1

  7. For Database, choose <yourPrefix>-blog-database.
  8. For Table, choose blog_hr_employees.
  9. For Columns, choose Exclude columns.
  10. For Exclude columns, choose salary and phone_number.
  11. For Table permissions, select Select.

You should receive a confirmation on the console that says Permission granted for: datalake_user to Exclude: <yourPrefix>-blog-database.blog_hr_employees.[salary, phone_number].

You can also verify that appropriate permission is reflected for the QuickSight user on the Lake Formation console by navigating to the Permissions tab and filtering for your database and table.

You can also specify column-level permissions in the AWS CLI with the following code:

aws lakeformation grant-permissions --principal DataLakePrincipalIdentifier=<QuickSight User ARN> --permissions "SELECT" --resource '{ "TableWithColumns": {"DatabaseName":"<yourPrefix>-blog-database", "Name":"blog_hr_employees", "ColumnWildcard": {"ExcludedColumnNames": ["salary", "phone_number"]}}}'  --region us-west-2 --profile datalake_admin

Creating visualizations in QuickSight

In this step, you use QuickSight to access the blog_hr_employees table in your data lake. While accessing this dataset from QuickSight, you can see that QuickSight doesn’t show the salary and phone_number columns, which you excluded from the source table in the previous step.

  1. Log in to QuickSight using the datalake_user IAM user.
  2. Choose New analysis.
  3. Choose New dataset.
  4. For the data source, choose Athena.

  5. For your data source name, enter Athena-HRDB.
  6. For Database, choose <yourPrefix>-blog-database.
  7. For Tables, select blog_hr_employees.
  8. Choose Select.

  9. Choose Import to SPICE for quicker analysis or Directly query your data.

For this use case, choose Import to SPICE. This provides faster visualization in a production setup, and you can run a scheduled refresh to make sure your dashboards are referring to the current data. For more information, see Scheduled Refresh for SPICE Data Sets on Amazon QuickSight.

When you complete the previous steps, your data is imported into SPICE and you arrive at the QuickSight visualization dashboard. You can see that SPICE has excluded the salary and phone_number fields from the table. In the following screenshot, we created a pie chart visualization to show how many employees are present in each department.

Cleaning up

To avoid incurring future charges, delete the resources you created in this walkthrough, including your S3 bucket, Aurora cluster, and other associated resources.

Conclusion

Restricting access to sensitive data to various users in a data lake is a very common challenge. In this post, we demonstrated how to use Lake Formation to enforce column-level access to QuickSight dashboard users. You can enhance security further with Athena workgroups. For more information, see Creating a Data Set Using Amazon Athena Data and Benefits of Using Workgroups.

 


About the Author

Avijit Goswami is a Sr. Startups Solutions Architect at AWS, helping startup customers become tomorrow’s enterprises. When not at work, Avijit likes to cook, travel, watch sports, and listen to music.

 

 

How the Digital Camera Transformed Our Concept of History

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history

For an inventor, the main challenge might be technical, but sometimes it’s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early.

A CCD from Fairchild was used in Kodak’s first digital camera prototype

It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor’s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device.

Sasson’s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera’s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster.

After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess—although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall’s hair looked okay, but her face was missing. She took one look and said, “Needs work.”

Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn’t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project.

Digital photography took nearly two decades to take off

While Kodak chose to withdraw from digital photography, other companies, including Sony and Fuji, continued to move ahead. After Sony introduced the Mavica, an analog electronic camera, in 1981, Kodak decided to restart its digital camera effort. During the ’80s and into the ’90s, companies made incremental improvements, releasing products that sold for astronomical prices and found limited audiences. [For a recap of these early efforts, see Tekla S. Perry’s IEEE Spectrum article, “Digital Photography: The Power of Pixels.”]

Then, in 1994 Apple unveiled the QuickTake 100, the first digital camera for under US $1,000. Manufactured by Kodak for Apple, it had a maximum resolution of 640 by 480 pixels and could only store up to eight images at that resolution on its memory card, but it was considered the breakthrough to the consumer market. The following year saw the introduction of Apple’s QuickTake 150, with JPEG image compression, and Casio’s QV10, the first digital camera with a built-in LCD screen. It was also the year that Sasson’s original patent expired.

Digital photography really came into its own as a cultural phenomenon when the Kyocera VisualPhone VP-210, the first cellphone with an embedded camera, debuted in Japan in 1999. Three years later, camera phones were introduced in the United States. The first mobile-phone cameras lacked the resolution and quality of stand-alone digital cameras, often taking distorted, fish-eye photographs. Users didn’t seem to care. Suddenly, their phones were no longer just for talking or texting. They were for capturing and sharing images.

The rise of cameras in phones inevitably led to a decline in stand-alone digital cameras, the sales of which peaked in 2012. Sadly, Kodak’s early advantage in digital photography did not prevent the company’s eventual bankruptcy, as Mark Harris recounts in his 2014 Spectrum article “The Lowballing of Kodak’s Patent Portfolio.” Although there is still a market for professional and single-lens reflex cameras, most people now rely on their smartphones for taking photographs—and so much more.

How a technology can change the course of history

The transformational nature of Sasson’s invention can’t be overstated. Experts estimate that people will take more than 1.4 trillion photographs in 2020. Compare that to 1995, the year Sasson’s patent expired. That spring, a group of historians gathered to study the results of a survey of Americans’ feelings about the past. A quarter century on, two of the survey questions stand out:

  • During the last 12 months, have you looked at photographs with family or friends?

  • During the last 12 months, have you taken any photographs or videos to preserve memories?

In the nationwide survey of nearly 1,500 people, 91 percent of respondents said they’d looked at photographs with family or friends and 83 percent said they’d taken a photograph—in the past year. If the survey were repeated today, those numbers would almost certainly be even higher. I know I’ve snapped dozens of pictures in the last week alone, most of them of my ridiculously cute puppy. Thanks to the ubiquity of high-quality smartphone cameras, cheap digital storage, and social media, we’re all taking and sharing photos all the time—last night’s Instagram-worthy dessert; a selfie with your bestie; the spot where you parked your car.

So are all of these captured moments, these personal memories, a part of history? That depends on how you define history.

For Roy Rosenzweig and David Thelen, two of the historians who led the 1995 survey, the very idea of history was in flux. At the time, pundits were criticizing Americans’ ignorance of past events, and professional historians were wringing their hands about the public’s historical illiteracy.

Instead of focusing on what people didn’t know, Rosenzweig and Thelen set out to quantify how people thought about the past. They published their results in the 1998 book The Presence of the Past: Popular Uses of History in American Life (Columbia University Press). This groundbreaking study was heralded by historians, those working within academic settings as well as those working in museums and other public-facing institutions, because it helped them to think about the public’s understanding of their field.

Little did Rosenzweig and Thelen know that the entire discipline of history was about to be disrupted by a whole host of technologies. The digital camera was just the beginning.

For example, a little over a third of the survey’s respondents said they had researched their family history or worked on a family tree. That kind of activity got a whole lot easier the following year, when Paul Brent Allen and Dan Taggart launched Ancestry.com, which is now one of the largest online genealogical databases, with 3 million subscribers and approximately 10 billion records. Researching your family tree no longer means poring over documents in the local library.

Similarly, when the survey was conducted, the Human Genome Project was still years away from mapping our DNA. Today, at-home DNA kits make it simple for anyone to order up their genetic profile. In the process, family secrets and unknown branches on those family trees are revealed, complicating the histories that families might tell about themselves.

Finally, the survey asked whether respondents had watched a movie or television show about history in the last year; four-fifths responded that they had. The survey was conducted shortly before the 1 January 1995 launch of the History Channel, the cable channel that opened the floodgates on history-themed TV. These days, streaming services let people binge-watch historical documentaries and dramas on demand.

Today, people aren’t just watching history. They’re recording it and sharing it in real time. Recall that Sasson’s MacGyvered digital camera included parts from a movie camera. In the early 2000s, cellphones with digital video recording emerged in Japan and South Korea and then spread to the rest of the world. As with the early still cameras, the initial quality of the video was poor, and memory limits kept the video clips short. But by the mid-2000s, digital video had become a standard feature on cellphones.

As these technologies become commonplace, digital photos and video are revealing injustice and brutality in stark and powerful ways. In turn, they are rewriting the official narrative of history. A short video clip taken by a bystander with a mobile phone can now carry more authority than a government report.

Maybe the best way to think about Rosenzweig and Thelen’s survey is that it captured a snapshot of public habits, just as those habits were about to change irrevocably.

Digital cameras also changed how historians conduct their research

For professional historians, the advent of digital photography has had other important implications. Lately, there’s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he’d accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum].

So let’s recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology—the cellphone and then the smartphone—and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research.

So are those 1.4 trillion digital photographs that we’ll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they’d left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record.

An abridged version of this article appears in the July 2020 print issue as “Born Digital.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

DNA Databases in the U.S. and China Are Tools of Racial Oppression

Post Syndicated from Thor Benson original https://spectrum.ieee.org/tech-talk/biomedical/ethics/dna-databases-in-china-and-the-us-are-tools-of-racial-oppression

Two major world powers, the United States and China, have both collected an enormous number of DNA samples from their citizens, the premise being that these samples will help solve crimes that might have otherwise gone unsolved. While DNA evidence can often be crucial when it comes to determining who committed a crime, researchers argue these DNA databases also pose a major threat to human rights.

In the U.S., the Federal Bureau of Investigation (FBI) has a DNA database called the Combined DNA Index System (CODIS) that currently contains over 14 million DNA profiles. This database has a disproportionately high number of profiles of black men, because black Americans are arrested five times as often as white Americans. You don’t even have to be convicted of a crime for law enforcement to take and store your DNA; you simply have to have been arrested as a suspect.

Bradley Malin, co-director of the Center for Genetic Privacy and Identity in Community Settings at Vanderbilt University, tells IEEE Spectrum that there are many issues that can arise from this database largely being composed of DNA profiles taken from people of color.

“I wouldn’t say that they are only collecting information on minorities, but when you have a skew towards the collection of information from these communities, when you solve a crime or you think you have solved a crime, then it is going to be a disproportionate number of people from the minority groups that are going to end up being implicated,” Malin says. “It’s a non-random collection of data, as an artifact, so that’s a problem. There’s clearly skew with respect to the information that they have.”

Some of the DNA in the FBI’s database is now being collected by immigration agencies that are collecting samples from undocumented immigrants at the border. Not only are we collecting a disproportionate amount of DNA from black Americans who have been arrested, we’re collecting it from immigrants who are detained while trying to come to America. Malin says this further skews the database and could cause serious problems.

“If you combine the information you’re getting on immigrant populations coming into the United States with information that the FBI already holds on minority populations, who’s being left out here? You’ve got big holes in terms of a lack of white, caucasian people within this country,” Malin says. “In the event that you have people who are suspected of a crime, the databases are going to be all about the immigrant, black, and Hispanic populations.”

Malin says immigration agencies are often separating families based on DNA because they will say someone is not part of a family if their DNA doesn’t match. That can mean people who have been adopted or live with a family will be separated from them.

Aside from the clear threat to privacy these databases represent, one of the problems with them is that they can contain contaminated samples, or samples can become contaminated, which can lead law enforcement to make wrongful arrests. Another problem is law enforcement can end up collecting DNA that is a near match to DNA contained in the database and end up harassing people they believe to be related to a criminal in order to find their suspect. Malin says there’s also no guarantee that these DNA samples will not end up being used in controversial ways we have yet to even consider.

“One of the problems you run into is scope creep,” Malin says. “Just because the way the law is currently architected says that it shouldn’t be used for other purposes doesn’t mean that that won’t happen in the future.”

As for China, a report that was published by the Australian Strategic Policy Institute in mid-June claims that China is operating the “world’s largest police-run DNA database” as part of its powerful surveillance state. Chinese authorities have collected DNA samples from possibly as many as 70 million men since 2017, and the total database is believed to contain as many as 140 million profiles. The country hopes to collect DNA from all of its male citizens, as it argues men are most likely to commit crimes.

DNA is reportedly often collected during what are represented as free physicals, and it’s also being collected from children at schools. There are reports of Chinese citizens being threatened with punishment by government officials if they refuse to give a DNA sample. Much of the DNA that’s been collected has come from Uighur Muslims, who have been oppressed by the Chinese government and infamously forced into concentration camps in the Xinjiang province.

“You have a country that has historically been known to persecute certain populations,” Malin says. “If you are not just going to persecute a population based on the extent to which they publicly say that they are a particular group, there is certainly a potential to subjugate them on a biological basis.”

James Leibold, a nonresident senior fellow at the Australian Strategic Policy Institute and one of the authors of the report on China’s DNA database, tells Spectrum that he is worried that China building up and utilizing this database could normalize this type of behavior.

“Global norms around genomic data are currently in a state of flux. China is the only country in the world conducting mass harvesting of DNA data outside a major criminal investigation,” Leibold says. “It’s the only forensic DNA database in the world to contain troves of samples from innocent civilians.”

Leibold says ethnic minorities like the Uighurs aren’t the only ones threatened by this mass DNA collection. He says the database could be used against dissidents and any other people who the government sees as a threat.

“With a full genomic map of its citizenry, Chinese authorities could track down those engaged in politically subversive acts (protestors, petitioners, etc.) or even those engaged in ‘abnormal’ or unacceptable behavior (religious groups, drug users, gamblers, prostitutes, etc.),” Leibold says. “We know the Chinese police have planted evidence in the past, and now it is conceivable that they could use planted DNA to convict ‘enemies of the state.’”

As Leibold points out, world powers like China and the U.S. have the ability to change norms in terms of what kind of behavior from a major government is considered acceptable. Thus, there are many risks to allowing these countries to normalize massive DNA databases. As often happens, what at first seems like a simple law enforcement tool can quickly become a dangerous weapon against marginalized people.

Code signing using AWS Certificate Manager Private CA and AWS Key Management Service asymmetric keys

Post Syndicated from Ram Ramani original https://aws.amazon.com/blogs/security/code-signing-aws-certificate-manager-private-ca-aws-key-management-service-asymmetric-keys/

In this post, we show you how to combine the asymmetric signing feature of the AWS Key Management Service (AWS KMS) and code-signing certificates from the AWS Certificate Manager (ACM) Private Certificate Authority (PCA) service to digitally sign any binary data blob and then verify its identity and integrity. AWS KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and with your applications running on AWS. ACM PCA provides you a highly available private certificate authority (CA) service without the upfront investment and ongoing maintenance costs of operating your own private CA. CA administrators can use ACM PCA to create a complete CA hierarchy, including online root and subordinate CAs, with no need for external CAs. Using ACM PCA, you can provision, rotate, and revoke certificates that are trusted within your organization.

Traditionally, a person’s signature helps to validate that the person signed an agreement and agreed to the terms. Signatures are a big part of our lives, from our driver’s licenses to our home mortgage documents. When a signature is requested, the person or entity requesting the signature needs to verify the validity of the signature and the integrity of the message being signed.

As the internet and cryptography research evolved, technologists found ways to carry the usefulness of signatures from the analog world to the digital world. In the digital world, public and private key cryptography and X.509 certificates can help with digital signing, verifying message integrity, and verifying signature authenticity. In simple terms, an entity—which could be a person, an organization, a device, or a server—can digitally sign a piece of data, and another entity can validate the authenticity of the signature and validate the integrity of the signed data. The data that’s being signed could be a document, a software package, or any other binary data blob.

To learn more about AWS KMS asymmetric keys and ACM PCA, see Digital signing with the new asymmetric keys feature of AWS KMS and How to host and manage an entire private certificate infrastructure in AWS.

We provide Java code snippets for each part of the process in the following steps. In addition, the complete Java code and the Maven build configuration file pom.xml are available for download from this GitHub project. The steps below illustrate the different processes involved and the associated Java code snippets. However, you need to use the GitHub project to build and run the Java code successfully.

Let’s take a look at the steps.

1. Create an asymmetric key pair

For digital signing, you need a code-signing certificate and an asymmetric key pair. In this step, you create an asymmetric key pair using AWS KMS. The following code snippet in the main method within the file Runner.java creates the asymmetric key pair within KMS in your AWS account. An asymmetric KMS key with the alias CodeSigningCMK is created.


AsymmetricCMK codeSigningCMK = AsymmetricCMK.builder()
                .withAlias(CMK_ALIAS)
                .getOrCreate();
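
For reference, a roughly equivalent pair of AWS CLI calls is sketched below; the alias matches the CodeSigningCMK alias used by the Java code, and the RSA_2048 key spec is an assumption (older AWS CLI versions use --customer-master-key-spec instead of --key-spec):

aws kms create-key --key-usage SIGN_VERIFY --key-spec RSA_2048 --description "Code-signing key"
aws kms create-alias --alias-name alias/CodeSigningCMK --target-key-id <key-id-from-create-key-output>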

2. Create a code-signing certificate

To create a code-signing certificate, you need a private CA hierarchy, which you create within the ACM PCA service. This example uses a simple CA hierarchy of one root CA and one subordinate CA under the root, because the recommendation is not to use the root CA directly to sign code-signing certificates. The certificate authorities are needed to create the code-signing certificate. The common name for the root CA certificate is root CA, and the common name for the subordinate CA certificate is subordinate CA. The following code snippet in the main method within the file Runner.java is used to create the private CA hierarchy.


PrivateCA rootPrivateCA = PrivateCA.builder()
                .withCommonName(ROOT_COMMON_NAME)
                .withType(CertificateAuthorityType.ROOT)
                .getOrCreate();

PrivateCA subordinatePrivateCA = PrivateCA.builder()
        .withIssuer(rootPrivateCA)
        .withCommonName(SUBORDINATE_COMMON_NAME)
        .withType(CertificateAuthorityType.SUBORDINATE)
        .getOrCreate();

3. Create a certificate signing request

In this step, you create a certificate signing request (CSR) for the code-signing certificate. The following code snippet in the main method within the file Runner.java is used to create the CSR. The END_ENTITY_COMMON_NAME refers to the common name parameter of the code-signing certificate.


String codeSigningCSR = codeSigningCMK.generateCSR(END_ENTITY_COMMON_NAME);

4. Sign the CSR

In this step, the code-signing CSR is signed by the subordinate CA that was generated in step 2 to create the code-signing certificate.


GetCertificateResult codeSigningCertificate = subordinatePrivateCA.issueCodeSigningCertificate(codeSigningCSR);

Note: The code-signing certificate that’s generated contains the public key of the asymmetric key pair generated in step 1.

5. Create the custom signed object

The data to be signed is a simple string: “the data I want signed”. Its binary representation is hashed and digitally signed by the asymmetric KMS private key created in step 1, and a custom signed object that contains the signature and the code-signing certificate is created.

The following code snippet in the main method within the file Runner.java is used to create the custom signed object.


CustomCodeSigningObject customCodeSigningObject = CustomCodeSigningObject.builder()
                .withAsymmetricCMK(codeSigningCMK)
                .withDataBlob(TBS_DATA.getBytes(StandardCharsets.UTF_8))
                .withCertificate(codeSigningCertificate.getCertificate())
                .build();

6. Verify the signature

The custom signed object is verified for integrity, and the root CA certificate is used to verify the chain of trust to confirm non-repudiation of the identity that produced the digital signature.

The following code snippet in the main method within the file Runner.java is used for signature verification:


String rootCACertificate = rootPrivateCA.getCertificate();
 String customCodeSigningObjectCertificateChain = codeSigningCertificate.getCertificate() + "\n" + codeSigningCertificate.getCertificateChain();

 CustomCodeSigningObject.getInstance(customCodeSigningObject.toString())
        .validate(rootCACertificate, customCodeSigningObjectCertificateChain);

During this signature validation process, the validation method shown in the code above retrieves the public key portion of the AWS KMS asymmetric key pair generated in step 1 from the code-signing certificate. This process has the advantage that credentials to access AWS KMS aren’t needed during signature validation. Any entity that has the root CA certificate loaded in its trust store can verify the signature without needing access to the AWS KMS verify API.

Note: The implementation outlined in this post is an example. It doesn’t use a certificate trust store that’s either part of a browser or part of a file system within the resident operating system of a device or a server. The trust store is placed in an instance of a Java class object for the purpose of this post. If you are planning to use this code-signing example in a production system, you must change the implementation to use a trust store on the host. To do so, you can build and distribute a secure trust store that includes the root CA certificate.

Conclusion

In this post, we showed you how a binary data blob can be digitally signed using ACM PCA and AWS KMS and how the signature can be verified using only the root CA certificate. No secret information or credentials are required to verify the signature. You can use this method to build a custom code-signing solution to address your particular use cases. The GitHub repository provides the Java code and the maven pom.xml that you can use to build and try it yourself. The README.md file in the GitHub repository shows the instructions to execute the code.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ram Ramani

Ram is a Security Solutions Architect at AWS focusing on data protection. Ram works with customers across different industry verticals to provide them with solutions that help with protecting data at rest and in transit. In prior roles, Ram built ML algorithms for video quality optimization and worked on identity and access management solutions for financial services organizations.

Author

Kyle Schultheiss

Kyle is a Senior Software Engineer on the AWS Cryptography team. He has been working on the ACM Private Certificate Authority service since its inception in 2018. In prior roles, he contributed to other AWS services such as Amazon Virtual Private Cloud, Amazon EC2, and Amazon Route 53.

Netflix Studio Engineering Overview

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/netflix-studio-engineering-overview-ed60afcfa0ce

By Steve Urban, Sridhar Seetharaman, Shilpa Motukuri, Tom Mack, Erik Strauss, Hema Kannan, CJ Barker

Netflix is revolutionizing the way a modern studio operates. Our mission in Studio Engineering is to build a unified, global, and digital studio that powers the effective production of amazing content.

Netflix produces some of the world’s most beloved and award-winning films and series, including The Irishman, The Crown, La Casa de Papel, Ozark, and Tiger King. In an effort to effectively and efficiently produce this content we are looking to improve and automate many areas of the production process. We combine our entertainment knowledge and our technical expertise to provide innovative technical solutions from the initial pitch of an idea to the moment our members hit play.

Why Does Studio Engineering Exist?

We enable Netflix to build a unified, global and digital studio that powers the effective production of amazing content.
Studio Engineering’s ‘Why’

The journey of a Netflix Original title from the moment it first comes to us as a pitch, to that press of the play button is incredibly complex. Producing great content requires a significant amount of coordination and collaboration from Netflix employees and external vendors across the various production phases. This process starts before the deal has been struck and continues all the way through launch on the service, involving people representing finance, scheduling, human resources, facilities, asset delivery, and many other business functions. In this overview, we will shed light on the complexity and magnitude of this journey and update this post with links to deeper technical blogs over time.

Content Lifecycle: Pitch, Development, Production, On-Service
Pitch-to-Play

Mission at a Glance

  • Creative pitch: Combine the best of machine learning and human intuition to help Netflix understand how a proposed title compares to other titles, estimate how many subscribers will enjoy it, and decide whether or not to produce it.
  • Business negotiations: Empower the Netflix Legal team with data to help with deal negotiations and acquisition of rights to produce and stream the content.
  • Pre-Production: Provide solutions to plan for resource needs, and discovery of people and vendors to continue expanding the scale of our productions. Any given production requires the collaboration of hundreds of people with varying expertise, so finding exactly the right people and vendors for each job is essential.
  • Production: Enable content creation from script to screen that optimizes the production process for efficiency and transparency. Free up creative resources to focus on what’s important: producing amazing and entertaining content.
  • Post-Production: Help our creative partners collaborate to refine content into their final vision with digital content logistics and orchestration.

What’s Next?

Studio Engineering will be publishing a series of articles providing business and technical insights as we further explore the details behind the journey from pitch to play. Stay tuned as we expand on each stage of the content lifecycle over the coming months!

Here are some articles related to Studio Engineering:


Netflix Studio Engineering Overview was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Using AWS ParallelCluster serverless API for AWS Batch

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-parallelcluster-serverless-api-for-aws-batch/

This post is courtesy of Dario La Porta, Senior Consultant, HPC.

This blog is a continuation of a series of posts demonstrating how to create serverless architectures to support HPC workloads run with AWS ParallelCluster.

The first post, Using AWS ParallelCluster with a serverless API, explains how to create a serverless API for the AWS ParallelCluster command line interface. The second post, Amazon API Gateway for HPC job submission, shows how to submit jobs to a cluster that uses a Slurm job scheduler through a similar serverless API. In this post, I create a serverless API of the AWS Batch command line interface inside ParallelCluster. This uses AWS ParallelCluster, Amazon API Gateway, and AWS Lambda.

The integration of ParallelCluster with AWS Batch replaces the need for third-party batch processing solutions. It also natively integrates with the AWS Cloud.

Many use cases can benefit from this approach. The financial services industry can automate the resourcing and scheduling of the jobs to accelerate decision-making and reduce cost. Life sciences companies can discover new drugs in a more efficient way.

Submitting HPC workloads through a serverless API enables additional workflows. You can extend on-premises clusters to run specific jobs on AWS’ scalable infrastructure to leverage its elasticity and scale. For example, you can create event-driven workflows that run in response to new data being stored in an S3 bucket.

Using a serverless API as described in this post can improve security by removing the need to log in to EC2 instances to use the AWS Batch CLI in AWS ParallelCluster.

Together, this class of workflow can further improve the security of your infrastructure and data. It can also help optimize researchers’ time and efficiency.

In this post, I show how to create the AWS Batch cluster using AWS ParallelCluster. I then explain how to build the serverless API used for the interaction with the cluster. Finally, I explain how to use the API to query the resources of the cluster and submit jobs.

This diagram shows the different components of the solution.

Architecture diagram

AWS ParallelCluster configuration

AWS ParallelCluster is an open source cluster management tool to deploy and manage HPC clusters in the AWS Cloud.

The same procedure, described in the Using AWS ParallelCluster with a serverless API post, is used to create the AWS Batch cluster in the new template.yml and pcluster.conf file. The template.yml file contains the required policies for the Lambda function to build the AWS Batch cluster. Be sure to modify <AWS ACCOUNT ID> and <REGION> to match the value for your account.

The pcluster.conf file contains the AWS ParallelCluster configuration to build a cluster using AWS Batch as the job scheduler. The master_subnet_id is the id of the created public subnet and the compute_subnet_id is the private one. More information about ParallelCluster configuration file options and syntax are explained in the ParallelCluster documentation.

Deploy the API with AWS SAM

The code used for this example can be downloaded from this repo. Inside the repo:

  • The sam-app folder in the aws-sample repository contains the code required to build the AWS ParallelCluster serverless API for AWS Batch.
  • sam-app/template.yml contains the policy required for the Lambda function for the creation of the AWS Batch cluster. Be sure to modify <AWS ACCOUNT ID> and <REGION> to match the values for your account.

AWS Identity and Access Management Roles in AWS ParallelCluster contains the latest version of the policy. See the ParallelClusterInstancePolicy section related to the awsbatch scheduler.

To deploy the application, run the following commands:

cd sam-app
sam build
sam deploy --guided

From here, provide parameter values for the SAM deployment wizard for your preferred Region and AWS account. After the deployment, note the outputs:

Deployment output

SAM deploying:
SAM deployment output

The API Gateway endpoint URL is used to interact with the API. It has the following format:

https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch

Interact with the AWS Batch cluster using the deployed API

The deployed pclusterbatch API requires some parameters:

  • command – the pcluster Batch command to execute. A detailed list of available commands is in the AWS ParallelCluster CLI Commands for AWS Batch page.
  • cluster_name – the name of the cluster.
  • jobid – the jobid string.
  • compute_node – parameter used to retrieve the output of the specified compute node number in an MPI job.
  • --data-binary "$(base64 /path/to/script.sh)" – parameter used to pass the job script to the API.
  • -H "additional_parameters: <param1> <param2> <...>" – used to pass additional parameters.

The cluster’s queue can be listed with the following:

$ curl --request POST -H "additional_parameters: "  "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbqueues&cluster=cluster1"

Job output
A cluster job can be submitted with the following command. The job_script.sh is an example script used for the job.

$ curl --request POST -H "additional_parameters: -jn hello" --data-binary "$(base64 /path/to/job_script.sh)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbsub&cluster=cluster1"

Job output
This command is used to check the status of the job:

$ curl --request POST -H "additional_parameters: "  "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbstat&cluster=cluster1&jobid=3d3e092d-ca12-4070-a53a-9a1ec5c98ca0"

Job output
The output of the job can be retrieved with the following:

$ curl --request POST -H "additional_parameters: "  "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbout&cluster=cluster1&jobid=3d3e092d-ca12-4070-a53a-9a1ec5c98ca0"

Job output

The following command can be used to list the cluster’s hosts:

$ curl --request POST -H "additional_parameters: "  "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbhosts&cluster=cluster1"

Job output
You can also use the API to submit MPI jobs to the AWS Batch cluster. The mpi_job_script.sh file can be used for the following three-node MPI job:

$ curl --request POST -H "additional_parameters: -n 3" --data-binary "$(base64 mpi_job_script.sh)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbsub&cluster=cluster1"

Job output
Retrieve the job output from the first node using the following:

$ curl --request POST -H "additional_parameters: "  "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbout&cluster=cluster1&jobid=085b8e31-21cc-4f8e-8ab5-bdc1aff960d9&compute_node=0"

Job output
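For a multi-node MPI job, the same awsbout call can be repeated for each node by looping over the compute_node parameter (a sketch using the job ID from the previous example and the three nodes requested above):

for node in 0 1 2; do
  echo "--- output of compute node ${node} ---"
  curl --request POST -H "additional_parameters: " \
    "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pclusterbatch?command=awsbout&cluster=cluster1&jobid=085b8e31-21cc-4f8e-8ab5-bdc1aff960d9&compute_node=${node}"
done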

Teardown

You can destroy the resources by deleting the CloudFormation stacks created during installation. Deleting a Stack on the AWS CloudFormation Console explains the required steps.

Conclusion

In this post, I show how to integrate the AWS Batch CLI provided by AWS ParallelCluster with API Gateway, and I explain the lifecycle of a job submission with AWS Batch using this API. API Gateway and Lambda run a serverless implementation of the CLI, which facilitates programmatic integration of AWS ParallelCluster with your on-premises or AWS Cloud applications.

You can also use this approach to integrate with the previous APIs developed in the Using AWS ParallelCluster with a serverless API and Amazon API Gateway for HPC job submission posts. By combining these different APIs, it is possible to create event-driven workflows for HPC. You can create scriptable workflows to extend on-premises infrastructure. You can also improve the security of HPC clusters by avoiding the need to use IAM roles and security groups that must otherwise be granted to individual users.

To learn more, read about how to use AWS ParallelCluster and AWS Batch.

Biking’s Bedazzling Boom

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/transportation/systems/bikings-bedazzing-boom

It might seem odd that, earlier this month, Stuttgart-based Bosch, a leading global supplier of automotive parts and equipment, seemed to be asking political leaders to reduce the amount of space on roadways they are allowing for cars and trucks.

This makes more sense when you realize that this call for action came from the folks at Bosch eBike Systems, a division of the company that makes electric bicycles. Their argument is simple enough: The COVID-19 pandemic has prompted many people to shift from traveling via mass transit to bicycling, and municipal authorities should respond to this change by beefing up the bike infrastructure in their cities and towns.

There’s no doubt that a tectonic shift in people’s interest in cycling is taking place. Indeed, the current situation appears to rival in ferocity the bike boom of the early 1970s, which was sparked by a variety of factors, including: the maturing into adulthood of many baby boomers who were increasingly concerned about the environment; the 1973 Arab oil embargo; and the mass production of lightweight road bikes.

While the ’70s bike boom was largely a North American affair, the current one, like the pandemic itself, is global. Detailed statistics are hard to come by, but retailers in many countries are reporting a surge of sales, for both conventional bikes and e-bikes—the latter of which may be this bike boom’s technological enabler the way lightweight road bikes were to the boom that took place 50 years ago. Dutch e-bike maker VanMoof, for example, reported a 50 percent year-over-year increase in its March sales. And that’s when many countries were still in lockdown.

Eco Compteur, a French company that sells equipment for tracking pedestrian and bicycle traffic, is documenting the current trend with direct observations. It reports bicycle use in Europe growing strongly since lockdown measures eased. And according to its measurements, in most parts of the United States, bicycle usage is up by double or even triple digits over the same time last year.

Well before Bosch’s electric-bike division went public with its urgings, local officials had been responding with ways to help riders of both regular bikes and e-bikes. In March, for example, the mayor of New York City halted the police crackdown on food-delivery workers using throttle-assisted e-bikes. (Previously, they had been treated as scofflaws and ticketed.) And in April, New York introduced a budget bill that will legalize such e-bikes statewide.

Biking in all forms is indeed getting a boost around the world, as localities create or enlarge bike lanes, accomplishing at breakneck speed what typically would have taken years. Countless cities and towns—including Boston, Berlin, and Bogota, where free e-bikes have even been provided to healthcare workers—are fast creating bike lanes to help their many new bicycle riders get around.

Maybe it’s not accurate to characterize these local improvements to biking infrastructure as countless; some people are indeed trying to keep a tally of these developments. The “Local Actions to Support Walking and Cycling During Social Distancing Dataset” has roughly 700 entries as of this writing. That dataset is the brainchild of Tabitha Combs at the University of North Carolina in Chapel Hill, who does research on transportation planning.

“That’s probably 10 percent of what’s happening in the world right now,” says Combs, who points out that one of the pandemic’s few positive side effects has been its influence on cycling. “You’ve got to get out of the house and do something,” she says. “People are rediscovering bicycling.”

The key question is whether the changes in people’s inclination to cycle to work or school or just for exercise—and the many improvements to biking infrastructure that the pandemic has sparked as a result—will endure after this public-health crisis ends. Combs says that cities in Europe appear more committed than those in the United States in this regard, with some allocating substantial funds to planning their new bike infrastructure.

Cycling is perhaps one realm where responding to the pandemic doesn’t force communities to sacrifice economically: Indeed, increasing the opportunities for people to walk and bike often “facilitates spontaneous commerce,” says Combs. And researchers at Portland State have shown that cycling infrastructure can even boost nearby home values. So lots of people should be able to agree that having the world bicycling more is an excellent way to battle the pandemic.

[$] First PHP 8 alpha released

Post Syndicated from coogle original https://lwn.net/Articles/824738/rss

The PHP project has released the first alpha of PHP 8, which is slated for general availability in November 2020. This initial test release includes many new features such as just-in-time (JIT) compilation, new constructs like Attributes, and more. One of twelve planned releases before the general availability release, it represents a feature set that is still subject to change.

Welcome to the Digital Nomad Life

Post Syndicated from Lora Maslenitsyna original https://www.backblaze.com/blog/welcome-to-the-digital-nomad-life/

In early March of this year, Backblaze made the decision to require all employees who could work from home to do so. As various shelter-in-place orders were issued, much of the country’s workforce quickly shifted to working remotely. Now, with a couple of months of work from home behind us, and the indefinite future ahead, we’re all seeing that going to work will probably look a bit different for us for some time to come.

Many people have been working from home with the assumption that they’d be back in the office by fall. While some businesses move forward with reopening, many companies are beginning to consider allowing most of their employees to work remotely on a regular basis. There’s also the potential for future shelter-in-place orders that might require us to keep working from home. All this means that remote work is becoming a new norm rather than an unexpected but short-lived trend.

If you’re someone who has just been “making it work” from home, then you probably haven’t had a moment to think about creating a productive space for yourself. Now’s the time to adjust your setup to optimize your ability to work from home.

Practical Tips for Remote Work

Some of the previous posts in our Digital Nomads series highlight people who have already optimized their remote work setup. They’re experts in working from home or in very remote locations, but many other people aren’t yet. So, we’ve put together a few tips for improving your work-from-home setup, gathered from our remote team, including advice from professionals who already had experience working remotely before this year.

Optimize Your Home Office

One of the first things you’ll want to consider is where you’re working. For many of us, our homes aren’t set up for productivity because we’re used to going into an office. At home, it might feel a bit more difficult to concentrate on work when distractions, space adjustments, and new routines need to be managed. Some people have room in their homes to build out a home office, while people with less space might want to focus on a few items that will help them feel more productive.

For many people, working means sitting at their desk for extended periods of time. That makes a comfortable chair particularly important for sustaining focus. If you liked the chair you used in your office, consider reaching out to your office administrator to find out the exact make or model. Aeron chairs are another good choice, providing ergonomic support and customizable options. Aeron chairs tend to be expensive, but Staples also offers quality chairs for a range of budgets.

You can also look at reviews of office chairs from Wirecutter to learn about the range in types of features to look for in a good chair, even if you end up picking one outside of their recommendations. For example, all you might need to make your desk chair feel most comfortable is a new set of wheels, like these SunnieDog ergonomic office chair wheels.

A chair may not be the only adjustment you need to make to your home office. Standing desks are a good option for anyone who doesn’t want to do much sitting. The most popular standing desks among our team are Jarvis, Uplift, and IKEA Bekant. Jarvis and Uplift even offer custom options.

Minimize Distractions

Many people are experiencing a lot of distractions while working from home. In the office, we don’t usually have kids running around, or pets who want to go on walks, or roommates who also have to share the new living room-turned-conference room. Adjusting your environment in a way that minimizes distractions could make all the difference in your ability to work productively from home.

Our team and the Digital Nomads we’ve profiled all recommend developing a space for work. For some, that simply means clearing the kitchen table every morning to start the day…and “breaking down” the office to end the work day and unplug.

The important thing to remember is to be mindful of the space that makes you feel most productive. That could mean creating a separate space entirely for work, claiming a corner in a room of your home as a quiet zone, or growing a jungle of house plants in your living room—do what’s best for you! Set up your space in the best way for you to be able to focus on your work.

Keep Your Data Secure

Something that’s easy to take for granted at work is data security. We imagine that the files that we use at work stay at work. But nowadays, we’re taking our work outside of the office. Working remotely means that as individuals, we gain more responsibility for the security of those files.

If this is the first time you’ve worked outside the office, you might still be getting used to applying the same security guidelines at home as you would in the office. Being outside your office network may mean you have more passwords and logins to juggle than ever before. Using a password manager like BitWarden, 1Password, or LastPass can help you stay organized while protecting your work projects. You can also read our guide to preventing ransomware attacks that could compromise all of your work data.

Be Sure to Have a Backup Plan

Speaking of protecting your data, backing up is just as important for your work files as it is for your personal files. And if you’re already using Backblaze to protect your personal data, why not ease your mind about your work files, too?

If you’re part of a team that works off of a shared drive or cloud sync service like Google Drive or Microsoft OneDrive, then having a backup of your files is all the more important considering sync doesn’t secure them. (If you’re not sure about the difference between cloud sync vs. cloud backup, you can read our handy guide, here.) For distributed teams, Backblaze Business Backup extends our Computer Backup product and adds a number of Admin controls (and other features tailored for disrupted workflows). There is no incremental charge for the extra functionality. Here’s a primer on all of the different options available to you.

Communicate with Your Team

While working from home, it’s hard to communicate with team members you don’t see face-to-face every day. Remote work emphasizes an individual environment, so it can be more difficult to find time in busy schedules for a meeting or a quick call about a project.

Thankfully, technology can bridge some of those gaps. In place of in-person meetings, most people now hold video calls. And while a comfortable setup for your home office is one thing, upgrading your video setup could make all the difference in connecting with your team over frequent video meetings. We put together a guide with expert advice from TV studio professionals to up your video conferencing game.

Many organizations also use direct messaging applications like Slack so that employees can reach each other quickly and casually without the need for email or a call. Connecting Slack with your calendar automatically shows co-workers whether you’re free to talk or busy in a meeting. It’s the virtual equivalent of seeing that someone’s stepped away from their desk.

Besides adapting to new methods of communication, we’re still dealing with some pre-existing opportunities to enhance productivity. Some of us on the Publishing team suffer from “browser tab proliferation” but we use bookmarking tools and browser extensions that help group and manage tabs. Pocket and Session Buddy are popular options that can help organize some of the content you access often or want to save for later.

Work From Anywhere

In a sense, we can all be Digital Nomads now given more opportunities to take our work remotely. In our previous posts in this series, we’ve interviewed professionals in our team, like Senior System Administrator Elliott, and professionals in other fields, like Chris, Producer and Director at Fin Films, to learn more about their experiences with working outside of a traditional office setting.

This year, a lot has changed, including the fact that there are even more people who are redefining the ways they do their jobs. We’d love to hear your best tips for working remotely in the comments below.

The post Welcome to the Digital Nomad Life appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

New Hardware Mimics Spiking Behavior of Neurons With High Efficiency

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-hardware-mimics-spiking-behavior-of-neurons-with-high-efficiency

Nothing computes more efficiently than a brain, which is why scientists are working hard to create artificial neural networks that mimic the organ as closely as possible. Conventional approaches use artificial neurons that work together to learn different tasks and analyze data; however, these artificial neurons do not have the ability to actually “fire” like real neurons, releasing bursts of electricity that connect them to other neurons in the network. The third generation of this computing tech aims to capture this real-life process more accurately – but achieving such a feat is hard to do efficiently.

Leveraging AWS Global Backbone for Data Center Migration and Global Expansion

Post Syndicated from Santiago Freitas original https://aws.amazon.com/blogs/architecture/leveraging-aws-global-backbone-for-data-center-migration-and-global-expansion/

Many companies run their applications in data centers, server rooms, or in space rented from colocation providers in multiple countries. Those companies usually have a mixture of a small number of large, central data centers, where their core systems are hosted, and several smaller, regional data centers. Their offices in those countries require access to applications running in the local data centers, usually in the same country, as well as to applications running in the remote data centers. Companies have taken the approach of establishing a self-managed, international wide area network (WAN), or contracting one as a service from a telecommunications provider, to enable connectivity between the different sites. As customers migrate workloads to AWS Regions, they need to maintain connectivity between their offices, AWS Regions, and existing on-premises data centers.

This blog post discusses architectures applicable for international data center migrations as well as to customers expanding their business to new countries. The proposed architectures enable access to both AWS and on-premises hosted applications. These architectures leverage the AWS global backbone for connectivity between customer sites in different countries and even continents.

Let’s look at a use case where a customer’s central data center, which hosts its core systems, is located in London, United Kingdom. The customer has rented space from a colocation provider in Mumbai to run applications that are required to be hosted in India. They have an office in India where users need access to the applications running in their Mumbai data center as well as to the core systems running in their London data center. These different sites are interconnected by a global WAN, as illustrated in the diagram below.

Figure 1: Initial architecture with a global WAN interconnecting customer’s sites

The customer then migrates their applications from their Mumbai data center to the AWS Mumbai Region. Users from the customer’s offices in India require access to applications running in the AWS Mumbai Region as well as to the core systems running in their London data center. To enable access to the applications hosted in the AWS Mumbai Region, the customer establishes a connection from their India offices to the AWS Mumbai Region. These connections can leverage AWS Direct Connect (DX) or an AWS Site-to-Site VPN. We also use AWS Transit Gateway (TGW), which allows customer traffic to transit through AWS infrastructure. For the customer sites using AWS Direct Connect, we attach an AWS Transit Gateway to a Direct Connect gateway (DXGW) so that customers can manage a single connection for multiple VPCs or VPNs that are in the same Region. To optimize their WAN cost, the customer leverages the AWS Transit Gateway inter-region peering capability to connect their AWS Transit Gateway in the AWS Mumbai Region to their AWS Transit Gateway in the AWS London Region. Traffic using inter-region Transit Gateway peering is always encrypted, stays on the AWS global network, and never traverses the public internet. Transit Gateway peering enables international, in this case intercontinental, communication. Once the traffic arrives at the London Region’s Transit Gateway, the customer routes it over AWS Direct Connect (or a VPN) to the central data center, where the core systems are hosted.
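As an illustration of the inter-region peering step (a sketch of mine, not commands from the post; all IDs and the CIDR block are placeholders), the peering attachment could be requested from the Mumbai side, accepted on the London side, and then referenced by a static route using the AWS CLI:

# Request a peering attachment from the Mumbai TGW (ap-south-1) to the London TGW (eu-west-2)
aws ec2 create-transit-gateway-peering-attachment \
    --region ap-south-1 \
    --transit-gateway-id <MUMBAI TGW ID> \
    --peer-transit-gateway-id <LONDON TGW ID> \
    --peer-account-id <AWS ACCOUNT ID> \
    --peer-region eu-west-2

# Accept the peering attachment on the London side
aws ec2 accept-transit-gateway-peering-attachment \
    --region eu-west-2 \
    --transit-gateway-attachment-id <PEERING ATTACHMENT ID>

# Routes over a peering attachment are static; point traffic for the London
# data center CIDR at the peering attachment
aws ec2 create-transit-gateway-route \
    --region ap-south-1 \
    --transit-gateway-route-table-id <MUMBAI TGW ROUTE TABLE ID> \
    --destination-cidr-block <LONDON DATA CENTER CIDR> \
    --transit-gateway-attachment-id <PEERING ATTACHMENT ID>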

As applications are migrated from the central data center in London to the AWS London Region, users from the India office are able to seamlessly access applications hosted in the AWS London Region and on-premises. The architecture below demonstrates the traffic between the customer sites, and also from a customer site to a local and a remote AWS Region.

Figure 2: Access from customer sites to applications in AWS regions and on-premises via AWS Global Network

As the customer expands internationally, the architecture evolves to allow access from new international offices such as in Sydney and Singapore to the other customer sites as well as to AWS regions via the AWS Global Network. Depending on the bandwidth requirements, a customer can use AWS DX to the nearest AWS region and then leverage AWS Transit Gateway inter-region peering, as demonstrated on the diagram below for the Singapore site. For sites where a VPN-based connection meets the bandwidth and user experience requirements, the customer can leverage accelerated site-to-site VPN using AWS Global Accelerator, as illustrated for the Sydney office. This architecture allows thousands of sites to be interconnected and use the AWS global network to access applications running on-premises or in AWS.

Figure 3: Global connectivity facilitated by AWS Global Network

Considerations

The following are some of the characteristics customers should consider when adopting the architectures described in this blog post.

  • There is a fixed hourly cost for TGW attachments and for VPN and DX connections.
  • There is also a variable usage-based component that depends on the amount of traffic that flows through TGW, OUT of AWS, and inter-region.
  • In comparison, a fixed price model is often offered by telecommunications providers for the entire network.
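As a rough way to frame that comparison (my simplification, not pricing guidance from this post; the rates are placeholders for the published AWS prices), the monthly cost of the AWS-based WAN can be written as a fixed term plus usage-based terms:

C_{\text{month}} \approx H \sum_i r^{\text{hourly}}_i \;+\; G_{\text{TGW}}\, r_{\text{TGW}} \;+\; G_{\text{inter-Region}}\, r_{\text{inter-Region}} \;+\; G_{\text{out}}\, r_{\text{out}}

where H is the number of hours in the month, the r^hourly terms are the hourly rates for TGW attachments and VPN and DX connections, each G is the number of gigabytes in that traffic category, and each r is the corresponding per-GB rate. A telecommunications provider's fixed-price model replaces the usage-based terms with a flat fee, so the comparison depends mainly on expected traffic volumes.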

For customers with a high number of sites in the same geographical area, consider setting up a regional WAN. This could be done with SD-WAN technologies or private WAN connections. A regional WAN interconnects the different sites, with the nearest AWS Region also connected to it. Such a design uses the AWS global network for international connectivity and a regional WAN for regional connectivity between customer sites.

Conclusion

As customers migrate their applications to AWS, they can leverage the AWS global network to optimize their WAN architecture and associated costs. Leveraging TGW inter-region peering enables customers to build architectures that facilitate data center migration as well as international business expansion, while allowing access to workloads running either on-premises or in AWS Regions. For a list of AWS Regions where TGW inter-region peering is supported, please refer to the AWS Transit Gateway FAQ.

Android Apps Stealing Facebook Credentials

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/android_apps_st.html

Google has removed 25 Android apps from its store because they steal Facebook credentials:

Before being taken down, the 25 apps were collectively downloaded more than 2.34 million times.

The malicious apps were developed by the same threat group and despite offering different features, under the hood, all the apps worked the same.

According to a report from French cyber-security firm Evina shared with ZDNet today, the apps posed as step counters, image editors, video editors, wallpaper apps, flashlight applications, file managers, and mobile games.

The apps offered a legitimate functionality, but they also contained malicious code. Evina researchers say the apps contained code that detected what app a user recently opened and had in the phone’s foreground.

No Propeller? No Problem. This Blimp Flies on Buoyancy Alone

Post Syndicated from Andrew Rae original https://spectrum.ieee.org/aerospace/aviation/no-propeller-no-problem-this-blimp-flies-on-buoyancy-alone

On a cold March night last year in Portsmouth, England, an entirely new type of aircraft flew for the first time, along a dimly lit 120-meter corridor in a cavernous building once used to build minesweepers for the Royal Navy.

This is the Phoenix, an uncrewed blimp that has no engines but propels itself forward by varying its buoyancy and its orientation. The prototype measures 15 meters in length, 10.5 meters in wingspan, and when fully loaded weighs 150 kilograms (330 pounds). It flew over the full length of the building, each flight requiring it to undulate up and down about five times.

Flying in this strange way has advantages. For one, it demands very little energy, allowing the craft to be used for long-duration missions. Also, it dispenses with whirring rotors and compressor blades and violent exhaust streams—all potentially dangerous to people or objects on the ground and even in the air. Finally, it’s cool: an airship that moves like a sea creature.

This propulsion concept has been around since 1864, when a patent for the technique, as applied to an airship, was granted to one Solomon Andrews, of New Jersey (U.S. Patent 43,449). Andrews called the ship the Aereon, and he proposed that it use hydrogen for lift, to make the ship ascend. The ship could then vent some of the hydrogen to reduce its buoyancy, allowing it to descend. A return to lighter-than-air buoyancy would then be achieved by discarding ballast carried aloft in a gondola suspended beneath the airship.

The pilot would control the ship’s attitude by walking along the length of the gondola. Walking to the front moved the center of gravity ahead of the center of buoyancy, making the nose of the airship pitch down; walking to the back would make the nose pitch up.

Andrews suggested that these two methods could be used in conjunction to propel the airship in a sinusoidal flight path. Raising the nose in ascent and lowering it in descent causes the combination of aerodynamic force with either buoyancy (when lighter than air) or with weight (when heavier than air) to have a vector component along the flight path. That component provides the thrust except at the top and bottom of the flight path, where momentum alone carries it through. The flight tests we performed were at walking pace, so the aerodynamic forces would have been very small. There will always be a component of either buoyancy or weight along the flight path.
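In rough terms (my own simplification, not equations from the article): with buoyant force B = \rho_{\text{air}} V g on displaced volume V, weight W = mg, and a flight path inclined at angle \gamma to the horizontal, the propulsive component along the flight path is approximately

T \approx (B - W)\,\sin\gamma

which is positive both while climbing lighter than air (B > W, \gamma > 0) and while descending heavier than air (B < W, \gamma < 0), so each half of the undulation contributes forward thrust.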

The method Andrews describes in his patent means that the flight had to end when the airship ran out of either hydrogen or ballast. Later, he built a second airship, which used cables to compress the gas or let it expand again, so that the airship could go up and down without having to jettison ballast. His approach was sound: The key to unlocking this idea and creating a useful aircraft is thus the ability to vary the buoyancy in a sustainable manner.

A variation on this mode of propulsion has been demonstrated successfully underwater, in remotely operated vehicles. Many of these “gliders” vary the volume of water that they displace by using compressed air to expand and contract flexible bladders. Such gliders have been used as long-distance survey vehicles that surface periodically to upload the data they’ve collected. Because water is nearly 1,000 times as dense as air, these robot submarines needn’t change the volume of the bladders very much to attain the necessary changes in buoyancy.

Aeronautical versions of this variable-buoyancy concept have been tried—the Physical Science Laboratory at New Mexico State University ran a demonstration project called Aerobody in the early 2000s—but all anyone could do was demonstrate that this odd form of propulsion works. Before now, nobody ever took advantage of the commercial possibilities that it offered for ultralong endurance applications.

The Phoenix project grew out of a small demonstration system developed by Athene Works, a British company that specializes in innovation and that’s funded by the U.K. Ministry of Defense. That system was successful enough to interest Innovate UK, a government agency dedicated to testing new ideas, and the Aerospace Technology Institute, a government-funded body that promotes transformative technology in air transport. These two organizations put up half the £3.5 million budget for the Phoenix. The rest was supplied by four private companies, five universities, and three governmental organizations devoted to high-value manufacturing.

My colleagues and I had less than four years to develop many of the constituent technologies, most of which were bespoke solutions, and then build and test the new craft. A great number of organizations participated in the project, with the Centre for Process Innovation managing the overall collaboration. I served as the lead engineer.

For up-and-down motion, the aircraft takes in and compresses air into an internal “lung,” making itself heavier than air; then it releases that compressed air to again become lighter than air. Think of the aircraft as a creature that inhales and exhales as it propels itself forward.

The 15-meter-long fuselage, filled with helium to achieve buoyancy, has a teardrop shape, representing a compromise between a sphere (which would be the ideal shape for maximizing the volume of gas you can enclose with a given amount of material) and a long, thin needle (which would minimize drag). At the relatively low speeds such a craft can aspire to, it is enough that the teardrop be just streamlined enough to avoid eddy currents, which on a sphere would form when the boundary layer of air that lies next to the surface of the airship pulls away from it. With our teardrop, the only drag comes from the friction of the air as it flows smoothly over the surface.

The skin is made of Vectran [PDF], a fiber that’s strong enough to withstand the internal pressure and sufficiently closely knit that, together with a thermoplastic polyurethane coating, it can seal the helium in. The point was to be strong enough to maintain the right shape, even when the airship’s internal bladder was inflating.

Whether ascending or descending, the aircraft must control its attitude. It therefore has wings with ailerons at the tips to control the aircraft’s roll. At the back is a cross-shaped structure with a pair of horizontal stabilizers incorporating elevators to control how the airship pitches up or down, and a similar pair of vertical stabilizers with rudders to control how it yaws left or right. These flight surfaces have much in common with the wood, fabric, and wire parts in the pioneering airplanes of the early 20th century.

Two carbon-fiber spars span the wings, giving them strength. Airfoil-shaped ribs are distributed along the spars, each made up of foam sandwiched between carbon fiber. A thin skin wraps around this skeleton to give the wing its shape. We designed the horizontal and vertical tail sections to be identical to one another and to the outer panels of the wings. Thus, we were able to limit the types of parts, making the craft easier to construct and repair.

An onboard power system supplies the electricity needed to operate the pumps and valves used to inflate and deflate the inner bladder. It also energizes the various actuators needed to adjust the flight-control surfaces and keeps the craft’s autonomous flight-control system functioning. A rechargeable lithium-ion battery with a capacity of 3 kilowatt-hours meets those requirements in darkness. During daylight hours, arrays of flexible solar cells (most of them on the upper surfaces of the wings, the rest on the upper surface of the horizontal tail) recharge that battery. We confirmed through ground tests outdoors in the sun that these solar cells could simultaneously power all of the aircraft’s systems and recharge the battery in a reasonable amount of time, proving that the Phoenix could be entirely self-sufficient for energy.

We had envisaged also using a hydrogen fuel cell, but because of fire-safety requirements it wasn’t quite ready for the indoor flight trials. We do plan to add this second power source later, for redundancy. Also, if we were to use hydrogen as the lift gas, the fuel cell could be used to replenish any hydrogen lost through the airship’s skin.

So how well did the thing fly? For our tests, we programmed the autonomous flight-control system to follow a sinusoidal flight path by operating the valves and compressors connected to the internal bladder. In this respect, the flight-control system has more in common with a submarine’s buoyancy controls than an airplane’s flight controls.

We had to set strict altitude limits to avoid contact with the roof and floor of the building during our indoor test. In normal operation, the aircraft will be free to determine for itself the amplitude of its up-and-down motion and the length of each undulation to achieve the necessary velocity. Doing that will require some complex calculations and precisely executed commands—a far cry from the meandering backward and forward in a wicker gondola that Andrews did.

Although our experiments to date are merely testing a previously unproven concept, the Phoenix can now serve as the prototype for a commercially valuable aircraft. The next step is getting the Phoenix certified as airworthy. For that, it must pass flight trials outdoors. When we planned the project, this certification had a series of weight thresholds, with 150 kg being the upper limit for approval through the U.K. Civil Aviation Authority under a “permit to fly.” Had it been heavier, approval by the European Union Aviation Safety Agency would have been needed, and trying to obtain that was beyond our budget of both time and money. After the United Kingdom fully exits from the European Union, certification will be different.

Commercial applications for such an aircraft are not hard to imagine. A good example is as a high-altitude pseudosatellite, a craft that can be positioned at will to convey wireless signals to remote places. Existing aircraft designed to perform this role all need very big arrays of solar cells and large batteries, which add to both the weight and cost of the aircraft. Because the Phoenix needs only small arrays of solar cells on the wings and horizontal tail, it can be built for a tenth the cost of the solar e-planes that have been designed for this purpose. It is a cheap, almost disposable, alternative with a much higher ratio of payload to mass than that of alternative aircraft. And our designs for a much larger version of the Phoenix show that it should be feasible to lift a payload of 100 kg to an altitude of 20 kilometers.

We are now beginning to develop such a successor to the Phoenix. Perhaps you will see it one day, a dot in the sky, hanging motionless or languidly porpoising to a new position high above your head.

This article appears in the July 2020 print issue as “This Blimp Flies on Buoyancy Alone.”

About the Author

Andrew Rae is a professor of engineering at the University of the Highlands and Islands, in Inverness, Scotland.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/824822/rss

Security updates have been issued by Debian (coturn, drupal7, libvncserver, mailman, php5, and qemu), openSUSE (curl, graphviz, mutt, squid, tomcat, and unbound), Red Hat (chromium-browser, file, kernel, microcode_ctl, ruby, and virt:rhel), Slackware (firefox), and SUSE (mariadb-100, mutt, unzip, and xmlgraphics-batik).
