Tag Archives: desktop

N-O-D-E’s always-on networked Pi Plug

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/node-pi-plug/

N-O-D-E’s Pi Plug is a simple approach to using a Raspberry Pi Zero W as an always-on networked device without a tangle of wires.

Pi Plug 2: Turn The Pi Zero Into A Mini Server

Today I’m back with an update on the Pi Plug I made a while back. This prototype is still in the works, and is much more modular than the previous version. https://N-O-D-E.net/piplug2.html https://github.com/N-O-D-E/piplug —————- Shop: http://N-O-D-E.net/shop/ Patreon: http://patreon.com/N_O_D_E_ BTC: 17HqC7ZzmpE7E8Liuyb5WRbpwswBUgKRGZ Newsletter: http://eepurl.com/ceA-nL Music: https://archive.org/details/Fwawn-FromManToGod

The Pi Zero Power Case

In a video early last year, YouTuber N-O-D-E revealed his Pi Zero Power Case, an all-in-one always-on networked computer that fits snugly against a wall power socket.

NODE Plug Raspberry Pi Plug

The project uses an official Raspberry Pi power supply, a Zero4U USB hub, and a Raspberry Pi Zero W, and it allows completely wireless connection to a network. N-O-D-E cut the power cord and soldered its wires directly to the power input of the USB hub. The hub powers the Zero via pogo pins that connect directly to the test pads beneath.

The Power Case is a neat project, but it may be a little daunting for anyone not keen on cutting and soldering the power supply wires.

Pi Plug 2

In his overhaul of the design, N-O-D-E has created a modular reimagining of the previous always-on networked computer that sits more neatly against the wall socket and requires absolutely no soldering or hacking of physical hardware.

Pi Plug

The Pi Plug 2 uses a USB power supply alongside two custom PCBs and a Zero W. While one PCB houses a USB connector that slots directly into the power supply, two blobs of solder on the second PCB press against the test pads beneath the Zero W. When connected, the PCBs run power directly from the wall socket to the Raspberry Pi Zero W. Neat!

NODE Plug Raspberry Pi

While N-O-D-E isn’t currently selling these PCBs in his online store, all files are available on GitHub, so have a look if you want to recreate the Pi Plug.

Uses

In another video — and seriously, if you haven’t checked out N-O-D-E’s YouTube channel yet, you really should — he demonstrates a few changes that can turn your Zero into a USB dongle computer. This is a great hack if you don’t want to carry a power supply around in your pocket. As N-O-D-E explains:

Besides simply SSH’ing into the Pi, you could also easily install a remote desktop client and use the GUI. You can share your computer’s internet connection with the Pi and use it just like you would normally, but now without the need for a monitor, chargers, adapters, cables, or peripherals.

We’re keen to see how our community is hacking their Zeros and Zero Ws in order to take full advantage of the small footprint of the computer, so be sure to share your projects and ideas with us, either in the comments below or via social media.

The post N-O-D-E’s always-on networked Pi Plug appeared first on Raspberry Pi.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and more accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million series E last November, bringing the company’s total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina

Server vs Endpoint Backup — Which is Best?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-for-distributed-computing/

server and computer backup to the cloud

How common are these statements in your organization?

  • “I know I saved that file. The application must have put it somewhere outside of my documents folder.” — Mike in Marketing
  • “I was on the road and couldn’t get a reliable VPN connection. I guess that’s why my laptop wasn’t backed up.” — Sally in Sales
  • “I try to follow file policies, but I had a deadline this week and didn’t have time to copy my files to the server.” — Felicia in Finance
  • “I just did a commit of my code changes and that was when the coffee mug was knocked over onto the laptop.” — Erin in Engineering
  • “If you need a file restored from backup, contact the help desk at [email protected]. The IT department will get back to you.” — XYZ corporate intranet
  • “Why don’t employees save files on the network drive like they’re supposed to?” — Isaac in IT

If these statements are familiar, most likely you rely on file server backups to safeguard your valuable endpoint data.

The problem is, the workplace has changed. Where server backups might have fit how offices worked at one time in the past, relying solely on server backups today means you could be missing valuable endpoint data from your backups. On top of that, you likely are unnecessarily expending valuable user and IT time in attempting to secure and restore endpoint data.

Times Have Changed, and so have Effective Enterprise Backup Strategies

The ways we use computers and handle files today are vastly different from just five or ten years ago. Employees are mobile, and we no longer are limited to monolithic PC and Mac-based office suites. Cloud applications are everywhere. Company-mandated network drive policies are difficult to enforce as office practices change, devices proliferate, and organizational culture evolves. Besides, your IT staff has other things to do than babysit your employees to make sure they follow your organization’s policies for managing files.

Server Backup has its Place, but Does it Support How People Work Today?

Many organizations still rely on server backup. If your organization works primarily in centralized offices with all endpoints — likely desktops — connected directly to your network, and you maintain tight control of how employees manage their files, it still might work for you.

Your IT department probably has set network drive policies that require employees to save files in standard places that are regularly backed up to your file server. Turns out, though, that even standard applications don’t always save files where IT would like them to be. They could be in a directory or folder that’s not regularly backed up.

As employees have become more mobile, they have adopted practices that enable them to access files from different places, but these practices might not fit in with your organization’s server policies. An employee saving a file to Dropbox might be planning to copy it to an “official” location later, but whether that ever happens could be doubtful. Often people don’t realize until it’s too late that accidentally deleting a file in one sync service directory means that all copies in all locations — even the cloud — are also deleted.

Employees are under increasing demands to produce, which means that network drive policies aren’t always followed; time constraints and deadlines can cause best practices to go out the window. Users will attempt to comply with policies as best they can — and you might get 70% or even 75% effective compliance — but getting even to that level requires training, monitoring, and repeatedly reminding employees of policies they need to follow — none of which leads to a good work environment.

Even if you get to 75% compliance with network file policies, what happens if the critical file needed to close out an end-of-year financial summary isn’t one of the files backed up? The effort required for IT to get from 70% to 80% or 90% of an endpoint’s files effectively backed up could require multiple hours from your IT department, and you still might not have backed up the one critical file you need later.

Your Organization Operates on its Data — And Today That Data Exists in Multiple Locations

Users are no longer tied to one endpoint, and may use different computers in the office, at home, or traveling. The greater the number of endpoints used, the greater the chance of an accidental or malicious device loss or data corruption. The loss of the Sales VP’s laptop at the airport on her way back from meeting with major customers can affect an entire organization and require weeks to resolve.

Even with the best intentions and efforts, following policies when out of the office can be difficult or impossible. Connecting to your private network when remote most likely requires a VPN, and VPN connectivity can be challenging from the lobby Wi-Fi at the Radisson. Server restores require time from the IT staff, which can mean taking resources away from other IT priorities and a growing backlog of requests from users who need their files as soon as possible. When users are dependent on IT to get back files critical to their work, employee productivity and often deadlines are affected.

Managing Finite Server Storage Is an Ongoing Challenge

Network drive backup usually requires on-premises data storage for endpoint backups. Since it is a finite resource, allocating that storage is another burden on your IT staff. To make sure that storage isn’t exceeded, IT departments often ration storage by department and/or user — another oversight duty for IT, and even more choices required by your IT department and department heads who have to decide which files to prioritize for backing up.

Adding Backblaze Endpoint Backup Improves Business Continuity and Productivity

Having an endpoint backup strategy in place can mitigate these problems and improve user productivity, as well. A good endpoint backup service, such as Backblaze Cloud Backup, will ensure that all devices are backed up securely, automatically, without requiring any action by the user or by your IT department.

For 99% of users, no configuration is required for Backblaze Backup. Everything on the endpoint is encrypted and securely backed up to the cloud, including program configuration files and files outside of standard document folders. Even temp files are backed up, which can prove invaluable when recovering a file after a crash or other program interruption. Cloud storage is unlimited with Backblaze Backup, so there are no worries about running out of storage or rationing file backups.

The Backblaze client can be silently and remotely installed to both Macintosh and Windows clients with no user interaction. And, with Backblaze Groups, your IT staff has complete visibility into when files were last backed up. IT staff can recover any backed up file, folder, or entire computer from the admin panel, and even give file restore capability to the user, if desired, which reduces dependency on IT and time spent waiting for restores.

With over 500 petabytes of customer data stored and one million files restored every hour of every day by Backblaze customers, you know that Backblaze Backup works for its users.

You Need Data Security That Matches the Way People Work Today

Both file server and endpoint backup have their places in an organization’s data security plan, but their use and value differ. If you already are using file server backup, adding endpoint backup will make a valuable contribution to your organization by reducing workload, improving productivity, and increasing confidence that all critical files are backed up.

By guaranteeing fast and automatic backup of all endpoint data, and matching the current way organizations and people work with data, Backblaze Backup will enable you to effectively and affordably meet the data security demands of your organization.

The post Server vs Endpoint Backup — Which is Best? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Plasma 5.12.0

Post Syndicated from ris original https://lwn.net/Articles/746735/rss

KDE has released
Plasma 5.12.0. “Plasma 5.12 LTS is the second long-term support release from the Plasma 5 team. We have been working hard, focusing on speed and stability for this release. Boot time to desktop has been improved by reviewing the code for anything which blocks execution. The team has been triaging and fixing bugs in every aspect of the codebase, tidying up artwork, removing corner cases, and ensuring cross-desktop integration. For the first time, we offer our Wayland integration on long-term support, so you can be sure we will continue to provide bug fixes and improvements to the Wayland experience.”

Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.

In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples. You will then have the desired permissions in a multi-tenant environment. We also demo Amazon S3 access from the HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, defined in Active Directory: an admin user (admin1), a data engineering group (grp_data_engineering, containing the user dataeng1), and a data science group (grp_data_science, containing the user datascientist1).

To enable all these groups and users to share the EMR cluster, you need to define the following IAM roles: a user role for the admin (emrfs_auth_user_role_admin_user), group roles for the two teams (emrfs_auth_group_role_data_eng and emrfs_auth_group_role_data_sci), and a prefix role for the shared default bucket (emrfs_auth_prefix_role_default_s3_prefix).

In this case, you create a separate Amazon EC2 role that doesn’t give any permission to Amazon S3. Let’s call the role the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. Then, you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback role when the user doesn’t belong to any user/group, nor does the user try to access any listed Amazon S3 prefixes defined on the list.

Important: For all other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as the trusted entity so that it can assume other roles. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you define the role mappings in an EMR security configuration.
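A minimal sketch of the RoleMappings section used in this post might look like the following. It is illustrative rather than an exact copy: the role ARNs reuse the account ID from the trust policy example, the users, groups, and bucket prefix are the ones defined in this post, and the order of entries matters because evaluation stops at the first match.

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
          "IdentifierType": "User",
          "Identifiers": [ "admin1" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_science" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_engineering" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
          "IdentifierType": "Prefix",
          "Identifiers": [ "s3://emrfs-auth-default-bucket-demo/" ]
        }
      ]
    }
  }
}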

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role emrfs_auth_user_role_admin_user on the mapping list, which is a user role, is mapped and picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineer group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the grp_data_engineering group, the group role emrfs_auth_group_role_data_eng is assumed, and the user has proper access to Amazon S3 that is defined in the emrfs_auth_group_role_data_eng role.

Next, the third user comes, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, then the prefix mapping is in effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the EMR_EC2_RestrictedRole. This role is assumed by the EC2 instances in the cluster.

In this process, all the defined mappings are evaluated in order; the first role that matches is assumed, and the rest of the list is skipped.

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it speaks LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created the users admin1, dataeng1, and datascientist1, along with the groups grp_data_engineering and grp_data_science, and then added the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The following is an example of the EMR security configuration with a trusted Active Directory domain EMRKRB.TEST.COM and the EMRFS role mappings as we discussed earlier:
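A sketch of the authentication portion of that security configuration, assuming the standard cluster-dedicated KDC schema (the ticket lifetime and server names below are placeholders to adjust for your environment), might look like this:

{
  "AuthenticationConfiguration": {
    "KerberosConfiguration": {
      "Provider": "ClusterDedicatedKdc",
      "ClusterDedicatedKdcConfiguration": {
        "TicketLifetimeInHours": 24,
        "CrossRealmTrustConfiguration": {
          "Realm": "EMRKRB.TEST.COM",
          "Domain": "emrkrb.test.com",
          "AdminServer": "emrkrb.test.com",
          "KdcServer": "emrkrb.test.com"
        }
      }
    }
  }
}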

The EMRFS role mappings sketched earlier go into the same security configuration, under its AuthorizationConfiguration section.

We will also provide an example AWS CLI command that you can run.
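If you script this step, the authentication and authorization fragments above can be combined into a single JSON file and registered as a named security configuration. The file name below is an assumption; the configuration name matches the one referenced by the create-cluster command later in this post.

aws emr create-security-configuration --name MyKerberosConfig \
--security-configuration file://emrfs-auth-security-config.json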

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }
]

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step 1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except in the Security Options section. In Step 4: Security, under Permissions, choose Custom, and then choose the EMR_EC2_RestrictedRole that you created before.

Choose the appropriate subnets (these should meet the base requirement in order for a successful Active Directory join—see the Amazon EMR Management Guide for more details), and choose the appropriate security groups to make sure it talks to the Active Directory. Choose a key so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
--release-label emr-5.10.0 \
--instance-type m3.xlarge \
--instance-count 3 \
--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
--service-role EMR_DefaultRole \
--security-configuration MyKerberosConfig \
--configurations file://hue-config.json \
--applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
--kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the server where you run the CLI command.

After the cluster gets into the Waiting state, try connecting to the cluster over SSH using an Active Directory user name and password.

ssh -l [email protected] <EMR IP or DNS name>

Quickly run two commands to show that the Active Directory join is successful:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.

Now, you can test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineer group.

Retry the previous command to access the admin’s bucket. It should throw an Amazon S3 Access Denied exception.

When you try listing the Amazon S3 bucket that the data engineering group has access to, it triggers the group mapping.

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next we will test Apache Hive and then Apache Spark.

 

To run jobs successfully, you need to create a home directory for every user in HDFS for staging data under /user/<username>. Users can configure a step to create a home directory at cluster launch time for every user who has access to the cluster. In this example, you use Hue since Hue will create the home directory in HDFS for the user at the first login. Here Hue also needs to be integrated with the same Active Directory as explained in the example configuration described earlier.
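If you would rather create the home directories yourself (for example, as a step at cluster launch or by hand on the master node) instead of relying on Hue, commands along these lines are enough; the user name is one of the examples from this post.

sudo -u hdfs hdfs dfs -mkdir -p /user/datascientist1
sudo -u hdfs hdfs dfs -chown datascientist1 /user/datascientist1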

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.
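A simple external table is enough to exercise the mapping; a sketch of such a query (the column definitions are illustrative) follows.

CREATE EXTERNAL TABLE table1_data_eng (
  id INT,
  value STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/';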

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.

The creation is successful.

The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node as the datascientist1 user:

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try a SELECT on both tables. You will find that you can only select from the table defined in the location that your group has access to.
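For example, for a user in the data science group, queries along these lines behave as described above (the table names are the ones created earlier; the LIMIT values are arbitrary):

show tables;
select * from test2_datasci_tb limit 10;  -- allowed: the table lives in the data science bucket
select * from table1_data_eng limit 10;   -- fails with an Amazon S3 Access Denied error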

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing flexibility to configure a shared cluster for different teams to achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provides better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them with Big Data & Machine Learning solutions and best practices on the Amazon cloud.

Google Won’t Take Down ‘Pirate’ VLC With Five Million Downloads

Post Syndicated from Andy original https://torrentfreak.com/google-wont-take-down-pirate-vlc-with-five-million-downloads-180206/

VLC is the media player of choice for Internet users around the globe. Downloaded for desktop at least 2,493,000,000 times since February 2005, VLC is an absolute giant. And those figures don’t even include GNU/Linux, iOS, Android, Chrome OS or Windows Phone downloads either.

Aside from its incredible functionality, VLC (operated by the VideoLAN non-profit) has won the hearts of Internet users for other key reasons, not least its commitment to being free and open source software. While it’s true to say that VLC doesn’t cost a penny, the term ‘free’ actually relates to the General Public License (GPL) under which it’s distributed.

The GPL aims to guarantee that software under it remains ‘free’ for all current and future users. To benefit from these protections, the GPL requires people who modify and redistribute software to afford others the same freedoms by informing them of the requirement to make source code available.

Since VLC is extremely popular and just about as ‘free’ as software can get, people get extremely defensive when they perceive that a third-party is benefiting from the software without adhering to the terms of the generous GPL license. That was the case beginning a few hours ago when veteran Reddit user MartinVanBallin pointed out a piece of software on the Google Play Store.

“They took VLC, put in ads, didn’t attribute VLC or follow the open source license, and they’re using Media Player Classics icon,” MartinVanBallin wrote.

The software is called 321 Media Player and has an impressive 4.5 score from more than 101,000 reviews. Despite not mentioning VLC or the GPL, it is based completely on VLC, as the image below (and other proof) shows.

VLC Media Player 321 Media Player

TorrentFreak spoke with VideoLAN President Jean-Baptiste Kempf who confirmed that the clone is in breach of the GPL.

“The Android version of VLC is under the license GPLv3, which requires everything inside the application to be open source and sharing the source,” Kempf says.

“This clone seems to use a closed-source advertisement component (are there any that are open source?), which is a clear violation of our copyleft. Moreover, they don’t seem to share the source at all, which is also a violation.”

Perhaps the most amazing thing is the popularity of the software. According to stats provided by Google, 321 Media Player has amassed between five and ten million downloads. That’s not an insignificant amount when one considers that unlike VLC, 321 Media Player contains revenue-generating ads.

Using GPL-licensed software for commercial purposes is allowed providing the license terms are strictly adhered to. Kempf informs TF that VideoLAN doesn’t mind if this happens but in this case, the GPL is not being respected.

“A fork application which changes some things is an interesting thing, because they maybe have something to give back to our community. The application here, is just a parasite, and I think they are useless and dangerous,” Kempf says.

All that being said, turning VLC itself into adware is something the VideoLAN team is opposed to. In fact, according to questions answered by Kempf last September, the team turned down “several tens of millions of euros” to turn their media player into an ad-supported platform.

“Integrating crap, adware and spyware with VLC is not OK,” Kempf informs TF.

TorrentFreak contacted the developer of 321 Media Player for comment but at the time of publication, we were yet to receive a response. We also asked for a copy of the source code for 321 Media Player as the GPL requires, but that wasn’t forthcoming either.

In the meantime, it appears that a small army of Reddit users are trying to get something done about the ‘rogue’ app by reporting it as an “inappropriate copycat” to Google. Whether this will have any effect remains to be seen but according to Kempf, tackling these clone versions has proven extremely difficult in the past.

“We reported this application already more than three times and Google refuses to take it down,” he says.

“Our experience is that it is very difficult to take these kinds of apps down, even if they embed spyware or malware. Maybe it is because it makes money for Google.”

Finally, Kempf also points to the obviously named “Indian VLC Player” on Google Play. Another VLC clone with up to 500,000 downloads, this one appears to breach both copyright and trademark law.

“We remove applications that violate our policies, such as apps that are illegal,” a Google spokesperson informs TorrentFreak.

“We don’t comment on individual applications; you can check out our policies for more information.”

Update: The app has now been removed from Google Play

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

2018 in perspective (Libre Graphics World)

Post Syndicated from corbet original https://lwn.net/Articles/746651/rss

Here’s a look
at what’s coming on the desktop
in Libre Graphics World. “After
almost 6 years of work, the GIMP team is finalizing the next big
update. The plan is to cut a beta of v2.10 once the amount of critical bugs
falls further down: it’s currently stuck at 20, as new bugs get promoted to
blockers, while old blockers get fixed. It’s a bit of an uphill
battle.”

0-Day Flash Vulnerability Exploited In The Wild

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/02/0-day-flash-vulnerability-exploited-in-the-wild/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

0-Day Flash Vulnerability Exploited In The Wild

So another 0-day Flash vulnerability is being exploited in the wild: a previously unknown flaw, labelled CVE-2018-4878, which affects version 28.0.0.137 and earlier for both Windows and Mac (the desktop runtime), as well as basically everything using the Chrome Flash Player (Windows, Mac, Linux and Chrome OS).

The full Adobe Security Advisory can be found here:

– Security Advisory for Flash Player | APSA18-01

Adobe warned on Thursday that attackers are exploiting a previously unknown security hole in its Flash Player software to break into Microsoft Windows computers.

Read the rest of 0-Day Flash Vulnerability Exploited In The Wild now! Only available at Darknet.

ActivityPub is now a W3C recommended standard

Post Syndicated from ris original https://lwn.net/Articles/745172/rss

The Free Software Foundation blog has a guest
post
from
GNU MediaGoblin founder Christopher Lemmer Webber announcing that ActivityPub has been made an
official W3C recommended standard. “ActivityPub is a protocol for building decentralized social networking applications. It provides both a server-to-server protocol (i.e. federation) and a client-to-server protocol (for desktop and mobile applications to connect to your server). You can use the server-to-server protocol or the client-to-server protocol on their own, but one nice feature is that the designs for both are very similar. Chances are, if you’ve implemented support for one, you can get support for the other with very little extra effort! We’ve worked hard to make ActivityPub easy to understand.”

Denuvo Has Been Sold to Global Anti-Piracy Outfit Irdeto

Post Syndicated from Andy original https://torrentfreak.com/denuvo-has-been-sold-to-global-anti-piracy-outfit-irdeto-180123/

It’s fair to say that of all video games anti-piracy technologies, Denuvo is perhaps the most hated of recent times. That hatred unsurprisingly stems from both its success and complexity.

Those with knowledge of the system say it’s fiendishly difficult to defeat but in recent times, cracks have been showing. In 2017, various iterations of the anti-tamper system were defeated by several cracking groups, much to the delight of the pirate masses.

Now, however, a new development has the potential to herald a new lease of life for the Austria-based anti-piracy company. A few moments ago it was revealed that the company has been bought by Irdeto, a global anti-piracy company with considerable heritage and resources.

“Irdeto has acquired Denuvo, the world leader in gaming security, to provide anti-piracy and anti-cheat solutions for games on desktop, mobile, console and VR devices,” Irdeto said in a statement.

“Denuvo provides technology and services for game publishers and platforms, independent software vendors, e-publishers and video publishers across the globe. Current Denuvo customers include Electronic Arts, UbiSoft, Warner Bros and Lionsgate Entertainment, with protection provided for games such as Star Wars Battlefront II, Football Manager, Injustice 2 and others.”

Irdeto says that Denuvo will “continue to operate as usual” with all of its staff retained – a total of 45 across Austria, Poland, the Czech Republic, and the US. Denuvo headquarters in Salzburg, Austria, will also remain intact along with its sales operations.

“The success of any game title is dependent upon the ability of the title to operate as the publisher intended,” says Irdeto CEO Doug Lowther.

“As a result, protection of both the game itself and the gaming experience for end users is critical. Our partnership brings together decades of security expertise under one roof to better address new and evolving security threats. We are looking forward to collaborating as a team on a number of initiatives to improve our core technology and services to better serve our customers.”

Denuvo was founded relatively recently in 2013 and employs fewer than 50 people. In contrast, Irdeto’s roots go all the way back to 1969, and the company currently has almost 1,000 staff. It’s a subsidiary of South Africa-based Internet and media group Naspers, a corporate giant with dozens of notable companies under its control.

While Denuvo is perhaps best known for its anti-piracy technology, Irdeto is also placing emphasis on the company’s ability to hinder cheating in online multi-player gaming environments. This has become a hot topic recently, with several lawsuits filed in the US by companies including Blizzard and Epic.

Denuvo CEO Reinhard Blaukovitsch

“Hackers and cybercriminals in the gaming space are savvy, and always have been. It is critical to implement robust security strategies to combat the latest gaming threats and protect the investment in games. Much like the movie industry, it’s the only way to ensure that great games continue to get made,” says Denuvo CEO Reinhard Blaukovitsch.

“In joining with Irdeto, we are bringing together a unique combination of security expertise, technology and enhanced piracy services to aggressively address security challenges that customers and gamers face from hackers.”

While it seems likely that the companies have been in negotiations for some time, the timing of this announcement also coincides with negative news for Denuvo.

Yesterday it was revealed that the latest variant of its anti-tamper technology – Denuvo v4.8 – had been defeated by online cracking group CPY (Conspiracy). Version 4.8 had been protecting Sonic Forces since its release early November 2017 but the game was leaked out onto the Internet late Sunday with all protection neutralized.

Sonic Forces cracked by CPY

Irdeto has a long history of acquiring anti-piracy companies and technologies. They include Lockstream (DRM for content on mobile phones), Philips Cryptoworks (DVB conditional access system), Cloakware (various security), Entriq (media protection), BD+ (Blu-ray protection), and BayTSP (anti-piracy monitoring).

It’s also noteworthy that Irdeto supplied behind-the-scenes support in two of the largest IPTV provider raids of recent times, one focused on Spain in 2017 and more recently in Cyprus, Bulgaria, Greece and the Netherlands (1,2,3).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Zero WH: pre-soldered headers and what to do with them

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zero-wh/

If you head over to the website of your favourite Raspberry Pi Approved Reseller today, you may find the new Zero WH available to purchase. But what is it? Why is it different, and what can you do with it?

Raspberry Pi Zero WH

“If you like pre-soldered headers, and getting caught in the rain…”

Raspberry Pi Zero WH

Imagine a Raspberry Pi Zero W. Now add a professionally soldered header. Boom, that’s the Raspberry Pi Zero WH! It’s your same great-tasting Pi, with a brand-new…crust? It’s perfect for everyone who doesn’t own a soldering iron or who wants the soldering legwork done for them.

What you can do with the Zero WH

What can’t you do? Am I right?! The small size of the Zero W makes it perfect for projects with minimal wiggle-room. In such projects, some people have no need for GPIO pins — they simply solder directly to the board. However, there are many instances where you do want a header on your Zero W, for example in order to easily take advantage of the GPIO expander tool for Debian Stretch on a PC or Mac.

GPIO expander in clubs and classrooms

As Ben Nuttall explains in his blog post on the topic:

[The GPIO expander tool] is a real game-changer for Raspberry Jams, Code Clubs, CoderDojos, and schools. You can live boot the Raspberry Pi Desktop OS from a USB stick, use Linux PCs, or even install [the Pi OS] on old computers. Then you have really simple access to physical computing without full Raspberry Pi setups, and with no SD cards to configure.

Using the GPIO expander with the Raspberry Pi Zero WH decreases the setup cost for anyone interested in trying out physical computing in the classroom or at home. (And once you’ve stuck your toes in, you’ll obviously fall in love and will soon find yourself with multiple Raspberry Pi models, HATs aplenty, and an area in your home dedicated to your new adventure in Raspberry Pi. Don’t say I didn’t warn you.)

Other uses for a Zero W with a header

The GPIO expander setup is just one of a multitude of uses for a Raspberry Pi Zero W with a header. You may want the header for prototyping before you commit to soldering wires directly to a board. Or you may have a temporary build in mind for your Zero W, in which case you won’t want to commit to soldering wires to the board at all.

Raspberry Pi Zero WH

Your use case may be something else entirely — tell us in the comments below how you’d utilise a pre-soldered Raspberry Pi Zero WH in your project. The best project idea will receive ten imaginary house points of absolutely no practical use, but immense emotional value. Decide amongst yourselves who you believe should win them — I’m going to go waste a few more hours playing SLUG!

The post Zero WH: pre-soldered headers and what to do with them appeared first on Raspberry Pi.

WebTorrent Desktop Hits a Million Downloads

Post Syndicated from Ernesto original https://torrentfreak.com/webtorrent-desktop-hits-a-million-downloads-180104/

Fifteen years ago BitTorrent conquered the masses. It offered a superior way to share large video files, something that was virtually impossible at the time.

With the shift to online video streaming, BitTorrent has lost prominence in recent years. That’s a shame, since the technology offers many advantages.

This is one of the reasons why Stanford University graduate Feross Aboukhadijeh invented WebTorrent. The technology, which is supported by most modern browsers, allows users to seamlessly stream videos on the web with BitTorrent.

In the few years that it’s been around, several tools and services have been built on WebTorrent, including a dedicated desktop client. The desktop version basically serves as a torrent client that streams torrents almost instantaneously on Windows, Linux, and Mac.

Add in AirPlay, Chromecast and DLNA support and it brings these videos to any network-connected TV as well. Quite a powerful tool, as many people have discovered in recent months.

This week Feross informed TorrentFreak that WebTorrent Desktop had reached the one million download mark. That’s a major milestone for a modest project with no full-time developer. But while users seem to be happy, it’s not perfect yet.

“WebTorrent Desktop is the best torrent app in existence. Yet, the app suffers from performance issues when too many torrents are added or too many peers show up. It’s also missing important power user features like bandwidth throttling,” Feross says.

The same is true for WebTorrent itself, which the desktop version is built on. The software has been on the verge of version 1.0.0 for over two years now but needs some more work to make the final leap. This is why Feross would like to invest more time into the projects, given the right support.

Last month Feross launched a Patreon campaign to crowdfund future development of WebTorrent including the desktop version. There are dozens of open issues and a lot of plans and with proper funding, the developer can free up time to work on these.

“The goal of the campaign is to allow me to spend a few days per week addressing these issues,” Feross says, adding that all software he works on is completely free and always has been.

Feross and cat

Thus far the fundraising campaign is going well. WebTorrent’s developer has received support from dozens of people, totaling $1,730 a month through Patreon alone, and he has signed up the privacy oriented browser Brave and video site PopChest as Platinum backers.

Community-driven funding is a great way to support Open Source projects, Feross believes, and he is encouraging others to try it out as well.

“I’ve been promoting Patreon heavily within my community as a way for open source software developers to get paid for their work,” Feross says.

“The norm in the industry right now is that no one gets paid — it’s all volunteer work, even though we’re generating a lot of value for the world! Patreon is a really promising solution for software people like me.”

People who want to give WebTorrent Desktop a try can download a copy from the official site. More information on the core WebTorrent technology and its implementations is available there as well. And if you like what you see, Feross still needs a bit of help to reach his Patreon goal.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

12 B2 Power Tips for New Users

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/newbie-cloud-storage-guide/

B2 Tips for Beginners
You probably know that B2 is Backblaze’s fast and economical general purpose cloud storage, but do you know everything that you can do with it?

If you’re a B2 newbie, here are some blazing power tips to help you get the most out of B2 Cloud Storage.

If you’re a B2 expert or a developer, stay tuned. We’ll be publishing power tips for you in the near future. Enter your email address using the Join button at the top of the page and you won’t miss any upcoming blog posts.

1    Drag and Drop Files to B2

Use Backblaze’s drag-and-drop web interface to store, restore, and share B2 files.


2    Share Files You Have in B2

You can designate a B2 bucket as private or public. If the bucket is public and you’d like to share a file with others, you can create and copy a Friendly URL and paste it into an email or message.


3    Use B2 Just Like Any Other Drive

Use B2 just as if it were a drive on your computer — drag and drop files and folders, save files to it — using one of a number of integrations that let you mount B2 as a volume in your Windows or Macintosh file system (Mountain Duck, ExpanDrive, odrive). Pick the files you want to save, drop them in a desktop folder, and they are automatically saved to B2.


4    Drag and Drop To and From B2 from the Desktop, Too

Use Cyberduck, a B2 integration partner, to drag-and-drop files to and from B2 right from the Windows or Macintosh desktop.


5    Determine the Speed of your Connection to B2

You can check the speed and latency of your internet connection between your location and Backblaze’s data centers, and see how much data you could theoretically transfer in a day, at https://www.backblaze.com/speedtest/.


6    No Matter What Type of Data you Have, B2 Can Handle It

You can transfer any type or amount of data to B2 from any device that can connect to the internet, including Windows, Macintosh, Linux, servers, mobile devices, external drives, and NAS.


7    Get Your Files from B2 by Mail

You have a choice of how to receive your data from B2. You can download data directly or request that your data be shipped to you via FedEx.


8    Back Up Your Backups to B2

You can automatically back up your Apple Time Machine backup or Windows backup to a NAS and then back that up to B2 to give you both local and cloud backups for a 3-2-1 backup solution.


9    Protect Your B2 Account with Two-Factor Verification

You can (and should) protect your Backblaze account with two-factor verification (such as using an app on your smartphone), and you can use backup codes and SMS verification in case you lose access to your smartphone.


10    Preview Photos Stored on B2 from the Web

Preview your photos as thumbnails (and optionally download individual photos) in common image formats (including jpg, png, img, tiff, and gif) with the B2 web interface.


11    B2 Has Group Management, Too

Backblaze Groups works for B2, too — just like Backblaze Personal Backup and Business Backup. You can manage billing, group membership, and control access using Group Management in your Backblaze account dashboard.


12    B2 Integrations Make B2 More Powerful and Useful

There are more than 30 software and hardware integrations that make B2 more powerful. You can visit our integrations page to find a solution that works for you.

Want to Learn More About B2?

You can find more information on B2 on our website and in our help pages.

The post 12 B2 Power Tips for New Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] HarfBuzz brings professional typography to the desktop

Post Syndicated from jake original https://lwn.net/Articles/741722/rss

By their nature, low-level libraries go mostly unnoticed by users and
even some programmers. Usually, they are only noticed when something goes
wrong. However, HarfBuzz
deserves to be an exception. Not only does the adoption of HarfBuzz mean
that free
software’s ability to convert Unicode
characters to a font’s specific glyphs is as advanced as any proprietary
equivalent, but its increasing use means that professional typography can
now be done from the Linux desktop as easily as at a print shop.

Movie & TV Companies Tackle Pirate IPTV in Australia Federal Court

Post Syndicated from Andy original https://torrentfreak.com/movie-tv-companies-tackle-pirate-iptv-in-australia-federal-court-171207/

As movie and TV show piracy has migrated from the desktop towards mobile and living room-based devices, copyright holders have found the need to adapt to a new enemy.

Dealing with streaming services is now high on the agenda, with third-party Kodi addons and various Android apps posing the biggest challenge. Alongside is the much less prevalent but rapidly growing pay IPTV market, in which thousands of premium channels are delivered to homes for a relatively small fee.

In Australia, copyright holders are treating these services in much the same way as torrent sites. They feel that if they can force ISPs to block them, the problem can be mitigated. Most recently, movie and TV show giants Village Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount filed an application targeting HDSubs+, a pirate IPTV operation servicing thousands of Australians.

Filed in October, the application for the injunction targets Australia’s largest ISPs including Telstra, Optus, TPG, and Vocus, plus their subsidiaries. The movie and TV show companies want them to quickly block HDSubs+, to prevent it from reaching its audience.

HDSubs+ IPTV package
However, blocking isn’t particularly straightforward. Due to the way IPTV services are set up, a number of domains need to be blocked, including their sales platforms, EPG (electronic program guide), software (such as an Android app), updates, and sundry other services. In HDSubs+’s case, around ten domains need to be restricted, but in court today, Village Roadshow revealed that this probably won’t deal with the problem.

HDSubs+ appears to be undergoing some kind of transformation, possibly to mitigate efforts to block it in Australia. ComputerWorld reports that it is now directing subscribers to update to a new version that works in a more evasive manner.

If they agree, HDSubs+ customers are being migrated over to a service called PressPlayPlus. It works in the same way as the old system but no longer uses the domain names cited in Village Roadshow’s injunction application. This means that DNS blocks, the usual weapon of choice for local ISPs, will prove futile.

Village Roadshow says that with this in mind it may be forced to seek enhanced IP address blocking, unless it is granted a speedy hearing for its application. This, in turn, may result in the normally cooperative ISPs returning to court to argue their case.

“If that’s what you want to do, then you’ll have to amend the orders and let the parties know,” Judge John Nicholas said.

“It’s only the former [DNS blocking] that carriage service providers have agreed to in the past.”

As things stand, Village Roadshow will return to court on December 15 for a case management hearing but in the meantime, the Federal Court must deal with another IPTV-related blocking request.

In common with its Australian and US-based counterparts, Hong Kong-based broadcaster Television Broadcasts Limited (TVB) has launched a similar case asking local ISPs to block another IPTV service.

“Television Broadcasts Limited can confirm that we have commenced legal action in Australia to protect our copyright,” a TVB spokesperson told Computerworld.

TVB wants ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

Court documents list 21 URLs that maintain the services. They will all need to be blocked by DNS or, if that proves futile, by other means. Online reports suggest that there are similarities among the IPTV products listed above. A demo of the FunTV IPTV service is shown below.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

GPIO expander: access a Pi’s GPIO pins on your PC/Mac

Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/gpio-expander/

Use the GPIO pins of a Raspberry Pi Zero while running Debian Stretch on a PC or Mac with our new GPIO expander software! With this tool, you can easily access a Pi Zero’s GPIO pins from your x86 laptop without using SSH, and you can also take advantage of your x86 computer’s processing power in your physical computing projects.

A Raspberry Pi zero connected to a laptop - GPIO expander

What is this magic?

Running our x86 Stretch distribution on a PC or Mac, whether installed on the hard drive or as a live image, is a great way of taking advantage of a well-controlled and simple Linux distribution without the need for a Raspberry Pi.

The downside of not using a Pi, however, is that there aren’t any GPIO pins with which your Scratch or Python programs could communicate. This is a shame, because it means you are limited in your physical computing projects.

I was thinking about this while playing around with the Pi Zero’s USB booting capabilities, having seen people employ the Linux gadget USB mode to use the Pi Zero as an Ethernet device. It struck me that, using the udev subsystem, we could create a simple GUI application that automatically pops up when you plug a Pi Zero into your computer’s USB port. Then the Pi Zero could be programmed to turn into an Ethernet-connected computer running pigpio to provide you with remote GPIO pins.

So we went ahead and built this GPIO expander application, and your PC or Mac can now have GPIO pins which are accessible through Scratch or the GPIO Zero Python library. Note that you can only use this tool to access the Pi Zero.

You can also install the application on the Raspberry Pi. Theoretically, you could connect a number of Pi Zeros to a single Pi and (without a USB hub) use a maximum of 140 pins! But I’ve not tested this — one for you, I think…

Making the GPIO expander work

If you’re using a PC or Mac and you haven’t set up x86 Debian Stretch yet, you’ll need to do that first. An easy way to do it is to download a copy of the Stretch release from this page and image it onto a USB stick. Boot from the USB stick (on most computers, you just need to press F10 during booting and select the stick when asked), and then run Stretch directly from the USB key. You can also install it to the hard drive, but be aware that installing it will overwrite anything that was on your hard drive before.

Whether on a Mac, PC, or Pi, boot through to the Stretch desktop, open a terminal window, and install the GPIO expander application:

sudo apt install usbbootgui

Next, plug in your Raspberry Pi Zero (don’t insert an SD card), and after a few seconds the GUI will appear.

A screenshot of the GPIO expander GUI

The Raspberry Pi USB programming GUI

Select GPIO expansion board and click OK. The Pi Zero will now be programmed as a locally connected Ethernet port (if you run ifconfig, you’ll see the new interface usb0 coming up).

What’s really cool about this is that your plugged-in Pi Zero is now running pigpio, which allows you to control its GPIOs through the network interface.
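
For example, once the usb0 interface is up you can talk to the Zero’s pigpio daemon straight from Python on the host machine. This is just a sketch using the pigpio client library (the address matches the IPv6 example shown later in this post, and GPIO 17 is an arbitrary pin):

import pigpio

# Connect to the pigpio daemon running on the plugged-in Pi Zero.
pi = pigpio.pi('fe80::1%usb0')
if not pi.connected:
    raise SystemExit('Could not reach pigpiod on the Pi Zero')

pi.set_mode(17, pigpio.OUTPUT)  # configure GPIO 17 as an output
pi.write(17, 1)                 # drive it high
pi.write(17, 0)                 # and low again
pi.stop()                       # close the connection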

With Scratch 2

To utilise the pins with Scratch 2, just click on the start bar and select Programming > Scratch 2.

In Scratch, click on More Blocks, select Add an Extension, and then click Pi GPIO.

Two new blocks will be added: the first is used to set an output pin, and the second is used to read a pin’s value (it reports true if the pin is high).

This is a simple application using a Pibrella I had hanging around:

A screenshot of a Scratch 2 program - GPIO expander

With Python

This is a Python example using the GPIO Zero library to flash an LED:

$ export GPIOZERO_PIN_FACTORY=pigpio
$ export PIGPIO_ADDR=fe80::1%usb0
$ python3
>>> from gpiozero import LED
>>> led = LED(17)
>>> led.blink()

The pinout command line tool is your friend

Note that in the code above the IP address of the Pi Zero is an IPv6 address and is shortened to fe80::1%usb0, where usb0 is the network interface created by the first Pi Zero.

With pigs directly

Another option you have is to use the pigpio library and the pigs application and redirect the output to the Pi Zero network port running IPv6. To do this, you’ll first need to set an environment variable for the redirection:

$ export PIGPIO_ADDR=fe80::1%usb0
$ pigs bc2 0x8000
$ pigs bs2 0x8000

With the commands above, you should be able to flash the LED on the Pi Zero.

The secret sauce

I know there’ll be some people out there who would be interested in how we put this together. And I’m sure many people are interested in the ‘buildroot’ we created to run on the Pi Zero — after all, there are lots of things you can create if you’ve got a Pi Zero on the end of a piece of IPv6 string! For a closer look, find the build scripts for the GPIO expander here and the source code for the USB boot GUI here.

And be sure to share your projects built with the GPIO expander by tagging us on social media or posting links in the comments!

The post GPIO expander: access a Pi’s GPIO pins on your PC/Mac appeared first on Raspberry Pi.

Stretch for PCs and Macs, and a Raspbian update

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/stretch-pcs-macs-raspbian-update/

Today, we are launching the first Debian Stretch release of the Raspberry Pi Desktop for PCs and Macs, and we’re also releasing the latest version of Raspbian Stretch for your Pi.

Raspberry Pi Desktop Stretch splash screen

For PCs and Macs

When we released our custom desktop environment on Debian for PCs and Macs last year, we were slightly taken aback by how popular it turned out to be. We really only created it as a result of one of those “Wouldn’t it be cool if…” conversations we sometimes have in the office, so we were delighted by the Pi community’s reaction.

Seeing how keen people were on the x86 version, we decided that we were going to try to keep releasing it alongside Raspbian, with the ultimate aim being to make simultaneous releases of both. This proved to be tricky, particularly with the move from the Jessie version of Debian to the Stretch version this year. However, we have now finished the job of porting all the custom code in Raspbian Stretch to Debian, and so the first Debian Stretch release of the Raspberry Pi Desktop for your PC or Mac is available from today.

The new Stretch releases

As with the Jessie release, you can either run this as a live image from a DVD, USB stick, or SD card or install it as the native operating system on the hard drive of an old laptop or desktop computer. Please note that installing this software will erase anything else on the hard drive — do not install this over a machine running Windows or macOS that you still need to use for its original purpose! It is, however, safe to boot a live image on such a machine, since your hard drive will not be touched by this.

We’re also pleased to announce that we are releasing the latest version of Raspbian Stretch for your Pi today. The Pi and PC versions are largely identical: as before, there are a few applications (such as Mathematica) which are exclusive to the Pi, but the user interface, desktop, and most applications will be exactly the same.

For Raspbian, this new release is mostly bug fixes and tweaks over the previous Stretch release, but there are one or two changes you might notice.

File manager

The file manager included as part of the LXDE desktop (on which our desktop is based) is a program called PCManFM, and it’s very feature-rich; there’s not much you can’t do in it. However, having used it for a few years, we felt that it was perhaps more complex than it needed to be; the sheer number of menu options and choices made some common operations awkward. So, to try to make file management easier, we have implemented a cut-down mode for the file manager.

Raspberry Pi Desktop Stretch - file manager

Most of the changes are to do with the menus. We’ve removed a lot of options that most people are unlikely to change, and moved some other options into the Preferences screen rather than the menus. The two most common settings people tend to change — how icons are displayed and sorted — are now options on the toolbar and in a top-level menu rather than hidden away in submenus.

The sidebar now only shows a single hierarchical view of the file system, and we’ve tidied the toolbar and updated the icons to make them match our house style. We’ve removed the option for a tabbed interface, and we’ve stomped a few bugs as well.

One final change was to make it possible to rename a file just by clicking on its icon to highlight it, and then clicking on its name. This is the way renaming works on both Windows and macOS, and it’s always seemed slightly awkward that Unix desktop environments tend not to support it.

As with most of the other changes we’ve made to the desktop over the last few years, the intention is to make it simpler to use, and to ease the transition from non-Unix environments. But if you really don’t like what we’ve done and long for the old file manager, just untick the box for Display simplified user interface and menus in the Layout page of Preferences, and everything will be back the way it was!

Raspberry Pi Desktop Stretch - preferences GUI

Battery indicator for laptops

One important feature missing from the previous release was an indication of the amount of battery life. Eben runs our desktop on his Mac, and he was becoming slightly irritated by having to keep rebooting into macOS just to check whether his battery was about to die — so fixing this was a priority!

We’ve added a battery status icon to the taskbar; this shows current percentage charge, along with whether the battery is charging, discharging, or connected to the mains. When you hover over the icon with the mouse pointer, a tooltip with more details appears, including the time remaining if the battery can provide this information.

Raspberry Pi Desktop Stretch - battery indicator

While this battery monitor is mainly intended for the PC version, it also supports the first-generation pi-top — to see it, you just need to make sure that I2C is enabled in the Raspberry Pi Configuration tool. A future release will support the new second-generation pi-top.

New PC applications

We have included a couple of new applications in the PC version. One is called PiServer — this allows you to set up an operating system, such as Raspbian, on the PC which can then be shared by a number of Pi clients networked to it. It is intended to make it easy for classrooms to have multiple Pis all running exactly the same software, and for the teacher to have control over how the software is installed and used. PiServer is quite a clever piece of software, and it’ll be covered in more detail in another blog post in December.

We’ve also added an application which allows you to easily use the GPIO pins of a Pi Zero connected via USB to a PC in applications using Scratch or Python. This makes it possible to run the same physical computing projects on the PC as you do on a Pi! Again, we’ll tell you more in a separate blog post this month.

Both of these applications are included as standard on the PC image, but not on the Raspbian image. You can run them on a Pi if you want — both can be installed from apt.

How to get the new versions

New images for both Raspbian and Debian versions are available from the Downloads page.

It is possible to update existing installations of both Raspbian and Debian versions. For Raspbian, this is easy: just open a terminal window and enter

sudo apt-get update
sudo apt-get dist-upgrade

Updating Raspbian on your Raspberry Pi

How to update to the latest version of Raspbian on your Raspberry Pi. Download Raspbian here: More information on the latest version of Raspbian: Buy a Raspberry Pi:

It is slightly more complex for the PC version, as the previous release was based around Debian Jessie. You will need to edit the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list, using sudo to do so. In both files, change every occurrence of the word “jessie” to “stretch”. When that’s done, do the following:

sudo apt-get update 
sudo dpkg --force-depends -r libwebkitgtk-3.0-common
sudo apt-get -f install
sudo apt-get dist-upgrade
sudo apt-get install python3-thonny
sudo apt-get install sonic-pi=2.10.0~repack-rpt1+2
sudo apt-get install piserver
sudo apt-get install usbbootgui

At several points during the upgrade process, you will be asked if you want to keep the current version of a configuration file or to install the package maintainer’s version. In every case, keep the existing version, which is the default option. The update may take an hour or so, depending on your network connection.

As with all software updates, there is the possibility that something may go wrong during the process, which could lead to your operating system becoming corrupted. Therefore, we always recommend making a backup first.

Enjoy the new versions, and do let us know any feedback you have in the comments or on the forums!

The post Stretch for PCs and Macs, and a Raspbian update appeared first on Raspberry Pi.

AWS Cloud9 – Cloud Developer Environments

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-cloud9-cloud-developer-environments/

One of the first things you learn when you start programming is that, just like any craftsperson, your tools matter. Notepad.exe isn’t going to cut it. A powerful editor and testing pipeline supercharge your productivity. I still remember learning to use Vim for the first time and being able to zip around systems and complex programs. Do you remember how hard it was to set up all your compilers and dependencies on a new machine? How many cycles have you wasted matching versions, tinkering with configs, and then writing documentation to onboard a new developer to a project?

Today we’re launching AWS Cloud9, an Integrated Development Environment (IDE) for writing, running, and debugging code, all from your web browser. Cloud9 comes prepackaged with essential tools for many popular programming languages (JavaScript, Python, PHP, etc.) so you don’t have to tinker with installing various compilers and toolchains. Cloud9 also provides a seamless experience for working with serverless applications, allowing you to quickly switch between local and remote testing or debugging. Based on the popular open source Ace Editor and c9.io IDE (which we acquired last year), AWS Cloud9 is designed to make collaborative cloud development easy with extremely powerful pair programming features. There are more features than I could ever cover in this post, so to give a quick overview I’ll split the IDE into three components: the editor, the AWS integrations, and the collaboration.

Editing


The Ace Editor at the core of Cloud9 is what lets you write code quickly, easily, and beautifully. It follows a UNIX philosophy of doing one thing and doing it well: writing code.

It has all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections, and it also has a few unique features I want to highlight. First of all, it’s fast, even for large (100,000+ line) files. There’s no lag or other issues while typing. It has over two dozen themes built-in (solarized!) and you can bring all of your favorite themes from Sublime Text or TextMate as well. It has built-in support for 40+ language modes and customizable run configurations for your projects. Most importantly though, it has Vim mode (or emacs if your fingers work that way). It also has a keybinding editor that allows you to bend the editor to your will.

The editor supports powerful keyboard navigation and commands (similar to Sublime Text or vim plugins like ctrlp). On a Mac, with ⌘+P you can open any file in your environment with fuzzy search. With ⌘+. you can open up the command pane, which allows you to invoke any of the editor commands by typing the name. It also helpfully displays the keybindings for a command in the pane; for instance, to open a terminal you can press ⌥+T. Oh, did I mention there’s a terminal? It ships with the AWS CLI preconfigured for access to your resources.

The environment also comes with pre-installed debugging tools for many popular languages – but you’re not limited to what’s already installed. It’s easy to add in new programs and define new run configurations.

The editor is just one, admittedly important, component in an IDE though. I want to show you some other compelling features.

AWS Integrations

The AWS Cloud9 IDE is the first IDE I’ve used that is truly “cloud native”. The service is provided at no additional charge, and you are only charged for the underlying compute and storage resources. When you create an environment, you’re prompted either for an instance type and an auto-hibernate time, or for SSH access to a machine of your choice.

If you’re running in AWS the auto-hibernate feature will stop your instance shortly after you stop using your IDE. This can be a huge cost savings over running a more permanent developer desktop. You can also launch it within a VPC to give it secure access to your development resources. If you want to run Cloud9 outside of AWS, or on an existing instance, you can provide SSH access to the service which it will use to create an environment on the external machine. Your environment is provisioned with automatic and secure access to your AWS account so you don’t have to worry about copying credentials around. Let me say that again: you can run this anywhere.
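
If you’d rather script environment creation than click through the console, the same options are exposed through the SDK. Here’s a minimal boto3 sketch; the name, description, subnet, and stop timeout below are placeholders:

import boto3

cloud9 = boto3.client('cloud9', region_name='us-east-1')

response = cloud9.create_environment_ec2(
    name='my-dev-environment',              # placeholder environment name
    description='Sketch of a Cloud9 environment',
    instanceType='t2.micro',                # EC2 instance type backing the IDE
    subnetId='subnet-0123456789abcdef0',    # placeholder subnet in your VPC
    automaticStopTimeMinutes=30,            # auto-hibernate after 30 idle minutes
)

print(response['environmentId'])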

Serverless Development with AWS Cloud9

I spend a lot of time on Twitch developing serverless applications. I have hundreds of lambda functions and APIs deployed. Cloud9 makes working with every single one of these functions delightful. Let me show you how it works.


If you look in the top right side of the editor you’ll see an AWS Resources tab. Opening this you can see all of the lambda functions in your region (you can see functions in other regions by adjusting your region preferences in the AWS preference pane).

You can import these remote functions to your local workspace just by double-clicking them. This allows you to edit, test, and debug your serverless applications all locally. You can create new applications and functions easily as well. If you click the Lambda icon in the top right of the pane you’ll be prompted to create a new lambda function and Cloud9 will automatically create a Serverless Application Model template for you as well. The IDE ships with support for the popular SAM local tool pre-installed. This is what I use in most of my local testing and serverless development. Since you have a terminal, it’s easy to install additional tools and use other serverless frameworks.
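
For context, the functions involved are ordinary Lambda handlers. A minimal Python sketch (the event fields here are purely illustrative) looks like this:

def lambda_handler(event, context):
    # Echo a greeting back to the caller; replace with real application logic.
    name = event.get('name', 'world')
    return {
        'statusCode': 200,
        'body': 'Hello, {}!'.format(name)
    }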

 

Launching an Environment from AWS CodeStar

With AWS CodeStar you can easily provision an end-to-end continuous delivery toolchain for development on AWS. CodeStar provides a unified experience for building, testing, deploying, and managing applications using the AWS CodeCommit, CodeBuild, CodePipeline, and CodeDeploy suite of services. Now, with a few simple clicks you can provision a Cloud9 environment to develop your application. Your environment will be pre-configured with the code for your CodeStar application already checked out and git credentials already configured.

You can easily share this environment with your coworkers which leads me to another extremely useful set of features.

Collaboration

One of the many things that sets AWS Cloud9 apart from other editors are the rich collaboration tools. You can invite an IAM user to your environment with a few clicks.

You can see what files they’re working on, where their cursors are, and even share a terminal. The chat feature is useful as well.
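
Invitations can also be scripted. As a hedged sketch, boto3 exposes a create_environment_membership call; the environment ID and user ARN below are placeholders:

import boto3

cloud9 = boto3.client('cloud9', region_name='us-east-1')

cloud9.create_environment_membership(
    environmentId='0123456789abcdef0123456789abcdef',    # placeholder environment ID
    userArn='arn:aws:iam::123456789012:user/colleague',  # placeholder IAM user
    permissions='read-write',                            # or 'read-only'
)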

Things to Know

  • There are no additional charges for this service beyond the underlying compute and storage.
  • c9.io continues to run for existing users. You can continue to use all the features of c9.io and add new team members if you have a team account. In the future, we will provide tools for easy migration of your c9.io workspaces to AWS Cloud9.
  • AWS Cloud9 is available in the US West (Oregon), US East (Ohio), US East (N. Virginia), EU (Ireland), and Asia Pacific (Singapore) regions.

I can’t wait to see what you build with AWS Cloud9!

Randall

KDE’s Goals for 2018 and Beyond (KDE.news)

Post Syndicated from corbet original https://lwn.net/Articles/740359/rss

KDE.news covers the
goals that the KDE project has set for itself
in the coming year.
In synch with KDE’s vision, Sebastian Kugler says that ‘KDE is in a
unique position to offer users a complete software environment that helps
them to protect their privacy’. Being in that position, Sebastian explains,
KDE as a FLOSS community is morally obliged to do its utmost to provide the
most privacy-protecting environment for users. This is especially true
since KDE has been developing not only for desktop devices, but also for
mobile – an area where the respect for users’ privacy is nearly
non-existent.

T2 Unlimited – Going Beyond the Burst with High Performance

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyond-the-burst-with-high-performance/

I first wrote about the T2 instances in the summer of 2014, and talked about how many workloads have a modest demand for continuous compute power and an occasional need for a lot more. This model resonated with our customers; the T2 instances are very popular and are now used to host microservices, low-latency interactive applications, virtual desktops, build & staging environments, prototypes, and the like.

New T2 Unlimited
Today we are extending the burst model that we pioneered with the T2, giving you the ability to sustain high CPU performance over any desired time frame while still keeping your costs as low as possible. You simply enable this feature when you launch your instance; you can also enable it for an instance that is already running. The hourly T2 instance price covers all interim spikes in usage if the average CPU utilization is lower than the baseline over a 24-hour window. There’s a small hourly charge if the instance runs at higher CPU utilization for a prolonged period of time. For example, if you run a t2.micro instance at an average of 15% utilization (5% above the baseline) for 24 hours you will be charged an additional 6 cents (5 cents per vCPU-hour * 1 vCPU * 5% * 24 hours).
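
To make that arithmetic concrete, here is a small Python sketch that reproduces the example above (it assumes the t2.micro’s 10% baseline and the $0.05 Linux vCPU-hour rate quoted later in this post):

RATE_PER_VCPU_HOUR = 0.05   # Linux surplus rate, USD per vCPU-hour
VCPUS = 1                   # a t2.micro has one vCPU
BASELINE = 0.10             # t2.micro baseline utilization (10%)
AVERAGE_UTILIZATION = 0.15  # observed average over the 24-hour window
HOURS = 24

excess = AVERAGE_UTILIZATION - BASELINE              # 5% above the baseline
charge = RATE_PER_VCPU_HOUR * VCPUS * excess * HOURS
print('Additional charge: ${:.2f}'.format(charge))   # prints $0.06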

To launch a T2 Unlimited instance from the EC2 Console, select any T2 instance and then click on Enable next to T2 Unlimited:

And here’s how to switch a running instance from T2 Standard to T2 Unlimited:
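
The same switch can also be made programmatically. Here’s a minimal boto3 sketch (the instance ID is a placeholder):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[{
        'InstanceId': 'i-0123456789abcdef0',  # placeholder instance ID
        'CpuCredits': 'unlimited',            # use 'standard' to switch back
    }]
)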

Behind the Scenes
As I described in my original post, each T2 instance accumulates CPU Credits as it runs and consumes them while it is running at full-core speed, decelerating to a baseline level when the supply of Credits is exhausted. T2 Unlimited instances have the ability to borrow an entire day’s worth of future credits, allowing them to perform additional bursting. This borrowing is tracked by the new CPUSurplusCreditBalance CloudWatch metric. When this balance rises to the level where it represents an entire day’s worth of future credits, the instance continues to deliver full-core performance, charged at the rate of $0.05 per vCPU per hour for Linux and $0.096 for Windows. These charged surplus credits are tracked by the new CPUSurplusCreditsCharged metric. You will be charged on a per-millisecond basis for partial hours of bursting (further reducing your costs) if you exhaust your surplus late in a given hour.

The charge for any remaining CPUSurplusCreditBalance is processed when the instance is terminated or configured as a T2 Standard. Any accumulated CPUCreditBalance carries over during the transition to T2 Standard.

The T2 Unlimited model is designed to spare you the trouble of watching the CloudWatch metrics, but (if you are like me) you will do it anyway. Let’s take a quick look at a t2.nano and watch the credits over time. First, CPU utilization grows to 100% and the instance begins to consume 5 credits every 5 minutes (one credit is equivalent to one vCPU-minute):

The CPU credit balance remains at 0 because the credits are being produced and consumed at the same rate. The surplus credit balance (tracked by the CPUSurplusCreditBalance metric) ramps up to 72, representing the credits that are being borrowed from the future:

Once the surplus credit balance hits 72, there’s nothing more to borrow from the future, and any further CPU usage is charged at the end of the hour, tracked with the CPUSurplusCreditsCharged metric. The instance consumes 5 credits every 5 minutes and earns 0.25, resulting in a net charge of 4.75 vCPU-minutes for each 5 minutes of bursting:
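
If you do want to pull these numbers yourself, the credit figures are ordinary CloudWatch metrics in the AWS/EC2 namespace. A hedged boto3 sketch for reading CPUSurplusCreditBalance (the instance ID is a placeholder) could look like this:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUSurplusCreditBalance',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
    Period=300,                # five-minute data points
    Statistics=['Average'],
)

for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])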

You can switch each of your instances back and forth between T2 Standard and T2 Unlimited at any time; all credit balances except CPUSurplusCreditsCharged remain and are carried over. Because T2 Unlimited instances have the ability to burst at any time, they do not receive the 30 minutes of credits given to newly launched T2 Standard instances. Also, since each AWS account can launch a limited number of T2 Standard instances with initial CPU credits each day, T2 Unlimited instances can be a better fit for use in Auto Scaling Groups and other scenarios where large numbers of instances come and go each day.

Available Now
You can launch T2 Unlimited instances today in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Seoul), EU (Frankfurt), EU (Ireland), and EU (London) Regions.

Jeff;