Tag Archives: desktop

Raspbian Stretch has arrived for Raspberry Pi

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/raspbian-stretch/

It’s now just under two years since we released the Jessie version of Raspbian. Those of you who know that Debian run their releases on a two-year cycle will therefore have been wondering when we might be releasing the next version, codenamed Stretch. Well, wonder no longer – Raspbian Stretch is available for download today!

Debian releases are named after characters from Disney Pixar’s Toy Story trilogy. In case, like me, you were wondering: Stretch is a purple octopus from Toy Story 3. Hi, Stretch!

The differences between Jessie and Stretch are mostly under-the-hood optimisations, and you really shouldn’t notice any differences in day-to-day use of the desktop and applications. (If you’re really interested, the technical details are in the Debian release notes here.)

However, we’ve made a few small changes to our image that are worth mentioning.

New versions of applications

Version 3.0.1 of Sonic Pi is included – this brings a lot of new input/output functionality. See the Sonic Pi release notes for full details of what has changed.

The Chromium web browser has been updated to version 60, the most recent stable release. This offers improved memory usage and more efficient code, so you may notice it running slightly faster than before. The visual appearance has also been changed very slightly.

Bluetooth audio

In Jessie, we used PulseAudio to provide support for audio over Bluetooth, but integrating this with the ALSA architecture used for other audio sources was clumsy. For Stretch, we are using the bluez-alsa package to make Bluetooth audio work with ALSA itself. PulseAudio is therefore no longer installed by default, and the volume plugin on the taskbar will no longer start and stop PulseAudio. From a user point of view, everything should still work exactly as before – the only change is that if you still wish to use PulseAudio for some other reason, you will need to install it yourself.

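If you do still want PulseAudio for some other purpose, reinstalling it is a one-liner (the package name below is the standard Raspbian one):

sudo apt-get install pulseaudio
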
Better handling of other usernames

The default user account in Raspbian has always been called ‘pi’, and a lot of the desktop applications assume that this is the current user. This has been changed for Stretch, so applications like Raspberry Pi Configuration no longer make that assumption. This means, for example, that the option to log in automatically now logs in the current user, rather than hard-coding the ‘pi’ user.

One other change is how sudo is handled. By default, the ‘pi’ user is set up with passwordless sudo access. We are no longer assuming this to be the case, so now desktop applications which require sudo access will prompt for the password rather than simply failing to work if a user without passwordless sudo uses them.

Scratch 2 SenseHAT extension

In the last Jessie release, we added the offline version of Scratch 2. While Scratch 2 itself hasn’t changed for this release, we have added a new extension to allow the SenseHAT to be used with Scratch 2. Look under ‘More Blocks’ and choose ‘Add an Extension’ to load the extension.

This works with either a physical SenseHAT or with the SenseHAT emulator. If a SenseHAT is connected, the extension will control it in preference to the emulator.

Fix for Broadpwn exploit

A couple of months ago, a vulnerability was discovered in the firmware of the BCM43xx wireless chipset which is used on Pi 3 and Pi Zero W; this potentially allows an attacker to take over the chip and execute code on it. The Stretch release includes a patch that addresses this vulnerability.

There is also the usual set of minor bug fixes and UI improvements – I’ll leave you to spot those!

How to get Raspbian Stretch

As this is a major version upgrade, we recommend using a clean image; these are available from the Downloads page on our site as usual.

Upgrading an existing Jessie image is possible, but is not guaranteed to work in every circumstance. If you wish to try upgrading a Jessie image to Stretch, we strongly recommend taking a backup first – we can accept no responsibility for loss of data from a failed update.

To upgrade, first modify the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list. In both files, change every occurrence of the word ‘jessie’ to ‘stretch’. (Both files will require sudo to edit.)

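If you’d rather not edit the files by hand, a single sed invocation can make the substitution in both files at once (a quick sketch – double-check the files afterwards to make sure the change is what you expect):

sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/raspi.list
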
Then open a terminal window and execute

sudo apt-get update
sudo apt-get -y dist-upgrade

Answer ‘yes’ to any prompts. There may also be a point at which the install pauses while a page of information is shown on the screen – hold the ‘space’ key to scroll through all of this and then hit ‘q’ to continue.

Finally, if you are not using PulseAudio for anything other than Bluetooth audio, remove it from the image by entering

sudo apt-get -y purge pulseaudio*

The post Raspbian Stretch has arrived for Raspberry Pi appeared first on Raspberry Pi.

Solus 3 released

Post Syndicated from corbet original https://lwn.net/Articles/731121/rss

The Solus distribution project has announced the availability of Solus 3. “This is the third iteration of Solus since our move to become a rolling release operating system. Unlike the previous iterations, however, this is a release and not a snapshot. We’ve now moved away from the ‘regular snapshot’ model to accommodate the best hybrid approach possible – feature rich releases with explicit goals and technology enabling, along with the benefits of a curated rolling release operating system.” Headline features include support for the Snap packaging format, a lot of desktop changes, and numerous software updates. (LWN looked at Solus in 2016.)

Firefox 55 released

Post Syndicated from ris original https://lwn.net/Articles/730198/rss

Firefox 55.0 has been released. From the release notes: “Today’s release brings innovative functionality, improvements to core browser performance, and more proof that we’re committed to making Firefox better than ever. New features include support for WebVR, making Firefox the first Windows desktop browser to support VR experiences. Performance changes include significantly faster startup times when restoring lots of tabs and settings that let users take greater control of our new multi-process architecture. We’ve also upgraded the address bar to make finding what you want easier, with search suggestions and the integration of our one-click search feature, and safer, by prioritizing the secure – https – version of sites when possible.”

Former Vuze Developers Launch BiglyBT, a ‘New’ Open Source Torrent Client

Post Syndicated from Ernesto original https://torrentfreak.com/former-vuze-developers-launch-biglybt-a-new-open-source-torrent-client-170803/

Back in the summer of 2003 a group of developers debuted a new torrent client, which they called Azureus.

BitTorrent itself was still a relatively new technology at the time and users were eager to find new tools to transfer their files. The feature-rich Azureus client, which later rebranded to Vuze, delivered just that.

In recent years, however, things have gone relatively quiet, to the point where Vuze development appears to have stalled completely. Perhaps that’s not surprising, as two of the core developers, parg and TuxPaper, have left the project and moved on to something new.

“We are no longer involved in Vuze or Azureus Software, Inc. We cannot speak to what their intentions are with the development of their product,” they inform us.

The developers, who were also part of the original Azureus team, are not saying farewell to their code though. While they are no longer working on Vuze, the pair have started a new Azureus branch, one they will actively maintain.

“We have invested such a large amount of our lives in the endeavor that we feel the need to keep the open source project active, for both our and our users’ enjoyment!” parg and TuxPaper tell us.

BiglyBT, as they have named their new client, will continue where Vuze development stalled. In addition to optimizing the code and releasing new features, BiglyBT is determined to keep the open source project alive, without any commercial interests.

“Our main goals for BiglyBT is to keep it ad-free and open source, and to continue to develop it into an even better torrent client. We also hope that a community will form again around the product.”

BiglyBT main window

People who try the new client will notice that it’s indeed very similar to Vuze, but without the ads and some other ‘cluttering’ features, such as DVD-burning.

While BiglyBT looks and operates in a similar manner to Vuze, in the future the developers will work on a new set of features, a new style, and various other changes that will set it apart from its older brother.

“Our first release is mostly a name change, but we have removed some of the things that we know users don’t particularly want or use, such as the content network, games promotions, DVD burning, the huge ad in the corner of the app, and the offers in the installer.”

While Vuze appears to have downsized its development efforts, BiglyBT promises to go full steam ahead. The new client will also stay true to its open source roots. Previously, some people complained that Vuze included proprietary code, resulting in more restrictive license terms. BiglyBT is purely GPL, and will remain so.

The client is currently available on all major desktop platforms, including Windows, MacOS and Linux. An open source Android app, forked from Vuze remote, will follow in a few weeks.

BiglyBT should appeal to a wide range of users, especially the more seasoned torrent user who wants a client they can configure to their liking.

“Our target users are people who love to delve into the world of torrenting. People who like to tinker and watch torrents do their thing. Hoarders who like to seed, automate, categorize and contribute back to the torrenting community,” the developers note.

People who are interested in giving BiglyBT a spin can download the latest version from the official site. The application is free and won’t install any other applications or adware. Instead, it’s solely supported by donations from the public.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

A new twist on data backup: CloudNAS

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cloudnas-backup/

There are many ways for SMBs, professionals, and advanced users to back up their data. The process can be as simple as copying files to a flash drive or an external drive, or as sophisticated as using a Synology or QNAP NAS device as your primary storage device and syncing the files to a cloud storage service such as Backblaze B2.

A recent entry into the backup arena is Morro Data and their CloudNAS solution, where files are stored in the cloud, cached locally as needed, and synced globally among the other CloudNAS systems in a given organization. There are three components to the solution:

  • A Morro CacheDrive — This resides on your internal network like a NAS device and stores from 1 to 8 TB of data, depending on the model
  • The CloudNAS service — This software runs on the Morro CacheDrive to keep track of and manage the data
  • Backblaze B2 Cloud Storage — Where the data is stored in the cloud

The Morro CacheDrive is installed on your local network and looks like a network share. On Windows, the share can be mounted as a drive letter – M:, for example. On the Mac, the device is mounted as a Shared device (Databank in the example below).

CloudNAS software dashboard

In either case, the device works like a folder/directory, typically on your desktop. You then either drag-and-drop or save a file to the folder/directory. This places the file on the CacheDrive. Once there, the file is automatically backed up to the cloud. In the case of the CloudNAS solution, that cloud is Backblaze B2.

All that sounds pretty straightforward, but what makes the CloudNAS solution unique is that it allows you to have unlimited storage space. For example, you can access 5 TB of data from a 1 TB CacheDrive. Confused? Let me explain. All 5 TB of the data is stored in B2, having been uploaded to B2 each time you stored data on the CacheDrive. The 1 TB CacheDrive keeps (caches) the most recent or most often used files on the CacheDrive. When you need a file not currently stored on the CacheDrive, the CloudNAS software automatically downloads the file from the B2 cloud to the CacheDrive and makes it available to use as desired.

Things to know about the CloudNAS solution

  • Sharing Systems: Multiple users can mount the same CacheDrive with each being able to update and share the files.
  • Synced Systems: If you have two or more CloudNAS systems on your network, they will keep the B2 directory of files synced between all of the systems. Everyone on the network sees the same file list.
  • Unlimited Data: Regardless of the size of the CacheDrive device you purchase, you will not run out of space as Backblaze B2 will contain all of your data. That said, you should choose the size of your CacheDrive that fits your operational environment.
  • Network Speed: Files are initially stored on the CacheDrive, then copied to B2. Local network connections are typically much faster than internet connections. This means your files are uploaded to the CacheDrive quickly, then transferred to B2 at the speed of your internet connection as time allows, all without slowing you down. This should be especially interesting to those of you with slower internet connections.
  • Access: The files stored using the CloudNAS solution can be accessed through the shared folder/directory on your desktop as well as through a web-based Team Portal.

Getting Started

To start, you purchase a Morro CacheDrive. The price starts at $499.00 for a unit with 1 TB of cache storage. Next you choose a CloudNAS subscription. This starts at $10/month for the Standard plan, and lets you manage up to 10 TB of data. Finally, you connect Backblaze B2 to the Morro system to finish the set-up process. You pay Backblaze each month for the data you store in and download from B2 while using the Morro solution.

The CloudNAS solution is certainly a different approach to storing your data. You get the ability to store a nearly unlimited amount of data without having to upgrade your hardware as you go, and all of your data is readily available with just a few clicks. For users who need to store terabytes of data that needs to be available anytime, the CloudNAS solution is worth a look.

The post A new twist on data backup: CloudNAS appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] Flatpaks for Fedora 27

Post Syndicated from jake original https://lwn.net/Articles/728699/rss

A proposal to add Flatpak as an option for distributing desktop applications in Fedora 27 has recently made an appearance. It is meant as an experiment of sorts to see how well Flatpak and RPM will play together—and to fix any problems found. There is a view that containers are the future, on the desktop as well as the server; Flatpaks would provide Fedora one possible path toward that future. The proposal sparked a huge thread on the Fedora devel mailing list; while the proposal itself doesn’t really change much for those uninterested in Flatpaks, some are concerned with where Fedora packaging might be headed once the experiment ends.

New – GPU-Powered Streaming Instances for Amazon AppStream 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-streaming-instances-for-amazon-appstream-2-0/

We launched Amazon AppStream 2.0 at re:Invent 2016. This application streaming service allows you to deliver Windows applications to a desktop browser.

AppStream 2.0 is fully managed and provides consistent, scalable performance by running applications on general purpose, compute optimized, and memory optimized streaming instances, with delivery via NICE DCV – a secure, high-fidelity streaming protocol. Our enterprise and public sector customers have started using AppStream 2.0 in place of legacy application streaming environments that are installed on-premises. They use AppStream 2.0 to deliver both commercial and line of business applications to a desktop browser. Our ISV customers are using AppStream 2.0 to move their applications to the cloud as-is, with no changes to their code. These customers focus on demos, workshops, and commercial SaaS subscriptions.

We are getting great feedback on AppStream 2.0 and have been adding new features very quickly (even by AWS standards). So far this year we have added an image builder, federated access via SAML 2.0, CloudWatch monitoring, Fleet Auto Scaling, Simple Network Setup, persistent storage for user files (backed by Amazon S3), support for VPC security groups, and built-in user management including web portals for users.

New GPU-Powered Streaming Instances
Many of our customers have told us that they want to use AppStream 2.0 to deliver specialized design, engineering, HPC, and media applications to their users. These applications are generally graphically intensive and are designed to run on expensive, high-end PCs in conjunction with a GPU (Graphics Processing Unit). Due to the hardware requirements of these applications, cost considerations have traditionally kept them out of situations where part-time or occasional access would otherwise make sense. Recently, another requirement has come to the forefront. These applications almost always need shared, read-write access to large amounts of sensitive data that is best stored, processed, and secured in the cloud. In order to meet the needs of these users and applications, we are launching two new types of streaming instances today:

Graphics Desktop – Based on the G2 instance type, Graphics Desktop instances are designed for desktop applications that use CUDA, DirectX, or OpenGL for rendering. These instances are equipped with 15 GiB of memory and 8 vCPUs. You can select this instance family when you build an AppStream image or configure an AppStream fleet:

Graphics Pro – Based on the brand-new G3 instance type, Graphics Pro instances are designed for high-end, high-performance applications that can use the NVIDIA APIs and/or need access to large amounts of memory. These instances are available in three sizes, with 122 to 488 GiB of memory and 16 to 64 vCPUs. Again, you can select this instance family when you configure an AppStream fleet:

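For those who prefer the command line, fleet creation with the new instance families is a single CLI call. This is only a sketch – the fleet and image names below are hypothetical placeholders, and the instance type strings follow AppStream’s stream.graphics naming:

aws appstream create-fleet --name demo-graphics-fleet \
  --image-name my-graphics-image \
  --instance-type stream.graphics-desktop.2xlarge \
  --compute-capacity DesiredInstances=2
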
To learn more about how to launch, run, and scale a streaming application environment, read Scaling Your Desktop Application Streams with Amazon AppStream 2.0.

As I noted earlier, you can use either of these two instance types to build an AppStream image. This will allow you to test and fine-tune your applications and to see the instances in action.

Streaming Instances in Action
We’ve been working with several customers during a private beta program for the new instance types. Here are a few stories (and some cool screen shots) to show you some of the applications that they are streaming via AppStream 2.0:

AVEVA is a world leading provider of engineering design and information management software solutions for the marine, power, plant, offshore and oil & gas industries. As part of their work on massive capital projects, their customers need to bring many groups of specialist engineers together to collaborate on the creation of digital assets. In order to support this requirement, AVEVA is building SaaS solutions that combine the streamed delivery of engineering applications with access to a scalable project data environment that is shared between engineers across the globe. The new instances will allow AVEVA to deliver their engineering design software in SaaS form while maximizing quality and performance. Here’s a screen shot of their Everything 3D app being streamed from AppStream:

Nissan, a Japanese multinational automobile manufacturer, trains its automotive specialists using 3D simulation software running on expensive graphics workstations. The training software, developed by The DiSti Corporation, allows its specialists to simulate maintenance processes by interacting with realistic 3D models of the vehicles they work on. AppStream 2.0’s new graphics capability now allows Nissan to deliver these training tools in real time, with up to date content, to a desktop browser running on low-cost commodity PCs. Their specialists can now interact with highly realistic renderings of a vehicle that allows them to train for and plan maintenance operations with higher efficiency.

Cornell University is an American private Ivy League and land-grant doctoral university located in Ithaca, New York. They deliver advanced 3D tools such as AutoDesk AutoCAD and Inventor to students and faculty to support their course work, teaching, and research. Until now, these tools could only be used on GPU-powered workstations in a lab or classroom. AppStream 2.0 allows them to deliver the applications to a web browser running on any desktop, where they run as if they were on a local workstation. Their users are no longer limited by available workstations in labs and classrooms, and can bring their own devices and have access to their course software. This increased flexibility also means that faculty members no longer need to take lab availability into account when they build course schedules. Here’s a copy of Autodesk Inventor Professional running on AppStream at Cornell:

Now Available
Both of the graphics streaming instance families are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions and you can start streaming from them today. Your applications must run in a Windows 2012 R2 environment, and can make use of DirectX, OpenGL, CUDA, OpenCL, and Vulkan.

With prices in the US East (Northern Virginia) Region starting at $0.50 per hour for Graphics Desktop instances and $2.05 per hour for Graphics Pro instances, you can now run your simulation, visualization, and HPC workloads in the AWS Cloud on an economical, pay-by-the-hour basis. You can also take advantage of fast, low-latency access to Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Lambda, Amazon Redshift, and other AWS services to build processing workflows that handle pre- and post-processing of your data.

Jeff;

 

AWS HIPAA Eligibility Update (July 2017) – Eight Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-update-july-2017-eight-additional-services/

It is time for an update on our on-going effort to make AWS a great host for healthcare and life sciences applications. As you can see from our Health Customer Stories page, Philips, VergeHealth, and Cambia (to choose a few) trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

In May we announced that we added Amazon API Gateway, AWS Direct Connect, AWS Database Migration Service, and Amazon Simple Queue Service (SQS) to our list of HIPAA eligible services and discussed how our customers and partners are putting them to use.

Eight More Eligible Services
Today I am happy to share the news that we are adding another eight services to the list:

Amazon CloudFront can now be utilized to enhance the delivery and transfer of Protected Health Information data to applications on the Internet. By providing a completely secure and encryptable pathway, CloudFront can now be used as a part of applications that need to cache PHI. This includes applications for viewing lab results or imaging data, and those that transfer PHI from Healthcare Information Exchanges (HIEs).

AWS WAF can now be used to protect applications running on AWS which operate on PHI such as patient care portals, patient scheduling systems, and HIEs. Requests and responses containing encrypted PHI and PII can now pass through AWS WAF.

AWS Shield can now be used to protect web applications such as patient care portals and scheduling systems that operate on encrypted PHI from DDoS attacks.

Amazon S3 Transfer Acceleration can now be used to accelerate the bulk transfer of large amounts of research, genetics, informatics, insurance, or payer/payment data containing PHI/PII. Transfers can take place between a pair of AWS Regions or between an on-premises system and an AWS Region.

Amazon WorkSpaces can now be used by researchers, informaticists, hospital administrators and other users to analyze, visualize or process PHI/PII data using on-demand Windows virtual desktops.

AWS Directory Service can now be used to connect the authentication and authorization systems of organizations that use or process PHI/PII to their resources in the AWS Cloud. For example, healthcare providers operating hybrid cloud environments can now use AWS Directory Service to allow their users to easily transition between cloud and on-premises resources.

Amazon Simple Notification Service (SNS) can now be used to send notifications containing encrypted PHI/PII as part of patient care, payment processing, and mobile applications.

Amazon Cognito can now be used to authenticate users into mobile patient portal and payment processing applications that use PHI/PII identifiers for accounts.

Additional HIPAA Resources
Here are some additional resources that will help you to build applications that comply with HIPAA and HITECH:

Keep in Touch
In order to make use of any AWS service in any manner that involves PHI, you must first enter into an AWS Business Associate Addendum (BAA). You can contact us to start the process.

Jeff;

New – Next-Generation GPU-Powered EC2 Instances (G3)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-next-generation-gpu-powered-ec2-instances-g3/

I first wrote about the benefits of GPU-powered computing in 2013 when we launched the G2 instance type. Since that launch, AWS customers have used the G2 instances to deliver high performance graphics to mobile devices, TV sets, and desktops.

Today we are taking a step forward and launching the G3 instance type. Powered by NVIDIA Tesla M60 GPUs, these instances are available in three sizes (all VPC-only and EBS-only):

Model         GPUs   GPU Memory   vCPUs   Main Memory   EBS Bandwidth
g3.4xlarge    1      8 GiB        16      122 GiB       3.5 Gbps
g3.8xlarge    2      16 GiB       32      244 GiB       7 Gbps
g3.16xlarge   4      32 GiB       64      488 GiB       14 Gbps

Each GPU has 8 GiB of memory, 2048 parallel processing cores, and a hardware encoder capable of supporting up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams, making these instances a great fit for 3D rendering & visualization, virtual reality, video encoding, remote graphics workstation (NVIDIA GRID), and other server-side graphics workloads that need a massive amount of parallel processing power. The GPUs support OpenGL 4.5, DirectX 12.0, CUDA 8.0, and OpenCL 1.2. When you launch a G3 instance you have access to an NVIDIA GRID Virtual Workstation License and can make use of the NVIDIA GRID driver without purchasing a license on your own.

The instances use Intel Xeon E5-2686 v4 (Broadwell) processors running at 2.7 GHz. On the networking side, Enhanced Networking (via the Elastic Network Adapter) provides up to 20 Gbps of aggregate network bandwidth within a Placement Group, along with up to 14 Gbps of EBS bandwidth.

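Launching a G3 instance works like any other EC2 launch. Here is a sketch from the AWS CLI – the AMI ID and key pair name are placeholders for your own values:

aws ec2 run-instances --image-id ami-xxxxxxxx \
  --instance-type g3.4xlarge \
  --key-name my-key-pair \
  --count 1
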
Our customers have told us that they are looking forward to visualizing large 3D seismic models, configuring cars in 3D, and providing students with the ability to run high-end 2D and 3D applications. For example, Calgary Scientific can take applications that are powered by the Unreal Engine and make them accessible on mobile devices and from within web pages, with collaborative viewing support. Visit their Demo Gallery to see PureWeb Reality in action:

You can launch these instances today in the US East (Ohio), US East (Northern Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US), and EU (Ireland) Regions as On-Demand, Reserved Instances, Spot Instances, and Dedicated Hosts, with more Regions coming soon.

Jeff;

[$] Highlights in Fedora 26

Post Syndicated from jake original https://lwn.net/Articles/727586/rss

The much anticipated release of Fedora 26 was made on July 11. As usual, it came with a wide array of updated packages, everything from the kernel through programming languages to desktops, but internal tools and installation mechanisms have changed as well. Beyond that, the new Python Classroom Lab is aimed at teachers and instructors to make it easier to get a full-featured Python (of various flavors and with lots of extras) in several different easily installable forms. Though it was delayed by more than a month from its original planned release date—something the project embraces at some level—Fedora 26 looks like it was worth waiting for.

How to Configure Even Stronger Password Policies to Help Meet Your Security Standards by Using AWS Directory Service for Microsoft Active Directory

Post Syndicated from Ravi Turlapati original https://aws.amazon.com/blogs/security/how-to-configure-even-stronger-password-policies-to-help-meet-your-security-standards-by-using-aws-directory-service-for-microsoft-active-directory/

With AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, you can now create and enforce custom password policies for your Microsoft Windows users. AWS Microsoft AD now includes five empty password policies that you can edit and apply with standard Microsoft password policy tools such as Active Directory Administrative Center (ADAC). With this capability, you are no longer limited to the default Windows password policy. Now, you can configure even stronger password policies and define lockout policies that specify when to lock out an account after login failures.

In this blog post, I demonstrate how to edit these new password policies to help you meet your security standards by using AWS Microsoft AD. I also introduce the password attributes you can modify and demonstrate how to apply password policies to user groups in your domain.

Prerequisites

The instructions in this post assume that you already have the following components running:

  • An active AWS Microsoft AD directory.
  • An Amazon EC2 for Windows Server instance that is domain joined to your AWS Microsoft AD directory and on which you have installed ADAC.

If you still need to meet these prerequisites, set them up before proceeding.

Scenario overview

Let’s say I am the Active Directory (AD) administrator of Example Corp. At Example Corp., we have a group of technical administrators, several groups of senior managers, and general, nontechnical employees. I need to create password policies for these groups that match our security standards.

Our general employees have access only to low-sensitivity information. However, our senior managers regularly access confidential information and we want to enforce password complexity (a mix of upper and lower case letters, numbers, and special characters) to reduce the risk of data theft. For our administrators, we want to enforce password complexity policies to prevent unauthorized access to our system administration tools.

Our security standards call for the following enforced password and account lockout policies:

  • General employees – To make it easier for nontechnical general employees to remember their passwords, we do not enforce password complexity. However, we want to enforce a minimum password length of 8 characters and a lockout policy after 6 failed login attempts as a minimum bar to protect against unwanted access to our low-sensitivity information. If a general employee forgets their password and becomes locked out, we let them try again in 5 minutes, rather than require escalated password resets. We also want general employees to rotate their passwords every 60 days with no duplicated passwords in the past 10 password changes.
  • Senior managers – For senior managers, we enforce a minimum password length of 10 characters and require password complexity. An account lockout is enforced after 6 failed attempts with an account lockout duration of 15 minutes. Senior managers must rotate their passwords every 45 days, and they cannot duplicate passwords from the past 20 changes.
  • Administrators – For administrators, we enforce password complexity with a minimum password length of 15 characters. We also want to lock out accounts after 6 failed attempts, have password rotation every 30 days, and disallow duplicate passwords in the past 30 changes. When a lockout occurs, we require a special administrator to intervene and unlock the account so that we can be aware of any potential hacking.
  • Fine-Grained Password Policy administrators – To ensure that only trusted administrators unlock accounts, we have two special administrator accounts (admin and midas) that can unlock accounts. These two accounts have the same policy as the other administrators except they have an account lockout duration of 15 minutes, rather than requiring a password reset. These two accounts are also the accounts used to manage Example Corp.’s password policies.

The following table summarizes how I edit each of the four policies I intend to use.

Policy name                                EXAMPLE-PSO-01        EXAMPLE-PSO-02        EXAMPLE-PSO-03        EXAMPLE-PSO-05
Precedence                                 10                    20                    30                    50
User group                                 Fine-Grained Pwd      Other Administrators  Senior Managers       General Employees
                                           Policy Admins
Minimum password length                    15                    15                    10                    8
Password complexity                        Enable                Enable                Enable                Disable
Maximum password age                       30 days               30 days               45 days               60 days
Account complexity                         Enable                Enable                Enable                Disable
Number of failed logon attempts allowed    6                     6                     6                     6
Account lockout duration                   15 minutes            Not applicable        15 minutes            5 minutes
Password history                           24                    30                    20                    10
Until admin manually unlocks account       Not applicable        Selected              Not applicable        Not applicable

To implement these password policies, I use 4 of the 5 new password policies available in AWS Microsoft AD:

  1. I first explain how to configure the password policies.
  2. I then demonstrate how to apply the four password policies that match Example Corp.’s security standards for these user groups.

1. Configure password policies in AWS Microsoft AD

To help you get started with password policies, AWS has added the Fine-Grained Pwd Policy Admins AD security group to your AWS Microsoft AD directory. Any user or other security group that is part of the Fine-Grained Pwd Policy Admins group has permissions to edit and apply the five new password policies. By default, your directory Admin is part of the new group and can add other users or groups to this group.

Adding users to the Fine-Grained Pwd Policy Admins user group

Follow these steps to add more users or AD security groups to the Fine-Grained Pwd Policy Admins security group so that they can administer fine-grained password policies:

  1. Launch ADAC from your managed instance.
  2. Switch to the Tree View and navigate to CORP > Users.
  3. Find the Fine-Grained Pwd Policy Admins user group. Add any users or groups in your domain to this group.

Edit password policies

To edit fine-grained password policies, open ADAC from any management instance joined to your domain. Switch to the Tree View and navigate to System > Password Settings Container. You will see the five policies containing the string -PSO- that AWS added to your directory, as shown in the following screenshot. Select a policy to edit it.

Screenshot showing the five new password policies

After editing a password policy, apply it by choosing Add and adding users or AD security groups to the policy. The default domain GPO applies if you do not configure any of the five password policies. For additional details about using the Password Settings Container, go to Step-by-Step: Enabling and Using Fine-Grained Password Policies in AD on the Microsoft TechNet Blog.

The password attributes you can edit

AWS allows you to edit all of the password attributes except Precedence (I explain more about Precedence in the next section). These attributes include:

  • Password history
  • Minimum password length
  • Minimum password age
  • Maximum password age
  • Store password using reversible encryption
  • Password must meet complexity requirements

You also can enforce the following attributes for account lockout settings:

  • The number of failed login attempts allowed
  • Account lockout duration
  • Reset failed login attempts after a specified duration

For more details about how these attributes affect password enforcement, see AD DS: Fine-Grained Password Policies on Microsoft TechNet.

Understanding password policy precedence

AD password policies have a precedence (a numerical attribute that AD uses to determine the resultant policy) associated with them. Policies with a lower value for Precedence have higher priority than other policies. A user inherits all policies that you apply directly to the user or to any groups to which the user belongs. For example, suppose jsmith is a member of the HR group and also a member of the MANAGERS group. If I apply a policy with a Precedence of 50 to the HR group and a policy with a Precedence of 40 to MANAGERS, the policy with the Precedence value of 40 ranks higher and AD applies that policy to jsmith.

If you apply multiple policies to a user or group, the resultant policy is determined as follows by AD:

  1. If you apply a policy directly to a user, AD enforces the directly applied policy with the lowest Precedence value.
  2. If you did not apply a policy directly to the user, AD enforces the policy with the lowest Precedence value of all policies inherited by the user through the user’s group membership.

For more information about AD fine-grained policies, see AD DS: Fine-Grained Password Policies on Microsoft TechNet.

2. Apply password policies to user groups

In this section, I demonstrate how to apply Example Corp.’s password policies. Except in rare cases, I only apply policies by group membership, which ensures that AD does not enforce a lower priority policy on an individual user if I have added them to a group with a higher priority policy.

Because my directory is new, I use a Remote Desktop Protocol (RDP) connection to sign in to the Windows Server instance I domain joined to my AWS Microsoft AD directory. Signing in with the admin account, I launch ADAC to perform the following tasks:

  1. First, I set up my groups so that I can apply password policies to them. Later, I can create user accounts and add them to my groups, and AD applies the right policy by using the policy precedence and resultant policy algorithms I discussed previously. I start by adding the two special administrative accounts (admin and midas) that I described previously to the Fine-Grained Pwd Policy Admins group. Because AWS Microsoft AD adds my default admin account to Fine-Grained Pwd Policy Admins, I only need to create midas and then add midas to the Fine-Grained Pwd Policy Admins group.
  2. Next, I create the Other Administrators, Senior Managers, and General Employees groups that I described previously, as shown in the following screenshot.
    Screenshot of the groups created

For this post’s example, I use these four policies:

  1. EXAMPLE-PSO-01 (highest priority policy) – For the administrators who manage Example Corp.’s password policies. Applying this highest priority policy to the Fine-Grained Pwd Policy Admins group prevents these users from being locked out if they also are assigned to a different policy.
  2. EXAMPLE-PSO-02 (the second highest priority policy) – For Example Corp.’s other administrators.
  3. EXAMPLE-PSO-03 (the third highest priority policy) – For Example Corp.’s senior managers.
  4. EXAMPLE-PSO-05 (the lowest priority policy) – For Example Corp.’s general employees.

This leaves me one password policy (EXAMPLE-PSO-04) that I can use in the future if needed.

I start by editing the policy, EXAMPLE-PSO-01. To edit the policy, I follow the Edit password policies section from earlier in this post. When finished, I add the Fine-Grained Pwd Policy Admins group to that policy, as shown in the following screenshot. I then repeat the process for each of the remaining policies, as described in the Scenario overview section earlier in this post.

Screenshot of adding the Fine-Grained Pwd Policy Admins group to the EXAMPLE-PSO-01 policy

Though AD enforces new password policies, the timing of policy replication in the directory, the types of attributes that are changed, and the timing of user password changes can all cause variability in how quickly a policy takes effect. In general, after the policies are replicated throughout the directory, attributes that affect account lockout and password age take effect. Attributes that affect the quality of a password, such as password length, take effect when the password is changed. If the password age for a user is in compliance, but their password strength is out of compliance, the user is not forced to change their password. For more information about password policy impact, see this Microsoft TechNet article.

Summary

In this post, I have demonstrated how you can configure strong password policies to meet your security standards by using AWS Microsoft AD. To learn more about AWS Microsoft AD, see the AWS Directory Service home page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.

– Ravi

Chrome’s Default ‘Ad-Blocker’ is Bad News for Torrent Sites

Post Syndicated from Ernesto original https://torrentfreak.com/chromes-default-ad-blocker-is-bad-news-for-torrent-sites-170705/

Online advertising can be quite a nuisance. Flashy and noisy banners, or intrusive pop-ups, are a thorn in the side of many Internet users.

These types of ads are particularly popular on pirate sites, so it’s no surprise that their users are more likely to have an ad-blocker installed.

The increasing popularity of these ad-blocking tools hasn’t done the income of site owners any good and the trouble on this front is about to increase.

A few weeks ago Google announced that its Chrome browser will start blocking ‘annoying’ ads in the near future, by default. This applies to all ads that don’t fall within the “better ads standards,” including popups and sticky ads.

Since Chrome is the leading browser on many pirate sites, this is expected to have a serious effect on torrent sites and other pirate platforms. TorrentFreak spoke to the operator of one of the largest torrent sites, who’s sounding the alarm bell.

The owner, who prefers not to have his site mentioned, says that it’s already hard to earn enough money to pay for hardware and hosting to keep the site afloat. This, despite millions of regular visitors.

“The torrent site economy is in a bad state. Profits are very low. Profits are f*cked up compared to previous years,” the torrent site owner says.

At the moment, 40% of the site’s users already have an ad-blocker installed, but when Chrome joins in with its default filter, it’s going to get much worse. A third of all visitors to the torrent site in question use the Chrome browser, either through mobile or desktop.

“Chrome’s ad-blocker will kill torrent sites. If they don’t at least cover their costs, no one is going to use money out of his pocket to keep them alive. I won’t be able to do so at least,” the site owner says.

It’s too early to assess how broad Chrome’s ad filtering will be, but torrent site owners may have to look for cleaner ads. That’s easier said than done though, as it’s usually the lower tier advertisers that are willing to work with these sites and they often serve more annoying ads.

The torrent site owner we spoke with isn’t very optimistic about the future. While he’s tested alternative revenue sources, he sees advertising as the only viable option. And with Chrome lining up to target part of their advertising inventory, revenue may soon dwindle.

“I’ve tested all types of ads and affiliates that are safe to work with, and advertising is the only way to cover costs. Also, most services that you can make good money promoting don’t work with torrent sites,” the torrent site owner notes.

Just a few months ago popular torrent site TorrentHound decided to shut down, citing a lack of revenue as one of the main reasons. This is by no means an isolated incident. TorrentFreak spoke to other site owners who confirm that it’s becoming harder and harder to pay the bills through advertisements.

The operator of Torlock, for example, confirms that those who are in the business to make a profit are having a hard time.

“All in all it’s a tough time for torrent sites but those that do it for the money will have a far more difficult time in the current climate than those who do this as a hobby and as a passion. We do it for the love of it so it doesn’t really affect us as much,” Torlock’s operator says.

Still, there is plenty of interest from advertisers, some of whom are trying their best to circumvent ad-blockers.

“Every day we receive emails from willing advertisers wanting to work with us so the market is definitely still there and most of them have the technology in place to circumvent adblockers, including Chrome’s default one,” he adds.

Google’s decision to ship Chrome with a default ad-blocker appears to be self-serving in part. If users see less annoying ads, they are less likely to install a third-party ad-blocker which blocks more of Google’s own advertisements.

Inadvertently, however, they may have also announced their most effective anti-piracy strategy to date.

If pirate sites are unable to generate enough revenue through advertisements, there are few options left. In theory, they could start charging visitors money, but most pirates go to these sites to avoid paying.

Asking for voluntary donations is an option, but that’s unlikely to cover all the costs.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-power-bundle-for-amazon-workspaces-more-vcpus-memory-and-storage/

Are you tired of hearing me talk about Amazon WorkSpaces yet? I hope not, because we have a lot of customer-driven additions on the roadmap! Our customers in the developer and analyst community have been asking for a workstation-class machine that will allow them to take advantage of the low cost and flexibility of WorkSpaces. Developers want to run Visual Studio, IntelliJ, Eclipse, and other IDEs. Analysts want to run complex simulations and statistical analysis using MATLAB, GNU Octave, R, and Stata.

New Power Bundle
Today we are extending the current set of WorkSpaces bundles with a new Power bundle. With four vCPUs, 16 GiB of memory, and 275 GB of storage (175 GB on the system volume and another 100 GB on the user volume), this bundle is designed to make developers, analysts, (and me) smile. You can launch them in all of the usual ways: Console, CLI (create-workspaces), or API (CreateWorkSpaces):

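For example, provisioning a Power WorkSpace from the CLI is a single call. This is a sketch only – the directory ID, user name, and bundle ID below are placeholders for your own values:

aws workspaces create-workspaces --workspaces \
  DirectoryId=d-xxxxxxxxxx,UserName=jdoe,BundleId=wsb-xxxxxxxxx
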
One really interesting benefit to using a cloud-based virtual desktop for simulations and statistical analysis is the ease of access to data that’s already stored in the cloud. Analysts can mine and analyze petabytes of data stored in S3 that is effectively local (with respect to access time) to the WorkSpace. This low-latency access will boost productivity and also simplifies the use of other AWS data analysis tools such as Amazon Redshift, Amazon Redshift Spectrum, Amazon QuickSight, and Amazon Athena.

Like the existing bundles, the new Power bundle can be used in either billing configuration, AlwaysOn or AutoStop (read Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume to learn more). The bundle is available in all AWS Regions where WorkSpaces is available and you can launch one today! Visit the WorkSpaces Pricing page for pricing in your region.

Jeff;

Yahoo Mail’s New Tech Stack, Built for Performance and Reliability

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/162320493306

By Suhas Sadanandan, Director of Engineering 

When it comes to performance and reliability, there is perhaps no application where this matters more than with email. Today, we announced a new Yahoo Mail experience for desktop based on a completely rewritten tech stack that embodies these fundamental considerations and more.

We built the new Yahoo Mail experience using a best-in-class front-end tech stack with open source technologies including React, Redux, Node.js, react-intl (open-sourced by Yahoo), and others. A high-level architectural diagram of our stack is below.

New Yahoo Mail Tech Stack

In building our new tech stack, we made use of the most modern tools available in the industry to come up with the best experience for our users by optimizing the following fundamentals:

Performance

A key feature of the new Yahoo Mail architecture is blazing-fast initial loading (aka, launch).

We introduced new network routing which sends users to their nearest geo-located email servers (proximity-based routing). This has resulted in a significant reduction in time to first byte and should be immediately noticeable to our international users in particular.

We now do server-side rendering to allow our users to see their mail sooner. This change will be immediately noticeable to our low-bandwidth users. Our application is isomorphic, meaning that the same code runs on the server (using Node.js) and the client. Prior versions of Yahoo Mail had programming logic duplicated on the server and the client because we used PHP on the server and JavaScript on the client.   

Using efficient bundling strategies (JavaScript code is separated into application, vendor, and lazy loaded bundles) and pushing only the changed bundles during production pushes, we keep the cache hit ratio high. By using react-atomic-css, our homegrown solution for writing modular and scoped CSS in React, we get much better CSS reuse.  

In prior versions of Yahoo Mail, the need to run various experiments in parallel resulted in additional branching and bloating of our JavaScript and CSS code. While rewriting all of our code, we solved this issue using Mendel, our homegrown solution for bucket testing isomorphic web apps, which we have open sourced.  

Rather than using custom libraries, we use native HTML5 APIs and ES6 heavily and use PolyesterJS, our homegrown polyfill solution, to fill the gaps. These factors have further helped us to keep payload size minimal.

With all the above optimizations, we have been able to reduce our JavaScript and CSS footprint by approximately 50% compared to the previous desktop version of Yahoo Mail, helping us achieve a blazing-fast launch.

In addition to initial launch improvements, key features like search and message read (when a user opens an email to read it) have also benefited from the above optimizations and are considerably faster in the latest version of Yahoo Mail.

We also significantly reduced the memory consumed by Yahoo Mail on the browser. This is especially noticeable during a long running session.

Reliability

With this new version of Yahoo Mail, we have a 99.99% success rate on core flows: launch, message read, compose, search, and actions that affect messages. Accomplishing this over several billion user actions a day is a significant feat. Client-side errors (JavaScript exceptions) are reduced significantly when compared to prior Yahoo Mail versions.

Product agility and launch velocity

We focused on independently deployable components. As part of the re-architecture of Yahoo Mail, we invested in a robust continuous integration and delivery flow. Our new pipeline allows for daily (or more) pushes to all Mail users, and we push only the bundles that are modified, which keeps the cache hit ratio high.

Developer effectiveness and satisfaction

In developing our tech stack for the new Yahoo Mail experience, we heavily leveraged open source technologies, which allowed us to ensure a shorter learning curve for new engineers. We were able to implement a consistent and intuitive onboarding program for 30+ developers and are now using our program for all new hires. During the development process, we emphasize predictable flows and easy debugging.

Accessibility

The accessibility of this new version of Yahoo Mail is state of the art and delivers outstanding usability (efficiency) in addition to accessibility. It features six enhanced visual themes that can provide accommodation for people with low vision and has been optimized for use with Assistive Technology including alternate input devices, magnifiers, and popular screen readers such as NVDA and VoiceOver. These features have been rigorously evaluated and incorporate feedback from users with disabilities. It sets a new standard for the accessibility of web-based mail and is our most-accessible Mail experience yet.

Open source 

We have open sourced some key components of our new Mail stack, like Mendel, our solution for bucket testing isomorphic web applications. We invite the community to use and build upon our code. Going forward, we plan on also open sourcing additional components like react-atomic-css, our solution for writing modular and scoped CSS in React, and lazy-component, our solution for on-demand loading of resources.

Many of our company’s best technical minds came together to write a brand new tech stack and enable a delightful new Yahoo Mail experience for our users.

We encourage our users and engineering peers in the industry to test the limits of our application, and to provide feedback by clicking on the Give Feedback call out in the lower left corner of the new version of Yahoo Mail.

Backblaze B2, Cloud Storage on a Budget: One Year Later

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-cloud-storage-on-a-budget-one-year-later/

A year ago, Backblaze B2 Cloud Storage came out of beta and became available for everyone to use. We were pretty excited, even though it seemed like everyone and their brother had a cloud storage offering. Now that we are a year down the road let’s see how B2 has fared in the real world of tight budgets, maxed-out engineering schedules, insanely funded competition, and more. Spoiler alert: We’re still pretty excited…

Cloud Storage on a Budget

There are dozens of companies offering cloud storage and the landscape is cluttered with incomprehensible pricing models, cleverly disguised transfer and download charges, and differing levels of service that seem to be driven more by marketing departments than customer needs.

Backblaze B2 keeps things simple: A single performant level of service, a single affordable price for storage ($0.005/GB/month), a single affordable price for downloads ($0.02/GB), and a single list of transaction charges – all on a single pricing page.

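To make that concrete: storing 1 TB (1,000 GB) for a month costs about $5 at the storage rate above, and downloading that full terabyte once costs about $20 – simple back-of-the-envelope math from the per-gigabyte prices.
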
Who’s Using B2?

By making cloud storage affordable, companies and organizations now have a way to store their data in the cloud and still be able to access and restore it as quickly as needed. You don’t have to choose between price and performance. Here are a few examples:

  • Media & Entertainment: KLRU-TV, Austin PBS, is using B2 to preserve their video catalog of the world-renowned musical anthology series, Austin City Limits.
  • LTO Migration: Girl Scouts San Diego were able to move their daily incremental backups from LTO tape to the cloud, saving money and time, while helping automate their entire backup process.
  • Cloud Migration: Vintage Aerial found it cost effective to discard their internal data server and store their unique hi-resolution images in B2 Cloud Storage.
  • Backup: Ahuja and Clark, a boutique accounting firm, was able to save over 80% on the cost to backup all their corporate and client data.

How is B2 Being Used?

B2 Cloud Storage can be accessed in four ways: using the Web GUI, using the CLI, using the API library, and using a product or service integrated with B2. While many customers are using the Web GUI, CLI and API to store and retrieve data, the most prolific use of B2 occurs via our integration partners. Each integration partner has certified they have met our best practices for integrating to B2 and we’ve tested each of the integrations submitted to us. Here are a few of the highlights.

  • NAS Devices – Synology and QNAP have integrations which allow their NAS devices to sync their data to/from B2.
  • Backup and Sync – CloudBerry, GoodSync, and Retrospect are just a few of the services that can back up and/or sync data to/from B2.
  • Hybrid Cloud – 45 Drives and OpenIO are solutions that allow you to set up and operate a hybrid data storage cloud environment.
  • Desktop Apps – CyberDuck, MountainDuck, Dropshare, and more give users an easy way to store and use data in B2 right from their desktops.
  • Digital Asset Management – Cantemo, Cubix, CatDV, and axle Video let you catalog your digital assets and then store them in B2 for fast retrieval when they are needed.
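
For the API route mentioned above, here is a minimal Python sketch that authorizes against the documented B2 native API and lists the buckets in an account. The credentials are placeholders, and error handling is omitted for brevity.

import requests

ACCOUNT_ID = "your_account_id"            # placeholder credentials
APPLICATION_KEY = "your_application_key"  # placeholder credentials

# b2_authorize_account returns an auth token plus the API URL for later calls.
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    auth=(ACCOUNT_ID, APPLICATION_KEY),
).json()

# Subsequent calls are POSTs with a JSON body against the returned apiUrl.
buckets = requests.post(
    auth["apiUrl"] + "/b2api/v1/b2_list_buckets",
    headers={"Authorization": auth["authorizationToken"]},
    json={"accountId": auth["accountId"]},
).json()["buckets"]

for bucket in buckets:
    print(bucket["bucketName"], bucket["bucketId"])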

If you have an application or service that stores data in the cloud and it isn’t integrated with Backblaze B2, then your customers are probably paying too much for cloud storage.

What’s New in B2?

B2 Fireball – our rapid data ingest service. We send you a storage device; you load it with up to 40 TB of data and send it back; we then load the data into your B2 account. The cost is $550 per trip plus shipping. Save your network bandwidth with the B2 Fireball.

Lowered the download price – When we introduced B2, we set the price to download a gigabyte of data to be $0.05/GB – the same as most competitors. A year in, we reevaluated the price based on usage and decided to lower the price to $0.02/GB.

B2 User Groups – Backblaze Groups functionality is now available in B2. An administrator can invite users to a B2-centric Group to centralize the storage location for that group of users. For example, multiple members of a department working on a project will be able to archive their work-in-process activities into a single B2 bucket.

Time Machine backup – You may know that you can use your Synology NAS as the destination for your Time Machine backup. You can also sync that Synology NAS to B2 for a true 3-2-1 backup solution. If your system crashes or is lost, you can restore your Time Machine image directly from B2 to your new machine.

Lifecycle Rules – Create rules that manage how long deleted (hidden) file versions remain in your B2 bucket before they are permanently removed. A great option for cleaning up outdated file versions to save on storage costs.
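
As an illustration, applying such a rule through the b2_update_bucket call might look like the following sketch. The rule fields follow the documented lifecycle schema; the URL, token, and IDs are placeholders you would obtain by authorizing first.

import requests

api_url = "https://api001.backblazeb2.com"   # placeholder, from b2_authorize_account
auth_token = "your_auth_token"               # placeholder

# Delete hidden (deleted) file versions 30 days after they are hidden.
rule = {
    "fileNamePrefix": "",                # apply to every file in the bucket
    "daysFromHidingToDeleting": 30,      # purge versions 30 days after deletion
    "daysFromUploadingToHiding": None,   # never auto-hide live files
}

requests.post(
    api_url + "/b2api/v1/b2_update_bucket",
    headers={"Authorization": auth_token},
    json={
        "accountId": "your_account_id",  # placeholder
        "bucketId": "your_bucket_id",    # placeholder
        "lifecycleRules": [rule],
    },
)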

Large Files – In the B2 Web GUI you can upload files as large as 500 MB using either the upload or drag-and-drop functionality. The B2 CLI and API support the ability to upload/download files as large as 10 TB.

5 MB file part size – When working with large files, the minimum file part size can now be set as low as 5 MB, versus the previous low of 100 MB, so a file part can now range from 5 MB to 5 GB. Smaller parts mean more of them can be transferred in parallel, increasing the throughput of your uploads and downloads.
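
To see why the lower minimum helps, consider how a part size might be chosen for a given file. This is a hypothetical sizing sketch, assuming the documented cap of 10,000 parts per B2 large file.

import math

MIN_PART = 5 * 1024**2     # new 5 MB minimum (was 100 MB)
MAX_PART = 5 * 1024**3     # 5 GB maximum part size
MAX_PARTS = 10_000         # B2 large-file part limit

def choose_part_size(file_size):
    """Pick the smallest allowed part size that stays within the part limit."""
    needed = math.ceil(file_size / MAX_PARTS)
    return min(max(MIN_PART, needed), MAX_PART)

# A 1 GB file can now be split into ~205 parts of 5 MB that upload in
# parallel, instead of at most 10 parts at the old 100 MB minimum.
size = choose_part_size(1024**3)
print(size // 1024**2, "MB parts ->", math.ceil(1024**3 / size), "parts")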

SHA-1 at the end – This feature allows you to compute the SHA-1 checksum and append it to the end of the request body versus doing the computation before the file is sent. This is especially useful for those applications which stream data to/from B2.
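
Here is a minimal Python sketch of the streaming side. It is a hypothetical helper, not the B2 client itself; with the actual upload call you would pair it with an X-Bz-Content-Sha1 value of hex_digits_at_end and a Content-Length that includes the trailing 40 hex characters.

import hashlib

def body_with_trailing_sha1(chunks):
    """Yield the payload unchanged, then the 40-character hex SHA-1."""
    sha1 = hashlib.sha1()
    for chunk in chunks:                 # stream without buffering everything
        sha1.update(chunk)
        yield chunk
    yield sha1.hexdigest().encode()      # checksum appended after the data

# Hypothetical usage with an in-memory source standing in for a stream:
for piece in body_with_trailing_sha1([b"hello ", b"world"]):
    print(piece)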

Cache-Control – When data is downloaded from B2 into a browser, the length of time the file remains in the browser cache can be set at the bucket level using the b2_create_bucket and b2_update_bucket API calls. Setting this policy is optional.
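
Sketched as an API call, a bucket-wide one-hour cache policy might look like this. Again this is a sketch with placeholder credentials, and the Cache-Control value is assumed to live in the bucket’s info map.

import requests

api_url = "https://api001.backblazeb2.com"   # placeholder, from b2_authorize_account
auth_token = "your_auth_token"               # placeholder

requests.post(
    api_url + "/b2api/v1/b2_update_bucket",
    headers={"Authorization": auth_token},
    json={
        "accountId": "your_account_id",                   # placeholder
        "bucketId": "your_bucket_id",                     # placeholder
        "bucketInfo": {"Cache-Control": "max-age=3600"},  # cache for one hour
    },
)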

Customized delimiters – Used in the API, this lets you specify a delimiter character for a given purpose. A common use is to place a delimiter in file names and then use it to detect folder names within those names, since a B2 bucket otherwise stores a flat list of files.
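
The folder detection itself is easy to show with plain strings. This is a toy illustration of the idea, not an API call.

names = ["photos/2017/jan/cat.jpg", "photos/2017/feb/dog.jpg", "readme.txt"]

def top_level(names, delimiter="/", prefix=""):
    """Report each name up to its first delimiter past the prefix."""
    seen = set()
    for name in names:
        if not name.startswith(prefix):
            continue
        head, sep, _ = name[len(prefix):].partition(delimiter)
        seen.add(prefix + head + sep)    # a trailing delimiter marks a folder
    return sorted(seen)

print(top_level(names))                    # ['photos/', 'readme.txt']
print(top_level(names, prefix="photos/"))  # ['photos/2017/']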

Looking Ahead

Over the past year we added nearly 30,000 new B2 customers to the fold, and we are welcoming more each day as B2 continues to grow. We plan to expand our storage footprint by adding more data centers as we move towards a multi-region environment.

For those of you who are B2 customers – thank you for helping build B2. If you have an interesting way you are using B2, tell us in the comments below.

The post Backblaze B2, Cloud Storage on a Budget: One Year Later appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Scratch 2.0: all-new features for your Raspberry Pi

Post Syndicated from Rik Cross original https://www.raspberrypi.org/blog/scratch-2-raspberry-pi/

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

Scratch 2.0 GPIO blocks

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

Scratch 2.0 GPIO extension

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to GPIO pin 2 is now:

Flashing an LED with Scratch 2.0

To react to a button connected to GPIO pin 2, simply set the pin as input, and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say “Pressed” only when the button is being held down.

Responding to a button press in Scratch 2.0

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used – in this case, to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple sprite clone then follows its ‘when I start as a clone’ script.

Cloning sprites with Scratch 2.0

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, letting code be encapsulated and reused (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

Custom 'jump' block on Scratch 2.0

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

Custom shape-drawing block with Scratch 2.0

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Drawing shapes with Scratch 2.0

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks to allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities, and for some examples of projects that make use of this new functionality see Clap-O-Meter which uses the microphone to control a noise level meter, and a Keepie Uppies game that uses video motion to control a football. You can use the Raspberry Pi or USB cameras to detect motion in your Scratch 2.0 projects.

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

Update your Raspberry Pi for Scratch 2.0

Scratch 2.0 is available in the latest Raspbian release, under the ‘Programming’ menu. We’ve put together a guide for getting started with Scratch 2.0 on the Raspberry Pi online (note that GPIO functionality is only available via the desktop version). You can also try out Scratch 2.0 on the Pi by having a go at a project from the Code Club projects site.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

The post Scratch 2.0: all-new features for your Raspberry Pi appeared first on Raspberry Pi.

A Raspbian desktop update with some new programming tools

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/a-raspbian-desktop-update-with-some-new-programming-tools/

Today we’ve released another update to the Raspbian desktop. In addition to the usual small tweaks and bug fixes, the big new changes are the inclusion of an offline version of Scratch 2.0, and of Thonny (a user-friendly IDE for Python which is excellent for beginners). We’ll look at all the changes in this post, but let’s start with the biggest…

Scratch 2.0 for Raspbian

Scratch is one of the most popular pieces of software on Raspberry Pi. This is largely due to the way it makes programming accessible – while it is simple to learn, it covers many of the concepts that are used in more advanced languages. Scratch really does provide a great introduction to programming for all ages.

Raspbian ships with the original version of Scratch, which is now at version 1.4. A few years ago, though, the Scratch team at the MIT Media Lab introduced the new and improved Scratch version 2.0, and ever since we’ve had numerous requests to offer it on the Pi.

There was, however, a problem with this. The original version of Scratch was written in a language called Squeak, which could run on the Pi in a Squeak interpreter. Scratch 2.0, however, was written in Flash, and was designed to run from a remote site in a web browser. While this made Scratch 2.0 a cross-platform application, which you could run without installing any Scratch software, it also meant that you had to be able to run Flash on your computer, and that you needed to be connected to the internet to program in Scratch.

We worked with Adobe to include the Pepper Flash plugin in Raspbian, which enables Flash sites to run in the Chromium browser. This addressed the first of these problems, so the Scratch 2.0 website has been available on Pi for a while. However, it still needed an internet connection to run, which wasn’t ideal in many circumstances. We’ve been working with the Scratch team to get an offline version of Scratch 2.0 running on Pi.

Screenshot of Scratch on Raspbian

The Scratch team had created a website to enable developers to create hardware and software extensions for Scratch 2.0; this provided a version of the Flash code for the Scratch editor which could be modified to run locally rather than over the internet. We combined this with a program called Electron, which effectively wraps up a local web page into a standalone application. We ended up with the Scratch 2.0 application that you can find in the Programming section of the main menu.

Physical computing with Scratch 2.0

We didn’t stop there though. We know that people want to use Scratch for physical computing, and it has always been a bit awkward to access GPIO pins from Scratch. In our Scratch 2.0 application, therefore, there is a custom extension which allows the user to control the Pi’s GPIO pins without difficulty. Simply click on ‘More Blocks’, choose ‘Add an Extension’, and select ‘Pi GPIO’. This loads two new blocks, one to read and one to write the state of a GPIO pin.

Screenshot of new Raspbian iteration of Scratch 2, featuring GPIO pin control blocks.

The Scratch team kindly allowed us to include all the sprites, backdrops, and sounds from the online version of Scratch 2.0. You can also use the Raspberry Pi Camera Module to create new sprites and backgrounds.

This first release works well, although it can be slow for some operations; this is largely unavoidable for Flash code running under Electron. Bear in mind that you will need to have the Pepper Flash plugin installed (which it is by default on standard Raspbian images). As Pepper Flash is only compatible with the processor in the Pi 2 and Pi 3, it is unfortunately not possible to run Scratch 2.0 on the Pi Zero or the original models of the Pi.

We hope that this makes Scratch 2.0 a more practical proposition for many users than it has been to date. Do let us know if you hit any problems, though!

Thonny: a more user-friendly IDE for Python

One of the paths from Scratch to ‘real’ programming is through Python. We know that the transition can be awkward, and this isn’t helped by the tools available for learning Python. It’s fair to say that IDLE, the Python IDE, isn’t the most popular piece of software ever written…

Earlier this year, we reviewed every Python IDE that we could find that would run on a Raspberry Pi, in an attempt to see if there was something better out there than IDLE. We wanted to find something that was easier for beginners to use but still useful for experienced Python programmers. We found one program, Thonny, which stood head and shoulders above all the rest. It’s a really user-friendly IDE, which still offers useful professional features like single-stepping of code and inspection of variables.

Screenshot of Thonny IDE in Raspbian

Thonny was created at the University of Tartu in Estonia; we’ve been working with Aivar Annamaa, the lead developer, on getting it into Raspbian. The original version of Thonny works well on the Pi, but because the GUI is written using Python’s default GUI toolkit, Tkinter, the appearance clashes with the rest of the Raspbian desktop, most of which is written using the GTK toolkit. We made some changes to bring things like fonts and graphics into line with the appearance of our other apps, and Aivar very kindly took that work and converted it into a theme package that could be applied to Thonny.

Due to the limitations of working within Tkinter, the result isn’t exactly like a native GTK application, but it’s pretty close. It’s probably good enough for anyone who isn’t a picky UI obsessive like me, anyway! Have a look at the Thonny webpage to see some more details of all the cool features it offers. We hope that having a more usable environment will help to ease the transition from graphical languages like Scratch into ‘proper’ languages like Python.

New icons

Other than these two new packages, this release is mostly bug fixes and small version bumps. One thing you might notice, though, is that we’ve made some tweaks to our custom icon set. We wondered if the icons might look better with slightly thinner outlines. We tried it, and they did: we hope you prefer them too.

Downloading the new image

You can either download a new image from the Downloads page, or you can use apt to update:

sudo apt-get update
sudo apt-get dist-upgrade

To install Scratch 2.0:

sudo apt-get install scratch2

To install Thonny:

sudo apt-get install python3-thonny

One more thing…

Before Christmas, we released an experimental version of the desktop running on Debian for x86-based computers. We were slightly taken aback by how popular it turned out to be! It made us realise that this was something we were going to need to support going forward, so we’ve decided to try to make all new desktop releases for both Pi and x86 from now on.

The version of this we released last year was a live image that could run from a USB stick. Many people asked if we could make it permanently installable, so this version includes an installer. This uses the standard Debian install process, so it ought to work on most machines. I should stress, though, that we haven’t been able to test on every type of hardware, so there may be issues on some computers. Please be sure to back up your hard drive before installing it. Unlike the live image, this will erase and reformat your hard drive, and you will lose anything that is already on it!

You can still boot the image as a live image if you don’t want to install it, and it will create a persistence partition on the USB stick so you can save data. Just select ‘Run with persistence’ from the boot menu. To install, choose either ‘Install’ or ‘Graphical install’ from the same menu. The Debian installer will then walk you through the install process.

You can download the latest x86 image (which includes both Scratch 2.0 and Thonny) from here or here for a torrent file.

One final thing

This version of the desktop is based on Debian Jessie. Some of you will be aware that a new stable version of Debian (called Stretch) was released last week. Rest assured – we have been working on porting everything across to Stretch for some time now, and we will have a Stretch release ready some time over the summer.

The post A Raspbian desktop update with some new programming tools appeared first on Raspberry Pi.

Sync vs. Backup vs. Storage

Post Syndicated from Yev original https://www.backblaze.com/blog/sync-vs-backup-vs-storage/

Cloud Sync vs. Cloud Backup vs. Cloud Storage

Google recently announced Backup and Sync, a new feature for Google Drive which allows users to select folders on their computer that they want to back up to their Google Drive account (note: these files count against your Google Drive storage limit). Whenever new backup services are announced we get a lot of questions, so I thought we should take a minute to review the differences in cloud-based services.

What is the Cloud? Sync Vs Backup Vs Storage

There is still a lot of confusion in the space about what exactly the “cloud” is and how different services interact with it. When folks use a syncing and sharing service like Dropbox, Box, Google Drive, OneDrive or any of the others, they often assume those are acting as a cloud backup solution as well. Adding to the confusion, cloud storage services are often the backend for backup and sync services as well as standalone services. To help sort this out, we’ll define some of the terms below as they apply to a traditional computer set-up with a bunch of apps and data.

Cloud Sync (ex. Dropbox, iCloud Drive, OneDrive, Box, Google Drive) – these services sync folders on your computer to folders on other machines or to the cloud, allowing users to work from a folder or directory across devices. Typically these services have tiered pricing, meaning you pay for the amount of data you store with the service. Some of these services even have a rollback feature for recovering from data loss, though of course only files in the synced folders are available to be recovered.

Cloud Backup (ex. Backblaze Cloud Backup, Mozy, Carbonite) – these services work in the background automatically. The user does not need to take any action like setting up specific folders. Backup services typically back up any new or changed data on your computer to another location. Before the cloud took off, that location was primarily a CD or an external hard drive – but as cloud storage became more readily available it became the most popular storage medium. Typically these services have fixed pricing, and if there is a system crash or data loss, all backed up data is available for restore. In addition, these services have rollback features in case there is data loss / accidental file deletion.

Cloud Storage (ex. Backblaze B2, Amazon S3, Microsoft Azure) – these services are where many online backup and syncing and sharing services store data. Cloud storage providers typically serve as the endpoint for data storage. These services typically provide APIs, CLIs, and access points for individuals and developers to tie into their cloud storage offerings directly. These services are priced “per GB”, meaning you pay for the amount of storage that you use. Since these services are designed for high availability and durability, data can live solely on them – though we still recommend having multiple copies of your data, just in case.

What Should You Use?

Backblaze strongly believes in a 3-2-1 Backup Strategy. A 3-2-1 strategy means having at least 3 total copies of your data, 2 of which are local but on different mediums (e.g. an external hard drive in addition to your computer’s local drive), and at least 1 copy offsite. The best setup is data on your computer, a copy on a hard drive that lives somewhere not inside your computer, and another copy with a cloud backup provider. Backblaze Cloud Backup is a great complement to other services, like Time Machine, Dropbox, and even the free tiers of cloud storage services.

What is The Difference Between Cloud Sync and Backup?

Let’s take a look at some sync setups that we see fairly frequently.

Example 1) Users have one folder on their computer that is designated for Dropbox, Google Drive, OneDrive, or one of the other syncing/sharing services. Users save or place data into those directories when they want it to appear on other devices. Often these users are on the free tier of those syncing and sharing services and have only a few GB of data uploaded.

Example 2) Users are paying for extended storage for Dropbox, Google Drive, OneDrive, etc… and use those folders as the “Documents” folder – essentially working out of those directories. Files in that folder are available across devices, however, files outside of that folder (e.g. living on the computer’s desktop or anywhere else) are not synced or stored by the service.

What both examples are missing, however, is a backup of the photos, movies, videos, and the rest of the data on the computer. That’s where cloud backup providers excel: they automatically back up user data with little or no setup, and with no need for dragging and dropping files. Backblaze actually scans your hard drive to find all the data, regardless of where it might be hiding. The result is that all the user’s data is kept in the Backblaze cloud, and the portion of the data that is synced is also kept in that provider’s cloud – giving the user another layer of redundancy. Best of all, Backblaze will actually back up your Dropbox, iCloud Drive, Google Drive, and OneDrive folders.

Data Recovery

The most important feature to think about is how easy it is to get your data back from all of these services. With sync and share services, retrieving a lot of data, especially if you are in a high-data tier, can be cumbersome and take a while. Generally, the sync and share services only allow customers to download files over the Internet. If you are trying to download more than a couple of gigabytes of data, the process can take time and can be fraught with errors.

With cloud storage services, you can usually only retrieve data over the Internet as well, and you pay for both the storage and the egress of the data, so retrieving a large amount of data can be both expensive and time consuming.

Cloud backup services will enable you to download files over the internet too and can also suffer from long download times. At Backblaze we never want our customers to feel like we’re holding their data hostage, which is why we have a lot of restore options, including our Restore Return Refund policy, which allows people to restore their data via a USB Hard Drive, and then return that drive to us for a refund. Cloud sync providers do not provide this capability.

One popular data recovery use case we’ve seen when a person has a lot of data to restore is to download just the files that are needed immediately, and then order a USB Hard Drive restore for the remaining files that are not as time sensitive. The user gets all their files back in a few days, and their network is spared the download charges.

The bottom line is that all of these services have merit for different use-cases. Have questions about which is best for you? Sound off in the comments below!

The post Sync vs. Backup vs. Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.