Tag Archives: airport

The answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-1/

Before Easter, we asked you to tell us your questions for a live Q & A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. The variety of questions and comments you sent was wonderful, and while we couldn’t get to them all, we picked a handful of the most common to grill him on.

You can watch the video below — though due to this being the first pancake of our live Q&A videos, the sound is a bit iffy — or read Eben’s answers to the first five questions today. We’ll follow up with the rest in the next few weeks!

Live Q&A with Eben Upton, creator of the Raspberry Pi

Get your questions to us now using #AskRaspberryPi on Twitter

Any plans for 64-bit Raspbian?

Raspbian is effectively 32-bit Debian built for the ARMv6 instruction-set architecture supported by the ARM11 processor in the first-generation Raspberry Pi. So maybe the question should be: “Would we release a version of our operating environment that was built on top of 64-bit ARM Debian?”

And the answer is: “Not yet.”

When we released the Raspberry Pi 3 Model B+, we released an operating system image on the same day; the wonderful thing about that image is that it runs on every Raspberry Pi ever made. It even runs on the alpha boards from way back in 2011.

That deep backwards compatibility is really important for us, in large part because we don’t want to orphan our customers. If someone spent $35 on an older-model Raspberry Pi five or six years ago, they still spent $35, so it would be wrong for us to throw them under the bus.

So, if we were going to do a 64-bit version, we’d want to keep doing the 32-bit version, and then that would mean our efforts would be split across the two versions; and remember, we’re still a very small engineering team. Never say never, but it would be a big step for us.

For people wanting a 64-bit operating system, there are plenty of good third-party images out there, including SUSE Linux Enterprise Server.

Given that the 3B+ includes 5GHz wireless and Power over Ethernet (PoE) support, why would manufacturers continue to use the Compute Module?

It’s a form-factor thing.

Very large numbers of people are using the bigger product in an industrial context, and it’s well engineered for that: it has module certification, wireless on board, and now PoE support. But there are use cases that can’t accommodate this form factor. For example, NEC displays: we’ve had this great relationship with NEC for a couple of years now where a lot of their displays have a socket in the back that you can put a Compute Module into. That wouldn’t work with the 3B+ form factor.

Back of an NEC display with a Raspberry Pi Compute Module slotted in.

An NEC display with a Raspberry Pi Compute Module

What are some industrial uses/products Raspberry is used with?

The NEC displays are a good example of the broader trend of using Raspberry Pi in digital signage.

A Raspberry Pi running the wait time signage at The Wizarding World of Harry Potter, Universal Studios.
Image c/o thelonelyredditor1

If you see a monitor at a station, or an airport, or a recording studio, and you look behind it, it’s amazing how often you’ll find a Raspberry Pi sitting there. The original Raspberry Pi was particularly strong for multimedia use cases, so we saw uptake in signage very early on.

An array of many Raspberry Pis

Los Alamos Raspberry Pi supercomputer

Another great example is the Los Alamos National Laboratory building supercomputers out of Raspberry Pis. Many high-end supercomputers now are built using white-box hardware — just regular PCs connected together using some networking fabric — and a collection of Raspberry Pi units can serve as a scale model of that. The Raspberry Pi has less processing power, less memory, and less networking bandwidth than the PC, but it has a balanced amount of each. So if you don’t want to let your apprentice supercomputer engineers loose on your expensive supercomputer, a cluster of Raspberry Pis is a good alternative.

Why is there no power button on the Raspberry Pi?

“Once you start, where do you stop?” is a question we ask ourselves a lot.

There are a whole bunch of useful things that we haven’t included in the Raspberry Pi by default. We don’t have a power button, we don’t have a real-time clock, and we don’t have an analogue-to-digital converter — those are probably the three most common requests. And the issue with them is that they each cost a bit of money, they’re each only useful to a minority of users, and even that minority often can’t agree on exactly what they want. Some people would like a power button that is literally a physical analogue switch between the 5V input and the rest of the board, while others would like something a bit more like a PC power button, which is partway between a physical switch and a ‘shutdown’ button. There’s no consensus about what sort of power button we should add.

So the answer is: accessories. By leaving a feature off the board, we’re not taxing the majority of people who don’t want the feature. And of course, we create an opportunity for other companies in the ecosystem to create and sell accessories to those people who do want them.

Adafruit Push-button Power Switch Breakout Raspberry Pi

The Adafruit Push-button Power Switch Breakout is one of many accessories that fill in the gaps for makers.

We have a neat way of figuring out which features to include by default: we divide a component’s cost by the fraction of people who would use it. If you have a 20-cent component that’s going to be used by a fifth of people, we treat it as if it were a $1 component. And it has to fight its way against the $1 components that will be used by almost everybody.
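In code form, that rule is just a division; here is a toy sketch using the numbers from the example above (the helper name is hypothetical):

def effective_cost(component_cost, fraction_of_users):
    # A $0.20 part used by 1 in 5 people competes as if it cost $1.00.
    return component_cost / fraction_of_users

assert effective_cost(0.20, 1/5) == 1.00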

Do you think that Raspberry Pi is the future of the Internet of Things?

Absolutely, Raspberry Pi is the future of the Internet of Things!

In practice, most of the viable early IoT use cases are in the commercial and industrial spaces rather than the consumer space. Maybe in ten years’ time, IoT will be about putting 10-cent chips into light switches, but right now there’s so much money to be saved by putting automation into factories that you don’t need 10-cent components to address the market. Last year, roughly 2 million $35 Raspberry Pi units went into commercial and industrial applications, and many of those are what you’d call IoT applications.

So I think we’re the future of a particular slice of IoT. And we have ten years to get our price point down to 10 cents 🙂

The post The answers to your questions for Eben Upton appeared first on Raspberry Pi.

Amazon S3 Update: New Storage Class and General Availability of S3 Select

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-new-storage-class-general-availability-of-s3-select/

I’ve got two big pieces of news for anyone who stores and retrieves data in Amazon Simple Storage Service (S3):

New S3 One Zone-IA Storage Class – This new storage class is 20% less expensive than the existing Standard-IA storage class. It is designed for storing data that does not need the extra level of protection provided by geographic redundancy.

General Availability of S3 Select – This unique retrieval option lets you retrieve subsets of data from S3 objects using simple SQL expressions, with the possibility of a 400% performance improvement in the process.

Let’s take a look at both!

S3 One Zone-IA (Infrequent Access) Storage Class
This new storage class stores data in a single AWS Availability Zone and is designed to provide eleven 9’s (99.999999999%) of data durability, just like the other S3 storage classes. Unlike those other classes, it is not designed to be resilient to the physical loss of an AZ due to a major event such as an earthquake or a flood, and data could be lost in the unlikely event that an AZ is destroyed. S3 One Zone-IA storage gives you a lower-cost option for secondary backups of on-premises data and for data that can be easily re-created. You can also use it as the target of S3 Cross-Region Replication from another AWS region.

You can specify the use of S3 One Zone-IA storage when you upload a new object to S3.
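A minimal boto3 sketch of such an upload (the bucket and key names here are hypothetical):

import boto3

s3 = boto3.client('s3')

# Upload an object directly into the S3 One Zone-IA storage class.
s3.put_object(
    Bucket='example-bucket',
    Key='backups/data.csv',
    Body=open('data.csv', 'rb'),
    StorageClass='ONEZONE_IA',
)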

You can also make use of it as part of an S3 lifecycle rule. For example, you can set up a rule that moves previous versions of an object to S3 One Zone-IA after 30 or more days.
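A minimal boto3 sketch of such a rule (the rule ID is hypothetical, and s3 is the client created in the earlier sketch):

# Move previous (noncurrent) versions to S3 One Zone-IA after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket='example-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'previous-versions-to-onezone-ia',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            'NoncurrentVersionTransitions': [
                {'NoncurrentDays': 30, 'StorageClass': 'ONEZONE_IA'},
            ],
        }],
    },
)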

And you can modify the storage class of an existing object.
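One way to do that is to copy the object over itself with a new storage class; a boto3 sketch, reusing the client and the hypothetical names from above:

# Change the storage class of an existing object by copying it in place.
s3.copy_object(
    Bucket='example-bucket',
    Key='backups/data.csv',
    CopySource={'Bucket': 'example-bucket', 'Key': 'backups/data.csv'},
    StorageClass='ONEZONE_IA',
)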

You can also manage storage classes using the S3 API, CLI, and CloudFormation templates.

The S3 One Zone-IA storage class can be used in all public AWS regions. As I noted earlier, pricing is 20% lower than for the S3 Standard-IA storage class (see the S3 Pricing page for more info). There’s a 30-day minimum retention period and a 128 KB minimum object size.

General Availability of S3 Select
Randall wrote a detailed introduction to S3 Select last year and showed you how you can use it to retrieve selected data from within S3 objects. During the preview we added support for server-side encryption and the ability to run queries from the S3 Console.

I used a CSV file of airport codes to exercise the new console functionality:

This file contains listings for over 9100 airports, so it makes for useful test data but it definitely does not test the limits of S3 Select in any way. I select the file, open the More menu, and choose Select from:

The console sets the file format and compression according to the file name and the encryption status. I set the delimiter and click Show file preview to verify that my settings are correct. Then I click Next to proceed:

I type SQL expressions in the SQL editor and click Run SQL to issue the query.
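For illustration, here are two expressions of the kind the editor accepts (the second reuses the "Country (Name)" column that also appears in the SDK example below; other column names depend on the file's header row):

SELECT * FROM s3object s LIMIT 5

SELECT * FROM s3object s WHERE s."Country (Name)" LIKE '%United States%'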

I can also issue queries from the AWS SDKs. I initiate the select operation:

import boto3

s3 = boto3.client('s3', region_name='us-west-2')

r = s3.select_object_content(
        Bucket='jbarr-us-west-2',
        Key='sample-data/airportCodes.csv',
        ExpressionType='SQL',
        Expression="select * from s3object s where s.\"Country (Name)\" like '%United States%'",
        InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}},
        OutputSerialization = {'CSV': {}},
)

And then I process the stream of results:

for event in r['Payload']:
    if 'Records' in event:
        records = event['Records']['Payload'].decode('utf-8')
        print(records)
    elif 'Stats' in event:
        statsDetails = event['Stats']['Details']
        print("Stats details bytesScanned: ")
        print(statsDetails['BytesScanned'])
        print("Stats details bytesProcessed: ")
        print(statsDetails['BytesProcessed'])

S3 Select is available in all public regions and you can start using it today. Pricing is based on the amount of data scanned and the amount of data returned.

Jeff;

WannaCry after one year

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/03/wannacry-after-one-year.html

In the news, Boeing (an aircraft maker) has been “targeted by a WannaCry virus attack”. Phrased this way, it’s implausible. There are no new attacks targeting people with WannaCry. There is either no WannaCry, or it’s simply a continuation of the attack from a year ago.


It’s possible what happened is that an anti-virus product called a new virus “WannaCry”. Virus families are often related, and sometimes a distant relative gets called the same thing. I know this from watching the way various anti-virus products label my own software, which isn’t a virus, but which virus writers often include with their own stuff. The Lazarus group, which is believed to be responsible for WannaCry, has whole virus families like this. Thus, just because an AV product claims you are infected with WannaCry doesn’t mean it’s the same thing that everyone else is calling WannaCry.

Famously, WannaCry was the first virus/ransomware/worm that used the NSA ETERNALBLUE exploit. Other viruses have since added the exploit, and of course, hackers use it when attacking systems. It may be that a network intrusion detection system detected ETERNALBLUE, which people then assumed was due to WannaCry. It may actually have been an nPetya infection instead (nPetya was the second major virus/worm/ransomware to use the exploit).

Or it could be the real WannaCry, but it’s probably not a new “attack” that “targets” Boeing. Instead, it’s likely a continuation from WannaCry’s first appearance. WannaCry is a worm, which means it has spread automatically since it was launched, for years, without anybody in control. Infected machines still exist, unnoticed by their owners, attacking random machines on the Internet. If you plug an unpatched computer into the raw Internet, without the benefit of a firewall, it’ll get infected within an hour.

However, the Boeing manufacturing systems that were infected were not on the Internet, so what happened? The narrative from the news stories implies some nefarious hacker activity that “targeted” Boeing, but that’s unlikely.

We now have over 15 years of experience with network worms getting into strange places disconnected and even “air gapped” from the Internet. The most common reason is laptops. Somebody takes their laptop to some place like an airport WiFi network, and gets infected. They put their laptop to sleep, then wake it again when they reach their destination, and plug it into the manufacturing network. At this point, the virus spreads and infects everything. This is especially the case with maintenance/support engineers, who often have specialized software they use to control manufacturing machines, for which they have a reason to connect to the local network even if it doesn’t have useful access to the Internet. A single engineer may act as a sort of Typhoid Mary, going from customer to customer, infecting each in turn whenever they open their laptop.

Another cause for infection is virtual machines. A common practice is to take “snapshots” of live machines and save them to backups. Should the virtual machine crash, instead of rebooting it, it’s simply restored from the backed up running image. If that backup image is infected, then bringing it out of sleep will allow the worm to start spreading.

Jake Williams claims he’s seen three other manufacturing networks infected with WannaCry. Why does manufacturing seem more susceptible? The reason appears to be the “killswitch” that stops WannaCry from running elsewhere. The killswitch uses a DNS lookup, stopping itself if it can resolve a certain domain. Manufacturing networks are often disconnected enough from the Internet that such DNS lookups don’t work, so the domain can’t be found, and the killswitch doesn’t work. Thus, manufacturing systems are no more likely to get infected, but the lack of a killswitch means the virus will continue to run, attacking more systems instead of immediately killing itself.

One solution to this would be to set up sinkhole DNS servers on the network that resolve all unknown DNS queries to a single server that logs all requests. This is trivial to set up with most DNS servers. The logs will quickly identify problems on the network, as well as any hacker or virus activity. The side effect is that this killswitch would then kill WannaCry. WannaCry isn’t sufficient reason to set up sinkhole servers, of course, but it’s something I’ve found generally useful in the past.
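As a sketch of what that might look like with dnsmasq (one DNS server among many that can do this; the sinkhole address and log path here are hypothetical):

# /etc/dnsmasq.conf -- sinkhole sketch
# Answer every domain, known or not, with a single internal logging host.
address=/#/10.0.0.53
# Log each lookup so infections and probes show up quickly.
log-queries
log-facility=/var/log/dnsmasq-sinkhole.log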

Conclusion

Something obviously happened to the Boeing plant, but the narrative is all wrong. Words like “targeted attack” imply things that likely didn’t happen. Facts are so loose in cybersecurity that it may not have even been WannaCry.

The real story is that the original WannaCry is still out there, still trying to spread. Simply put a computer on the raw Internet (without a firewall) and you’ll get attacked. That, somehow, isn’t news. Instead, what’s news is whenever that continued infection hits somewhere famous, like Boeing, even though (as Boeing claims) it had no important effect.

On the Security of Walls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/on_the_security.html

Interesting history of the security of walls:

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even “defense in depth” security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification — the art and science of protecting a place by imposing a barrier between you and an enemy — is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: “We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures.” The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There’s been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian’s Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.

Lots more at the link.

Server vs Endpoint Backup — Which is Best?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-for-distributed-computing/

server and computer backup to the cloud

How common are these statements in your organization?

  • “I know I saved that file. The application must have put it somewhere outside of my documents folder.” — Mike in Marketing
  • “I was on the road and couldn’t get a reliable VPN connection. I guess that’s why my laptop wasn’t backed up.” — Sally in Sales
  • “I try to follow file policies, but I had a deadline this week and didn’t have time to copy my files to the server.” — Felicia in Finance
  • “I just did a commit of my code changes and that was when the coffee mug was knocked over onto the laptop.” — Erin in Engineering
  • “If you need a file restored from backup, contact the help desk at [email protected]. The IT department will get back to you.” — XYZ corporate intranet
  • “Why don’t employees save files on the network drive like they’re supposed to?” — Isaac in IT

If these statements are familiar, most likely you rely on file server backups to safeguard your valuable endpoint data.

The problem is, the workplace has changed. Where server backups might have fit how offices worked at one time in the past, relying solely on server backups today means you could be missing valuable endpoint data from your backups. On top of that, you likely are unnecessarily expending valuable user and IT time in attempting to secure and restore endpoint data.

Times Have Changed, and so have Effective Enterprise Backup Strategies

The ways we use computers and handle files today are vastly different from just five or ten years ago. Employees are mobile, and we no longer are limited to monolithic PC and Mac-based office suites. Cloud applications are everywhere. Company-mandated network drive policies are difficult to enforce as office practices change, devices proliferate, and organizational culture evolves. Besides, your IT staff has other things to do than babysit your employees to make sure they follow your organization’s policies for managing files.

Server Backup has its Place, but Does it Support How People Work Today?

Many organizations still rely on server backup. If your organization works primarily in centralized offices with all endpoints — likely desktops — connected directly to your network, and you maintain tight control of how employees manage their files, it still might work for you.

Your IT department probably has set network drive policies that require employees to save files in standard places that are regularly backed up to your file server. Turns out, though, that even standard applications don’t always save files where IT would like them to be. They could be in a directory or folder that’s not regularly backed up.

As employees have become more mobile, they have adopted practices that enable them to access files from different places, but these practices might not fit in with your organization’s server policies. An employee saving a file to Dropbox might be planning to copy it to an “official” location later, but whether that ever happens could be doubtful. Often people don’t realize until it’s too late that accidentally deleting a file in one sync service directory means that all copies in all locations — even the cloud — are also deleted.

Employees are under increasing demands to produce, which means that network drive policies aren’t always followed; time constraints and deadlines can cause best practices to go out the window. Users will attempt to comply with policies as best they can — and you might get 70% or even 75% effective compliance — but getting even to that level requires training, monitoring, and repeatedly reminding employees of policies they need to follow — none of which leads to a good work environment.

Even if you get to 75% compliance with network file policies, what happens if the critical file needed to close out an end-of-year financial summary isn’t one of the files backed up? Getting an endpoint from 70% to 80% or 90% of its files effectively backed up could require multiple hours from your IT department, and you still might not have backed up the one critical file you need later.

Your Organization Operates on its Data — And Today That Data Exists in Multiple Locations

Users are no longer tied to one endpoint, and may use different computers in the office, at home, or traveling. The greater the number of endpoints used, the greater the chance of an accidental or malicious device loss or data corruption. The loss of the Sales VP’s laptop at the airport on her way back from meeting with major customers can affect an entire organization and require weeks to resolve.

Even with the best intentions and efforts, following policies when out of the office can be difficult or impossible. Connecting to your private network when remote most likely requires a VPN, and VPN connectivity can be challenging from the lobby Wi-Fi at the Radisson. Server restores require time from the IT staff, which can mean taking resources away from other IT priorities and a growing backlog of requests from users who need their files as soon as possible. When users are dependent on IT to get back files critical to their work, employee productivity and often deadlines are affected.

Managing Finite Server Storage Is an Ongoing Challenge

Network drive backup usually requires on-premises data storage for endpoint backups. Since it is a finite resource, allocating that storage is another burden on your IT staff. To make sure that storage isn’t exceeded, IT departments often ration storage by department and/or user — another oversight duty for IT, and even more choices required by your IT department and department heads who have to decide which files to prioritize for backing up.

Adding Backblaze Endpoint Backup Improves Business Continuity and Productivity

Having an endpoint backup strategy in place can mitigate these problems and improve user productivity, as well. A good endpoint backup service, such as Backblaze Cloud Backup, will ensure that all devices are backed up securely, automatically, without requiring any action by the user or by your IT department.

For 99% of users, no configuration is required for Backblaze Backup. Everything on the endpoint is encrypted and securely backed up to the cloud, including program configuration files and files outside of standard document folders. Even temp files are backed up, which can prove invaluable when recovering a file after a crash or other program interruption. Cloud storage is unlimited with Backblaze Backup, so there are no worries about running out of storage or rationing file backups.

The Backblaze client can be silently and remotely installed to both Macintosh and Windows clients with no user interaction. And, with Backblaze Groups, your IT staff has complete visibility into when files were last backed up. IT staff can recover any backed up file, folder, or entire computer from the admin panel, and even give file restore capability to the user, if desired, which reduces dependency on IT and time spent waiting for restores.

With over 500 petabytes of customer data stored and one million files restored every hour of every day by Backblaze customers, you know that Backblaze Backup works for its users.

You Need Data Security That Matches the Way People Work Today

Both file server and endpoint backup have their places in an organization’s data security plan, but their use and value differ. If you already are using file server backup, adding endpoint backup will make a valuable contribution to your organization by reducing workload, improving productivity, and increasing confidence that all critical files are backed up.

By guaranteeing fast and automatic backup of all endpoint data, and matching the current way organizations and people work with data, Backblaze Backup will enable you to effectively and affordably meet the data security demands of your organization.

The post Server vs Endpoint Backup — Which is Best? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backing Up the Modern Enterprise with Backblaze for Business

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-solutions/

Endpoint backup diagram

Organizations of all types and sizes need reliable and secure backup. Whether they have as few as 3 or as many as 300,000 computer users, an organization’s computer data is a valuable business asset that needs to be protected.

Modern organizations are changing how they work and where they work, which brings new challenges to making sure that the company’s data assets are not only available, but secure. Larger organizations have IT departments that are prepared to address these needs, but oftentimes in smaller and newer organizations the challenge falls upon office managers who might not be as prepared or knowledgeable to face a work environment undergoing dramatic changes.

Whether small or large, local or world-wide, for-profit or non-profit, organizations need a backup strategy and solution that matches the new ways of working in the enterprise.

The Enterprise Has Changed, and So Has Data Use

More and more, organizations are working in the cloud. These days organizations can operate just fine without their own file servers, database servers, mail servers, or other IT infrastructure that used to be standard for all but the smallest organization.

The reality is that for most organizations, though, it’s a hybrid work environment, with a combination of cloud-based and PC and Macintosh-based applications. Legacy apps aren’t going away any time soon. They will be with us for a while, with their accompanying data scattered amongst all the desktops, laptops and other endpoints in corporate headquarters, home offices, hotel rooms, and airport waiting areas.

In addition, the modern workforce likely combines regular full-time employees, remote workers, contractors, and sometimes interns, volunteers, and other temporary workers who also use company IT assets.

The Modern Enterprise Brings New Challenges for IT

These changes in how enterprises work present a problem for anyone tasked with making sure that data — no matter who uses it or where it lives — is adequately backed-up. Cloud-based applications, when properly used and managed, can be adequately backed up, provided that users are connected to the internet and data transfers occur regularly — which is not always the case. But what about the data on the laptops, desktops, and devices used by remote employees, contractors, or just employees whose work keeps them on the road?

The organization’s backup solution must address all the needs of the modern organization or enterprise using both cloud and PC and Mac-based applications, and not be constrained by employee or computer location.

A Ten-Point Checklist for the Modern Enterprise for Backing Up

What should the modern enterprise look for when evaluating a backup solution?

1) Easy to deploy to workers’ computers

Whether installed by the computer user or an IT person locally or remotely, the backup solution must be easy to implement quickly with minimal demands on the user or administrator.

2) Fast and unobtrusive client software

Backups should happen in the background by efficient (native) PC and Macintosh software clients that don’t consume valuable processing power or take memory away from applications the user needs.

3) Easy to configure

The backup solutions must be easy to configure for both the user and the IT professional. Ease-of-use means less time to deploy, configure, and manage.

4) Defaults to backing up all valuable data

By default, the solution backs up commonly used files and folders or directories, including desktops. Some backup solutions are difficult and intimidating because they require the user to choose what needs to be backed up, often missing files and folders/directories that contain valuable data.

5) Works automatically in the background

Backups should happen automatically, no matter where the computer is located. The computer user, especially the remote or mobile one, shouldn’t be required to attach cables or drives, or remember to initiate backups. A working solution backs up automatically without requiring action by the user or IT administrator.

6) Data restores are fast and easy

Whether it’s a single file, directory, or an entire system that must be restored, a user or IT sysadmin needs to be able to restore backed up data as quickly as possible. In cases of large restores to remote locations, the ability to send a restore via physical media is a must.

7) No limitations on data

Throttling, caps, and data limits complicate backups and require guesses about how much storage space will be needed.

8) Safe & Secure

Organizations require that their data is secure during all phases of initial upload, storage, and restore.

9) Easy-to-manage

The backup solution needs to provide a clear and simple web management interface for all functions. Designing for ease-of-use leads to efficiency in management and operation.

10) Affordable and transparent pricing

Backup costs should be predictable, understandable, and without surprises.

Two Scenarios for the Modern Enterprise

Enterprises exist in many forms and types, but wanting to meet the above requirements is common across all of them. Below, we take a look at two common scenarios showing how enterprises face these challenges. Three case studies are available that provide more information about how Backblaze customers have succeeded in these environments.

Enterprise Profile 1

The needs of a smaller enterprise differ from those of larger, established organizations. This organization likely doesn’t have anyone who is devoted full-time to IT. The job of on-boarding new employees and getting them set up with a computer likely falls upon an executive assistant or office manager. This person might give new employees a checklist with the software and account information and let users handle setting up the computer themselves.

Organizations in this profile need solutions that are easy to install and require little to no configuration. Backblaze, by default, backs up all user data, which lets the organization be secure in knowing all the data will be backed up to the cloud — including files left on the desktop. Combined with Backblaze’s unlimited data policy, organizations have a truly “set it and forget it” platform.

Customizing Groups To Meet Teams’ Needs

The Groups feature of Backblaze for Business allows an organization to decide whether an individual client’s computer will be Unmanaged (backups and restores under the control of the worker), or Managed, in which an administrator can monitor the status and frequency of backups and handle restores should they become necessary. One group for the entire organization might be adequate at this stage, but the organization has the option to add additional groups as it grows and needs more flexibility and control.

The organization, of course, has the choice of managing and monitoring users using Groups. With Backblaze’s Groups, organizations can set user-based access rules, which allows the administrator to create restores for lost files or entire computers on an employee’s behalf, to centralize billing for all client computers in the organization, and to redeploy a recovered computer or new computer with the backed up data.

Restores

In this scenario, the decision has been made to let each user manage her own backups, including restores, if necessary, of individual files or entire systems. If a restore of a file or system is needed, the restore process is easy enough for the user to handle it by herself.

Case Study 1

Read about how PagerDuty uses Backblaze for Business in a mixed enterprise of cloud and desktop/laptop applications.

PagerDuty Case Study

In a common approach, the employee can retrieve an accidentally deleted file or an earlier version of a document on her own. The Backblaze for Business interface is easy to navigate and was designed with feedback from thousands of customers over the course of a decade.

In the event of a lost, damaged, or stolen laptop, administrators of Managed Groups can initiate the restore, which could be in the form of a download of a restore ZIP file from the web management console, or the overnight shipment of a USB drive directly to the organization or user.

Enterprise Profile 2

This profile is for an organization with a full-time IT staff. When a new worker joins the team, the IT staff is tasked with configuring the computer and delivering it to the new employee.

Backblaze for Business Groups

Case Study 2

Global charitable organization charity: water uses Backblaze for Business to back up workers’ and volunteers’ laptops as they travel to developing countries in their efforts to provide clean and safe drinking water.

charity: water Case Study

This organization can take advantage of additional capabilities in Groups. A Managed Group makes sense in an organization with a geographically dispersed work force as it lets IT ensure that workers’ data is being regularly backed up no matter where they are. Billing can be company-wide or assigned to individual departments or geographical locations. The organization has the choice of how to divide the organization into Groups (location, function, subsidiary, etc.) and whether the Group should be Managed or Unmanaged. Using Managed Groups might be suitable for most of the organization, but there are exceptions in which sensitive data might dictate using an Unmanaged Group, such as could be the case with HR, the executive team, or finance.

Deployment

By Invitation Email, Link, or Domain

Backblaze for Business allows a number of options for deploying the client software to workers’ computers. Client installation is fast and easy on both Windows and Macintosh, so sending email invitations to users, or automatically enrolling users by domain or invitation link, is a common approach.

By Remote Deployment

IT might choose to remotely and silently deploy Backblaze for Business across specific Groups or the entire organization. An administrator can silently deploy the Backblaze backup client via the command-line, or use common RMM (Remote Monitoring and Management) tools such as Jamf and Munki.

Restores

Case Study 3

Read about how Bright Bear Technology Solutions, an IT Managed Service Provider (MSP), uses the Groups feature of Backblaze for Business to manage customer backups and restores, deploy Backblaze licenses to their customers, and centralize billing for all their client-based backup services.

Bright Bear Case Study

Some organizations are better equipped to manage or assist workers when restores become necessary. Individual users will be pleased to discover they can roll back files to an earlier version if they wish, but IT will likely manage any complete system restore that involves reconfiguring a computer after a repair or requisitioning an entirely new system when needed.

This organization might choose to retain a client’s entire computer backup for archival purposes, using Backblaze B2 as the cloud storage solution. This is another advantage of having a cloud storage provider that combines both endpoint backup and cloud object storage among its services.

The Next Step: Server Backup & Data Archiving with B2 Cloud Storage

As organizations grow, they have increased needs for cloud storage beyond Macintosh and PC data backup. Backblaze’s object cloud storage, Backblaze B2, provides low-cost storage and archiving of records, media, and server data that can grow with the organization’s size and needs.

B2 Cloud Storage is available through the same Backblaze management console as Backblaze Computer Backup. This means that Admins have one console for billing, monitoring, deployment, and role provisioning. B2 is priced at 1/4 the cost of Amazon S3, or $0.005 per month per gigabyte (which equals $5/month per terabyte).

Why Modern Enterprises Chose Backblaze

Backblaze for Business

Businesses and organizations select Backblaze for Business for backup because Backblaze is designed to meet the needs of the modern enterprise. Backblaze customers are part of a platform that has a 10+ year track record of innovation and over 400 petabytes of customer data already under management.

Backblaze’s backup model is proven through head-to-head comparisons to back up data that other backup solutions overlook in their default configurations — including valuable files that are needed after an accidental deletion, theft, or computer failure.

Backblaze is the only enterprise-level backup company that provides TOTP (Time-based One-time Password) via both SMS and Authentication app to all accounts at no incremental charge. At just $50/year/computer, Backblaze is affordable for any size of enterprise.

Modern Enterprises can Meet The Challenge of The Changing Data Environment

With the right backup solution and strategy, the modern enterprise will be prepared to ensure that its data is protected from accident, disaster, or theft, whether that data lives in one office or is dispersed among many locations and across remote and mobile employees.

Backblaze for Business is an affordable solution that enables organizations to meet the evolving data demands facing the modern enterprise.

The post Backing Up the Modern Enterprise with Backblaze for Business appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Some notes on the KRACK attack

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/some-notes-on-krack-attack.html

This is my interpretation of the KRACK attacks paper that describes a way of decrypting encrypted WiFi traffic with an active attack.

tl;dr: Wow. Everyone needs to be afraid. (Well, worried — not panicked.) It means in practice, attackers can decrypt a lot of wifi traffic, with varying levels of difficulty depending on your precise network setup. My post last July about the DEF CON network being safe was in error.

Details

This is not a crypto bug but a protocol bug (a pretty obvious and trivial protocol bug).
When a client connects to the network, the access-point will at some point send random “key” data to use for encryption. Because this packet may be lost in transmission, it can be repeated many times.
What the hacker does is just repeatedly send this packet, potentially hours later. Each time it does so, it resets the “keystream” back to the starting conditions. The obvious patch that device vendors will make is to only accept the first such packet they receive and ignore all the duplicates.
At this point, the protocol bug becomes a crypto bug. We know how to break crypto when we have two keystreams from the same starting position. It’s not always reliable, but reliable enough that people need to be afraid.
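As a toy illustration of why two keystreams from the same starting position are so damaging (an idealized stream cipher, not the actual WPA2 ciphersuite): XORing two ciphertexts that share a keystream cancels the keystream entirely, leaving the XOR of the two plaintexts for the attacker to pick apart.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)   # stand-in for the cipher's keystream

p1 = b"GET /login?user=alice"
p2 = b"GET /login?user=bob  "

c1 = xor(p1, keystream)      # first transmission
c2 = xor(p2, keystream)      # keystream reset and reused

# The keystream cancels out; only plaintext differences remain.
assert xor(c1, c2) == xor(p1, p2)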
Android, though, is the biggest danger. Rather than simply replaying the packet, a packet with key data of all zeroes can be sent. This allows attackers to set up a fake WiFi access-point and man-in-the-middle all traffic.
In a related case, the access-point/base-station can sometimes also be attacked, affecting the stream sent to the client.
Not only is sniffing possible, but in some limited cases, injection. This allows the traditional attack of adding bad code to the end of HTML pages in order to trick users into installing a virus.

This is an active attack, not a passive attack, so in theory, it’s detectable.

Who is vulnerable?

Everyone, pretty much.
The hacker only needs to be within range of your WiFi. Your neighbor’s teenage kid is going to be downloading and running the tool in order to eavesdrop on your packets.
The hacker doesn’t need to be logged into your network.
It affects all WPA1/WPA2, the personal one with passwords that we use at home, and the enterprise version with certificates we use in enterprises.
It can’t defeat SSL/TLS or VPNs. Thus, if you feel your laptop is safe surfing the public WiFi at airports, then your laptop is still safe from this attack. With Android, it does allow running tools like sslstrip, which can fool many users.
Your home network is vulnerable. Many devices will be using SSL/TLS, so they are fine, like your Amazon Echo, which you can continue to use without worrying about this attack. Other devices, like your Philips lightbulbs, may not be so protected.

How can I defend myself?

Patch.
More to the point, measure your current vendors by how long it takes them to patch. Throw away gear by those vendors that took a long time to patch and replace it with vendors that took a short time.
High-end access-points that contain “WIPS” (WiFi Intrusion Prevention Systems) features should be able to detect this and block vulnerable clients from connecting to the network (once the vendor upgrades the systems, of course). Even low-end access-points, like the $30 ones you get for home, can easily be updated to prevent packet sequence numbers from going back to the start (i.e. from the keystream resetting back to the start).
At some point, you’ll need to run the attack against yourself, to make sure all your devices are secure. Since you’ll be constantly allowing random phones to connect to your network, you’ll need to check their vulnerability status before connecting them. You’ll need to continue doing this for several years.
Of course, if you are using SSL/TLS for everything, then your danger is mitigated. This is yet another reason why you should be using SSL/TLS for internal communications.
Most security vendors will add things to their products/services to defend you. While valuable in some cases, it’s not a defense. The defense is patching the devices you know about, and preventing vulnerable devices from attaching to your network.
If I remember correctly, DEF CON uses Aruba. Aruba contains WIPS functionality, which means by the time DEF CON rolls around again next year, they should have the feature to deny vulnerable devices from connecting, and specifically to detect an attack in progress and prevent further communication.
However, for an attacker near an Android device using a low-powered WiFi, it’s likely they will be able to conduct man-in-the-middle without any WIPS preventing them.

GPS Spoofing Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/gps_spoofing_at.html

Wired has a story about a possible GPS spoofing attack by Russia:

After trawling through AIS data from recent years, evidence of spoofing becomes clear. Goward says GPS data has placed ships at three different airports and there have been other interesting anomalies. “We would find very large oil tankers who could travel at the maximum speed at 15 knots,” says Goward, who was formerly director for Marine Transportation Systems at the US Coast Guard. “Their AIS, which is powered by GPS, would be saying they had sped up to 60 to 65 knots for an hour and then suddenly stopped. They had done that several times.”

All of the evidence from the Black Sea points towards a co-ordinated attempt to disrupt GPS. A recently published report from NRK found that 24 vessels appeared at Gelendzhik airport around the same time as the Atria. When contacted, a US Coast Guard representative refused to comment on the incident, saying any GPS disruption that warranted further investigation would be passed onto the Department of Defence.

“It looks like a sophisticated attack, by somebody who knew what they were doing and were just testing the system,” Bonenberg says. Humphreys told NRK it “strongly” looks like a spoofing incident. FireEye’s Brubaker agreed, saying the activity looked intentional. Goward is also confident that GPS signals were purposely disrupted. “What this case shows us is there are entities out there that are willing and eager to disrupt satellite navigation systems for whatever reason and they can do it over a fairly large area and in a sophisticated way,” he says. “They’re not just broadcasting a stronger signal and denying service. This is worse: they’re providing hazardously misleading information.”

Indian Movie Actor Mobbed By Press After Arrest of Torrent Site Admin

Post Syndicated from Andy original https://torrentfreak.com/indian-movie-actor-mobbed-by-press-after-airport-torrent-site-arrest-170913/

While most of the headlines relating to Internet piracy are focused on North America and Europe, there are dozens of countries where piracy is a way of life for millions of citizens. India, with its booming economy and growth in technology, is certainly one of them.

According to a recently published report, India now has 355 million Internet users out of a population of more than 1.3 billion. Not only is there massive room for growth, that figure is up from 277 million just two years ago. The rate of growth is astonishing.

Needless to say, Indians love their Internet and increasing numbers of citizens are also getting involved in the piracy game. There are many large sites and prominent release groups operating out of the country, some of them targeting the international market. Carry out a search for DVDSCR (DVD screener) on most search indexes globally and one is just as likely to find Indian movie releases as those emanating from the West.

If people didn’t know it already, India is nurturing a pirate force to be reckoned with, with local torrent and streaming sites pumping out the latest movies at an alarming rate. This has caused an outcry from many in the movie industry who are determined to do something to stem the tide.

One of these is actor Vishal Krishna, who not only stars in movies but is also a producer working in the Tamil film industry. Often referred to simply by his first name, Vishal has spoken out regularly against piracy in his role at the Tamil Film Producers Council.

In May, he referred to the operators of the hugely popular torrent site TamilRockers as ‘Internet Mafias’ while demanding their arrest for leaking the blockbuster Baahubali 2, a movie that pulled in US$120 million in six days. Now, it appears, he may have gotten his way. Well, partially, at least.

Last evening, reports began to surface of an arrest at Chennai airport in southern India. According to local media, Gauri Shankar, an alleged administrator of Tamilrockers.co, was detained by Triplicane police.

This would’ve been a huge coup for Vishal, who has been warning Tamilrockers to close down for the past three years. He even claimed to know the identity of the main perpetrator behind the site, noting that it was only a matter of time before he was brought to justice.

Soon after the initial reports, however, other media outlets claimed that Gauri Shankar is actually an operator at Tamilgun, another popular pirate portal currently blocked by ISPs on the orders of the Indian government.

So was it rockers or gun? According to Indiaglitz.com, Vishal rushed to the scene in Chennai to find out.

Outside the police station

What followed were quite extraordinary scenes outside the Triplicane police station. Emerging from the building flanked by close to 20 men, some in uniform, Vishal addressed an excited crowd of reporters. A swathe of microphones from various news outlets greeted him as he held up his hands urging the crowd to calm down.

“Just give us some time, I will give you the details,” Vishal said in two languages.

“Just give us some time. It is too early. I’ll just give it to you in a bit. It’s something connected to website piracy. Just give me some time. I have to give you all the details, proper details.”

So, even after all the excitement, it’s unclear who the police have in custody. Nevertheless, the attention this event is getting from the press is on a level rarely seen in a piracy case, so more news is bound to follow soon.

In the meantime, both TamilRockers and TamilGun remain online, operating as normal. Clearly, there is much more work to be done.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

State of MAC address randomization

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/state-of-mac-address-randomization.html

tldr: I went to DragonCon, a conference of 85,000 people, to sniff WiFi packets and test how many phones now use MAC address randomization. Almost all iPhones nowadays do, but it seems only a third of Android phones do.

Ten years ago at BlackHat, we presented the “data seepage” problem, how the broadcasts from your devices allow you to be tracked. Among the things we highlighted was how WiFi probes looking to connect to access-points expose the unique hardware address burned into the phone, the MAC address. This hardware address is unique to your phone, shared by no other device in the world. Evildoers, such as the NSA or GRU, could install passive listening devices in airports and train-stations around the world in order to track your movements. This could be done with $25 devices sprinkled around a few thousand places — within the budget of not only a police state, but also the average hacker.

In 2014, with the release of iOS 8, Apple addressed this problem by randomizing the MAC address. Every time you restart your phone, it picks a new, random, hardware address for connecting to WiFi. This causes a few problems: every time you restart your iOS devices, your home network sees a completely new device, which can fill up your router’s connection table. Since that table usually has at least 100 entries, this shouldn’t be a problem for your home, but corporations and other owners of big networks saw their connection tables suddenly get big with iOS 8.

In 2015, Google added the feature to Android as well. However, even though most Android phones today support this feature in theory, it’s usually not enabled.

Recently, I went to DragonCon in order to test out how well this works. DragonCon is a huge sci-fi/fantasy conference in Atlanta in August, second to San Diego’s ComicCon in popularity. It’s spread across several neighboring hotels in the downtown area. A lot of the traffic funnels through the Marriott Marquis hotel, which has a large open area where, from above, you can see thousands of people at a time.

And, with a laptop, see their broadcast packets.

So I went up on a higher floor and set up my laptop in order to capture “probe” broadcasts coming from phones, in order to record the hardware MAC addresses. I’ve done this in years past, before address randomization, in order to record the popularity of iPhones. The first three bytes of an old-style, non-randomized address identify the manufacturer. This time, I should see a lot fewer manufacturer IDs, and mostly just random addresses instead.
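A minimal sketch of that kind of capture with Scapy (the monitor-mode interface name is hypothetical, and the card is assumed to already be in monitor mode); a randomized address gives itself away via the locally-administered bit, the 0x02 bit of the first octet:

from collections import Counter
from scapy.all import sniff, Dot11ProbeReq

seen = set()
counts = Counter()

def on_probe(pkt):
    if not pkt.haslayer(Dot11ProbeReq):
        return
    mac = pkt.addr2
    if mac is None or mac in seen:
        return
    seen.add(mac)                    # count each source address once
    if int(mac[0:2], 16) & 0x02:     # locally-administered bit set
        counts['(random)'] += 1
    else:
        counts[mac[0:8]] += 1        # OUI prefix identifies the vendor

sniff(iface='wlan0mon', prn=on_probe, store=False)
print(counts.most_common(20))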

I recorded 9,095 unique probes over a couple of hours. I’m not sure exactly how long — my laptop would go to sleep occasionally because of lack of activity on the keyboard. I should probably set up a Raspberry Pi somewhere next year to get a more consistent result.

A quick summary of the results are:

The 9,000 devices were split almost evenly between Apple and Android. Almost all of the Apple devices randomized their addresses. About a third of the Android devices randomized. (This assumes Android only randomizes the final 3 bytes of the address, and that Apple randomizes all 6 bytes — my assumption may be wrong).

A table of the major results are below. A little explanation:

  • The first item in the table is the number of phones that randomized the full 6 bytes of the MAC address. I’m guessing these are either mostly or all Apple iOS devices. They are nearly half of the total, or 4498 out of 9095 unique probes.
  • The second number is those that randomized the final 3 bytes of the MAC address, but left the first three bytes identifying themselves as Android devices. I’m guessing this represents all the Android devices that randomize. My guesses may be wrong, maybe some Androids randomize the full 6 bytes, which would get them counted in the first number.
  • The following numbers are phones from major Android manufacturers like Motorola, LG, HTC, Huawei, OnePlus, ZTE. Remember: the first 3 bytes of an un-randomized address identify who made it. There are roughly 2500 of these devices.
  • There is a count for 309 Apple devices. These are either older iOS devices pre iOS 8, or which have turned off the feature (some corporations demand this), or which are actually MacBooks instead of phones.
  • The vendor of the access-points that Marriott uses is “Ruckus”. They have a lot of access-points in the hotel.
  • The “TCT mobile” entry is actually BlackBerry. Apparently, BlackBerry stopped making phones and instead just licenses the software/brand to other hardware makers. If you buy a BlackBerry from the phone store, it’s likely going to be a TCT phone instead.
  • I’m assuming the “Amazon” devices are Kindle e-readers.
  • Lastly, I’d like to point out the two records for “Ford”. I was capturing while walking out of the building; I think I got a few cars driving by.

(random)  4498
(Android)  1562
Samsung  646
Motorola  579
Murata  505
LG  412
Apple  309
HTC-phone  226
Huawei  66
Ruckus  60
OnePlus Tec  40
ZTE  23
TCT mobile  20
Amazon Tech  19
Nintendo  17
Intel  14
Microsoft  9
-hp-  8
BLU Product  8
Kyocera  8
AsusTek  6
Yulong Comp  6
Lite-On  4
Sony Mobile  4
Z-COM, INC.  4
ARRIS Group  2
AzureWave  2
Barnes&Nobl  2
Canon  2
Ford Motor  2
Foxconn  2
Google, Inc  2
Motorola (W  2
Sonos, Inc.  2
SparkLAN Co  2
Wi2Wi, Inc  2
Xiaomi Comm  2
Alps Electr  1
Askey  1
BlackBerry  1
Chi Mei Com  1
Clover Netw  1
CNet Techno  1
eSSys Co.,L  1
GoPro  1
InPro Comm  1
JJPlus Corp  1
Private  1
Quanta  1
Raspberry P  1
Roku, Inc.  1
Sonim Techn  1
Texas Instr  1
TP-LINK TEC  1
Vizio, Inc  1

Write and Read Multiple Objects in Amazon Cloud Directory by Using Batch Operations

Post Syndicated from Vineeth Harikumar original https://aws.amazon.com/blogs/security/write-and-read-multiple-objects-in-amazon-cloud-directory-by-using-batch-operations/

Amazon Cloud Directory is a hierarchical data store that enables you to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. For example, you can create an organizational structure that you can navigate through multiple hierarchies for reporting structure, location, and cost center.

In this blog post, I demonstrate how you can use Cloud Directory APIs to write and read multiple objects by using batch operations. With batch write operations, you can execute a sequence of operations atomically—meaning that all of the write operations must occur, or none of them do. You also can make your application efficient by reducing the number of required round trips to read and write objects to your directory. I have used the AWS SDK for Java for all the sample code in this blog post, but you can use other language SDKs or the AWS CLI in a similar way.

Using batch write operations

To demonstrate batch write operations, let’s say that AnyCompany’s warehouses are organized to determine the fastest methods to ship orders to its customers. In North America, AnyCompany plans to open new warehouses regularly so that the company can keep up with customer demand while continuing to meet the delivery times to which they are committed.

The following diagram shows part of AnyCompany’s global network, including Asian and European warehouse networks.

Let’s take a look at how I can use batch write operations to add NorthAmerica to AnyCompany’s global network of warehouses, with the first three warehouses in New York City (NYC), Las Vegas (LAS), and Phoenix (PHX).

Adding NorthAmerica to the global network

To add NorthAmerica to the global network, I can use a batch write operation to create and link all the objects in the existing network.

First, I set up a helper method, getBatchCreateOperation, which performs the repetitive work of building each create operation. The following lines of code help me create an NA object for NorthAmerica and then attach the three city-related nodes: NYC, LAS, and PHX. Because AnyCompany is planning to grow its network, I add a suffix of _1 to each city code (such as PHX_1), which will be helpful hierarchically when the company adds more warehouses within a city.

    // Requires the AWS SDK for Java (com.amazonaws.services.clouddirectory.model.*),
    // Guava's Lists helper, and java.util.UUID.
    private BatchWriteOperation getBatchCreateOperation(
            String warehouseName,
            String directorySchemaARN,
            String parentReference,
            String linkName) {

        // The "warehouse" facet from the schema applied to the directory.
        SchemaFacet warehouse_facet = new SchemaFacet()
            .withFacetName("warehouse")
            .withSchemaArn(directorySchemaARN);

        // Set the "name" attribute on the new warehouse object.
        AttributeKeyAndValue kv = new AttributeKeyAndValue()
            .withKey(new AttributeKey()
                .withFacetName("warehouse")
                .withName("name")
                .withSchemaArn(directorySchemaARN))
            .withValue(new TypedAttributeValue()
                .withStringValue(warehouseName));

        List<SchemaFacet> facets = Lists.newArrayList(warehouse_facet);
        List<AttributeKeyAndValue> kvs = Lists.newArrayList(kv);

        BatchCreateObject createObject = new BatchCreateObject();

        // Attach the new object under the given parent at the given link name.
        createObject.withParentReference(new ObjectReference()
            .withSelector(parentReference));
        createObject.withLinkName(linkName);

        createObject.withBatchReferenceName(UUID.randomUUID().toString());
        createObject.withSchemaFacet(facets);
        createObject.withObjectAttributeList(kvs);

        return new BatchWriteOperation().withCreateObject(createObject);
    }

The parameters of this helper method include:

  • warehouseName – The name of the warehouse to create in the getBatchCreateOperation object.
  • directorySchemaARN – The Amazon Resource Name (ARN) of the schema applied to the directory.
  • parentReference – The object reference of the parent object.
  • linkName – The unique child path from the parent reference where the object should be attached.

I then use this helper method to set up multiple create operations for NorthAmerica, NewYork, Phoenix, and LasVegas. For the sake of simplicity, I use airport codes to stand for the cities (for example, NYC stands for NewYork).

   BatchWriteOperation createObjectNA = getBatchCreateOperation(
                      "NA",
                      directorySchemaARN,
                      "/",
                      "NorthAmerica");
   BatchWriteOperation createObjectNYC = getBatchCreateOperation(
                      "NYC_1",
                      directorySchemaARN,
                      "/NorthAmerica",
                      "NewYork");
   BatchWriteOperation createObjectPHX = getBatchCreateOperation(
                       "PHX_1",
                       directorySchemaARN,
                       "/NorthAmerica",
                       "Phoenix");
   BatchWriteOperation createObjectLAS = getBatchCreateOperation(
                      "LAS_1",
                      directorySchemaARN,
                      "/NorthAmerica",
                      "LasVegas");

   BatchWriteRequest request = new BatchWriteRequest();
   request.setDirectoryArn(directoryARN);
   request.setOperations(Lists.newArrayList(
       createObjectNA,
       createObjectNYC,
       createObjectPHX,
       createObjectLAS));

   client.batchWrite(request);

Running the preceding code results in a hierarchy for the network with NA added to the network, as shown in the following diagram.

Using batch read operations

Now, let’s say that after I add NorthAmerica to AnyCompany’s global network, an analyst wants to see the updated view of the NorthAmerica warehouse network as well as some information about the newly introduced warehouse configurations for the Phoenix warehouses. To do this, I can use batch read operations to get the network of warehouses for NorthAmerica as well as specifically request the attributes and configurations of the Phoenix warehouses.

To list the children of the NorthAmerica warehouses, I use the BatchListObjectChildren API to get all the children at the path, /NorthAmerica. Next, I want to view the attributes of the Phoenix object, so I use the BatchListObjectAttributes API to read all the attributes of the object at /NorthAmerica/Phoenix, as shown in the following code example.

    BatchListObjectChildren listObjectChildrenRequest = new BatchListObjectChildren()
        .withObjectReference(new ObjectReference().withSelector("/NorthAmerica"));
    BatchListObjectAttributes listObjectAttributesRequest = new BatchListObjectAttributes()
        .withObjectReference(new ObjectReference()
            .withSelector("/NorthAmerica/Phoenix"));
    BatchReadRequest batchRead = new BatchReadRequest()
        .withConsistencyLevel(ConsistencyLevel.EVENTUAL)
        .withDirectoryArn(directoryArn)
        .withOperations(Lists.newArrayList(listObjectChildrenRequest, listObjectAttributesRequest));

    BatchReadResult result = client.batchRead(batchRead);

Exception handling

Batch operations in Cloud Directory might sometimes fail, and it is important to know how to handle such failures, which differ for write operations and read operations.

Batch write operation failures

If a batch write operation fails, Cloud Directory fails the entire batch operation and returns an exception. The exception contains the index of the operation that failed along with the exception type and message. If you see RetryableConflictException, you can try again with exponential backoff. A simple way to do this is to double the amount of time you wait each time you get an exception or failure. For example, if your first batch write operation fails, wait 100 milliseconds and try the request again. If the second request fails, wait 200 milliseconds and try again. If the third request fails, wait 400 milliseconds and try again.
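
A minimal sketch of that backoff loop, assuming the same AWS SDK for Java client used in the earlier samples (the method name and retry cap are illustrative; the SDK's configurable retry policies are an alternative):

    private BatchWriteResult batchWriteWithBackoff(
            AmazonCloudDirectory client,
            BatchWriteRequest request) throws InterruptedException {
        long waitMillis = 100;      // initial delay
        final int maxAttempts = 5;  // illustrative cap, not a service limit
        for (int attempt = 1; ; attempt++) {
            try {
                return client.batchWrite(request);
            } catch (RetryableConflictException e) {
                if (attempt == maxAttempts) {
                    throw e;  // out of retries; surface the exception
                }
                Thread.sleep(waitMillis);  // back off before retrying
                waitMillis *= 2;           // 100 ms, 200 ms, 400 ms, ...
            }
        }
    }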

Batch read operation failures

If an operation in a batch read fails, it does not cause the entire batch read to fail. Cloud Directory returns an individual response for each operation in the batch, and each response contains either a successful result or an exception.
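
As a sketch, the per-operation responses from the batchRead call shown earlier can be inspected like this (the log messages are placeholders):

    for (BatchReadOperationResponse response : result.getResponses()) {
        if (response.getExceptionResponse() != null) {
            // This operation failed; the others may still have succeeded.
            System.out.println("Operation failed: "
                + response.getExceptionResponse().getType());
        } else {
            // Process the successful result (child list or attribute values).
            System.out.println("Operation succeeded: "
                + response.getSuccessfulResponse());
        }
    }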

Limits of batch operations

Batch operations are still constrained by the same Cloud Directory limits as other Cloud Directory APIs. A single batch request does not limit the number of operations it can contain, but the total number of objects being written or read in a single batch request is limited. For example, a total of 20 objects can be written in a single batch request to Cloud Directory, regardless of how many individual operations that batch contains. Similarly, a total of 200 objects can be read in a single batch request. For more information, see limits on batch operations.
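
If you need to write more than 20 objects, one possible workaround, sketched here with the same Guava Lists helper the earlier samples use, is to split the operations into chunks of 20 and submit one batch write request per chunk. Each chunk is atomic on its own, but the set of chunks as a whole is not:

    private void batchWriteInChunks(
            AmazonCloudDirectory client,
            String directoryArn,
            List<BatchWriteOperation> operations) {
        for (List<BatchWriteOperation> chunk : Lists.partition(operations, 20)) {
            BatchWriteRequest request = new BatchWriteRequest()
                .withDirectoryArn(directoryArn)
                .withOperations(chunk);
            client.batchWrite(request);  // atomic per chunk, not across chunks
        }
    }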

Summary

In this post, I have demonstrated how you can use batch operations to operate on multiple objects and simplify making complicated changes across hierarchies. In my next post, I will demonstrate how to use batch references within batch write operations. To learn more about batch operations, see Batches, BatchWrite, and BatchRead.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, start a new thread on the Directory Service forum.

– Vineeth

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

Our identities are what define us as human beings. Philosophical discussions aside, identity also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in this case is attached to my work badge or passport. As part of the system that checks my access, these documents or objects help define whether I have access to get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.

Amazon Cognito lets you build, secure, and scale a solution that handles user management and authentication, and that syncs data across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

Amazon Cognito Concepts

It’s important to understand that Amazon Cognito provides three different services:

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
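
As a rough sketch of that exchange using the AWS SDK for Java (the identity pool ID below is a placeholder, and the token is assumed to come from a Facebook login):

    import java.util.HashMap;
    import java.util.Map;

    import com.amazonaws.services.cognitoidentity.AmazonCognitoIdentity;
    import com.amazonaws.services.cognitoidentity.AmazonCognitoIdentityClientBuilder;
    import com.amazonaws.services.cognitoidentity.model.Credentials;
    import com.amazonaws.services.cognitoidentity.model.GetCredentialsForIdentityRequest;
    import com.amazonaws.services.cognitoidentity.model.GetCredentialsForIdentityResult;
    import com.amazonaws.services.cognitoidentity.model.GetIdRequest;
    import com.amazonaws.services.cognitoidentity.model.GetIdResult;

    public class FederatedIdentityExample {
        public static Credentials credentialsFor(String idpToken) {
            AmazonCognitoIdentity client = AmazonCognitoIdentityClientBuilder.defaultClient();

            // Token returned by the IdP after the user authenticated (Facebook here).
            Map<String, String> logins = new HashMap<>();
            logins.put("graph.facebook.com", idpToken);

            // Step 1: resolve (or create) the Cognito identity for this user.
            GetIdResult idResult = client.getId(new GetIdRequest()
                .withIdentityPoolId("us-east-1:00000000-0000-0000-0000-000000000000") // placeholder
                .withLogins(logins));

            // Step 2: exchange the identity for temporary AWS credentials (backed by AWS STS).
            GetCredentialsForIdentityResult credsResult = client.getCredentialsForIdentity(
                new GetCredentialsForIdentityRequest()
                    .withIdentityId(idResult.getIdentityId())
                    .withLogins(logins));

            return credsResult.getCredentials(); // access key, secret key, session token
        }
    }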

Amazon Cognito Federated Identities currently supports the IdPs listed in the following graphic.

Continue reading Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Extending the Airplane Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/extending_the_a.html

The Department of Homeland Security is rumored to be considering extending the current travel ban on large electronics for Middle Eastern flights to European ones as well. The likely reaction of airlines will be to implement new traveler programs, effectively allowing wealthier and more frequent fliers to bring their computers with them. This will only exacerbate the divide between the haves and the have-nots — all without making us any safer.

In March, both the United States and the United Kingdom required that passengers from 10 Muslim countries give up their laptop computers and larger tablets, and put them in checked baggage. The new measure was based on reports that terrorists would try to smuggle bombs onto planes concealed in these larger electronic devices.

The security measure made no sense for two reasons. First, moving these computers into the baggage holds doesn’t keep them off planes. Yes, it is easier to detonate a bomb that’s in your hands than to remotely trigger it in the cargo hold. But it’s also more effective to screen laptops at security checkpoints than it is to place them in checked baggage. TSA already does this kind of screening randomly and occasionally: making passengers turn laptops on to ensure that they’re functional computers and not just bomb-filled cases, and running chemical tests on their surface to detect explosive material.

And, second, banning laptops on selected flights just forces terrorists to buy more roundabout itineraries. It doesn’t take much creativity to fly Doha-Amsterdam-New York instead of direct. Adding Amsterdam to the list of affected airports makes the terrorist add yet another itinerary change; it doesn’t remove the threat.

Which brings up another question: If this is truly a threat, why aren’t domestic flights included in this ban? Remember that anyone boarding a plane to the United States from these Muslim countries has already received a visa to enter the country. This isn’t perfect security — the infamous underwear bomber had a visa, after all — but anyone who could detonate a laptop bomb on his international flight could do it on his domestic connection.

I don’t have access to classified intelligence, and I can’t comment on whether explosive-filled laptops are truly a threat. But, if they are, TSA can set up additional security screenings at the gates of US-bound flights worldwide and screen every laptop coming onto the plane. It wouldn’t be the first time we’ve had additional security screening at the gate. And they should require all laptops to go through this screening, prohibiting them from being stashed in checked baggage.

This measure is nothing more than security theater against what appears to be a movie-plot threat.

Banishing laptops to the cargo holds brings with it a host of other threats. Passengers run the risk of their electronics being stolen from their checked baggage — something that has happened in the past. And, depending on the country, passengers also have to worry about border control officials intercepting checked laptops and making copies of what’s on their hard drives.

Safety is another concern. We’re already worried about large lithium-ion batteries catching fire in airplane baggage holds; adding a few hundred of these devices will considerably exacerbate the risk. Both FedEx and UPS no longer accept bulk shipments of these batteries after two jets crashed in 2010 and 2011 due to combustion.

Of course, passengers will rebel against this rule. Having access to a computer on these long transatlantic flights is a must for many travelers, especially the high-revenue business-class travelers. They also won’t accept the delays and confusion this rule will cause as it’s rolled out. Unhappy passengers fly less, or fly other routes on other airlines without these restrictions.

I don’t know how many passengers are choosing to fly to the Middle East via Toronto to avoid the current laptop ban, but I suspect there may be some. If Europe is included in the new ban, many more may consider adding Canada to their itineraries, as well as choosing European hubs that remain unaffected.

As passengers voice their disapproval with their wallets, airlines will rebel. Already Emirates has a program to loan laptops to their premium travelers. I can imagine US airlines doing the same, although probably for an extra fee. We might learn how to make this work: keeping our data in the cloud or on portable memory sticks and using unfamiliar computers for the length of the flight.

A more likely response will be comparable to what happened after the US increased passenger screening post-9/11. In the months and years that followed, we saw different ways for high-revenue travelers to avoid the lines: faster first-class lanes, and then the extra-cost trusted traveler programs that allow people to bypass the long lines, keep their shoes on their feet and leave their laptops and liquids in their bags. It’s a bad security idea, but it keeps both frequent fliers and airlines happy. It would be just another step to allow these people to keep their electronics with them on their flight.

The problem with this response is that it solves the problem for frequent fliers, while leaving everyone else to suffer. This is already the case; those of us enrolled in a trusted traveler program forget what it’s like to go through “normal” security screening. And since frequent fliers — likely to be more wealthy — no longer see the problem, they don’t have any incentive to fix it.

Dividing security checks into haves and have-nots is bad social policy, and we should actively fight any expansion of it. If the TSA implements this security procedure, it should implement it for every flight. And there should be no exceptions. Force every politically connected flier, from members of Congress to the lobbyists that influence them, to do without their laptops on planes. Let the TSA explain to them why they can’t work on their flights to and from D.C.

This essay previously appeared on CNN.com.

EDITED TO ADD: US officials are backing down.

Why Is the TSA Scanning Paper?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/why_is_the_tsa_.html

I’ve been reading a bunch of anecdotal reports that the TSA is starting to scan paper separately:

A passenger going through security at Kansas City International Airport (MCI) recently was asked by security officers to remove all paper products from his bag. Everything from books to Post-It Notes, documents and more. Once the paper products were removed, the passenger had to put them in a separate bin to be scanned separately.

When the passenger inquired why he was being forced to remove the paper products from his carry-on bag, the agent told him that it was a pilot program that’s being tested at MCI and will begin rolling out nationwide. KSHB Kansas City is reporting that other passengers traveling through MCI have also reported the paper-removal procedure at the airport. One person said that security dug through the suitcase for two “blocks” of Post-It Notes at the bottom.

Does anyone have any guesses as to why the TSA is doing this?

EDITED TO ADD (5/11): This article says that the TSA has stopped doing this. They blamed it on their contractor, Akal Security.

The TSA’s Selective Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_tsas_select.html

Last Monday, the TSA announced a peculiar new security measure to take effect within 96 hours. Passengers flying into the US on foreign airlines from eight Muslim countries would be prohibited from carrying aboard any electronics larger than a smartphone. They would have to be checked and put into the cargo hold. And now the UK is following suit.

It’s difficult to make sense of this as a security measure, particularly at a time when many people question the veracity of government orders, but other explanations are either unsatisfying or damning.

So let’s look at the security aspects of this first. Laptop computers aren’t inherently dangerous, but they’re convenient carrying boxes. This is why, in the past, TSA officials have demanded passengers turn their laptops on: to confirm that they’re actually laptops and not laptop cases emptied of their electronics and then filled with explosives.

Forcing a would-be bomber to put larger laptops in the plane’s hold is a reasonable defense against this threat, because it increases the complexity of the plot. Both the shoe-bomber Richard Reid and the underwear bomber Umar Farouk Abdulmutallab carried crude bombs aboard their planes with the plan to set them off manually once aloft. Setting off a bomb in checked baggage is more work, which is why we don’t see more midair explosions like Pan Am Flight 103 over Lockerbie, Scotland, in 1988.

Security measures that restrict what passengers can carry onto planes are not unprecedented either. Airport security regularly responds to both actual attacks and intelligence regarding future attacks. After the liquid bombers were captured in 2006, the British banned all carry-on luggage except passports and wallets. I remember talking with a friend who traveled home from London with his daughters in those early weeks of the ban. They reported that airport security officials confiscated every tube of lip balm they tried to hide.

Similarly, the US started checking shoes after Reid, installed full-body scanners after Abdulmutallab and restricted liquids in 2006. But all of those measures were global, and most lessened in severity as the threat diminished.

This current restriction implies some specific intelligence of a laptop-based plot and a temporary ban to address it. However, if that’s the case, why only certain non-US carriers? And why only certain airports? Terrorists are smart enough to put a laptop bomb in checked baggage from the Middle East to Europe and then carry it on from Europe to the US.

Why not require passengers to turn their laptops on as they go through security? That would be a more effective security measure than forcing them to check them in their luggage. And lastly, why is there a delay between the ban being announced and it taking effect?

Even more confusing, the New York Times reported that “officials called the directive an attempt to address gaps in foreign airport security, and said it was not based on any specific or credible threat of an imminent attack.” The Department of Homeland Security FAQ page makes this general statement, “Yes, intelligence is one aspect of every security-related decision,” but doesn’t provide a specific security threat. And yet a report from the UK states the ban “follows the receipt of specific intelligence reports.”

Of course, the details are all classified, which leaves all of us security experts scratching our heads. On the face of it, the ban makes little sense.

One analysis painted this as a protectionist measure targeted at the heavily subsidized Middle Eastern airlines by hitting them where it hurts the most: high-paying business class travelers who need their laptops with them on planes to get work done. That reasoning makes more sense than any security-related explanation, but doesn’t explain why the British extended the ban to UK carriers as well. Or why this measure won’t backfire when those Middle Eastern countries turn around and ban laptops on American carriers in retaliation. And one aviation official told CNN that an intelligence official informed him it was not a “political move.”

In the end, national security measures based on secret information require us to trust the government. That trust is at historic low levels right now, so people both in the US and other countries are rightly skeptical of the official unsatisfying explanations. The new laptop ban highlights this mistrust.

This essay previously appeared on CNN.com.

EDITED TO ADD: Here are two essays that look at the possible political motivations, and fallout, of this ban. And the EFF rightly points out that letting a laptop out of your hands and sight is itself a security risk — for the passenger.

Uber was right to disable surge pricing at JFK

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/uber-was-right-to-disable-surge-pricing.html

Yesterday, the NYC taxi union had a one-hour strike protesting Trump’s “Muslim Ban”, refusing to pick up passengers at the JFK airport. Uber responded by disabling surge pricing at the airport. This has widely been interpreted as a bad thing, so the hashtag “#DeleteUber” has been trending, encouraging people to delete their Uber accounts/app.
These people are wrong, obviously so.

Surge Pricing

Uber’s “Surge Pricing” isn’t price gouging, as many assume. Instead, the additional money goes directly to the drivers, to encourage them to come to the surging area and pick up riders. Uber isn’t a taxi company. It can’t direct drivers to go anywhere. All it can do is provide incentives. “Surge Pricing” for customers means “Surge Income” for the drivers, giving them an incentive. Drivers have a map showing which areas of the city are surging, so they can drive there.

Another way of thinking about it is “Demand Pricing”. It’s simply the economic Law of Supply and Demand. If demand increases, then prices increase, and then supply increases, chasing the higher profits. It’s famously why you can’t get a taxi cab on New Year’s Eve, but you can get an Uber driver. Taxi drivers can’t charge more when demand is surging, so there are no more taxis available on that date than on any other. But Uber drivers can and do charge more, so there are more Uber drivers.

Supply and Demand is every bit as much a law as gravity. If the supply of taxi drivers is less than the demand, then not everyone is going to get a ride. That’s basic math. If there are only 20 drivers right now, and 100 people wanting a ride, then 80 riders are going to be disappointed. The only solution is more drivers. Paying drivers more money gets more drivers: the part-time drivers, and the drivers planning on partying instead of working, will decide to work New Year’s, chasing the surge wages.

Uber’s announcement

Uber made the following announcement:

Without turning off Surge Pricing, Uber’s computers would notice the spike in demand, as would-be taxi customers switch to Uber. The computers would then institute surge pricing around JFK automatically. This would notify the drivers in the area, who would then flock to JFK, chasing the higher income. This would be bad for the strike.

By turning off surge pricing, there would be no increase in supply. It would mean the only drivers going to JFK are those dropping off passengers. It would mean that Uber wouldn’t be servicing any more riders than on a normal day, making no difference to the taxi strike, one way or the other.

Why wouldn’t Uber stop pickups at JFK altogether, joining the strike? Because it’d be a tough decision for them. They have a different relationship with their drivers. Both taxis and Uber are required to take passengers to the airport if asked, but taxis are much better at weaseling out of it [*]. Joining the strike would mean screwing Uber’s drivers, forcing them to go way out to JFK with no return fare. In contrast, taxi drivers were warned far enough ahead of time to avoid the trip.

The timing

The above section assumes a carefully considered Uber policy. In reality, they didn’t have the time.
The taxi union didn’t announce their decision until 5pm, with the strike set for only one hour, between 6pm and 7pm.

Uber’s announcement was at 7:30pm, half an hour after the strike was over. They may not have been aware of the strike until after it started, when somebody noticed an enormous surge starting at 6pm. I can imagine them running around in a panic at 6:05pm, trying to figure out how to respond.

Disabling surge pricing is probably their default action. They’ve been down this route before. Every time there is a terrorist attack or natural disaster and the computers turn on Surge Pricing, somebody has to rush to turn it off again, offer customer rebates, and so on, for PR purposes.

Why doesn’t the press report this?

Everyone knows Surge Pricing is evil. After all, that’s what you always read in the press. But that’s because the press knows as little about basic economics as its readers.

A good example is this CNN story on the incident [*].

CNN describes this as “effectively lowering the cost of a ride“. They ignore the reality: this was effectively lowering the supply of rides. Reading this, readers will naturally assume there’s an unlimited supply ready to service the lower-priced rides. What CNN fails to tell readers is that there is no increase in supply, and that there can’t be more rides than normal. They ignore the bit in the tweet that warns against longer wait times due to lack of supply.

Conclusion

The timing alone makes the #DeleteUber claims nonsense, as the strike had already been over for 36 minutes when Uber tweeted. But in any case, Uber’s decision not to do surge pricing did not “entice” customers with lower prices; they would still have long waits (as the tweet says), a strong disincentive. No rational person could interpret this as Uber trying to profit from this event.

On the other hand, before this event, Uber announced its opposition to Trump’s action, and promised to help any of its drivers adversely affected.


Update: The #deleteUber people are switching to Lyft, which continued to pick up passengers during the strike. Lyft is a company funded by Trump adviser Peter Thiel.

Our full 2017 schedule of UK Picademy events

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/our-full-2017-schedule-of-uk-picademy-events/

Happy new year to everyone! We’re back with a new programme of Picademy events for 2017. All our UK events have been scheduled up to the end of the year, so you can look ahead and choose something at a location and date that is convenient.

An educator gets to grips with our Camera Module

For the uninitiated, Picademy is a free CPD programme that aims to give educators the skills and knowledge they need to get creative with computing, no matter what their level of experience. In fact, you don’t need any previous experience to apply, just an enthusiasm for teaching kids computing. Each course lasts for two full days and is a mixture of digital making workshops, project-based learning, and hacking. Delegates graduate as Raspberry Pi Certified Educators (RCEs).

Last year’s Picademy events yielded some wonderful moments. We trained over 540 educators in the UK and the US, so we had lots of highlights to choose from; I certainly witnessed many in person while delivering events in Glasgow. Two of my favourites included the educator who created music by coding DNA into Sonic Pi as note values (amazing!), and the project that used the Sense HAT to input notes to Sonic Pi and then convert them into coloured blocks in Minecraft for a digital disco.

It was so great to see the enthusiasm, the camaraderie, and the willingness of educators to be open to new experiences. You could see the cogs turning as they thought about how they could apply the new ideas to work in their own classrooms. It was also great to hear about things educators found less easy, and to answer questions about aspects of the computing curriculum. We find this feedback particularly useful as we are always looking for ways to improve our content and provide better support.

Below you’ll find details of the Picademy events we’re running across the UK in 2017:

  • Cardiff: Tramshed, Clare Rd, Cardiff, CF11 6QP (21/22 February)
  • Manchester: MadLab Manchester, 36-40 Edge St, Manchester, M4 1HN (14/15 March and 02/03 October)
  • Birmingham: The Learning Hub, Birmingham Airport, Birmingham, B26 3QJ (10/11 April and 04/05 December)
  • Cambridge: Raspberry Pi Foundation, 30 Station Road, Cambridge, CB1 2JH (15/16 May)
  • London: venue TBC (Late May* and Late November*)

* While London details are not fully confirmed, you can still apply for these events. We will email details to applicants later in 2017.

Who should apply?

We are looking for inspirational educators who are passionate about computing, enthusiastic about creating awesome learning experiences for their students, and proactive at sharing good practice.

While we’re primarily looking for primary, secondary, FE and HE teachers to apply, we’re also seeking other outstanding educators such as librarians, community educators, trainee teachers, and trainers of teachers.

We’re committed to running free high-quality training, and we invest substantial time (and money) in the educators that attend. Our hope is that our certified educators not only return home with a digital making mindset to inspire students and colleagues, but also have an impact on their wider education community through social media, meetups, or running their own training.

With this in mind, we should point out that Picademy events are often oversubscribed: for this reason, it’s really important that we get a sense of the person behind the application. We would therefore urge you to take your time when answering questions that ask you to reflect on your own experiences and reasons for applying.

A cohort of Picademy graduates in Manchester

How to apply

To apply for any of the events, fill in our Online Application Form. If you have any further questions, you can email [email protected] or post a message in the Picademy area on our forums.

The post Our full 2017 schedule of UK Picademy events appeared first on Raspberry Pi.