Tag Archives: research

Confusing Security Risks with Moral Judgments

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/confusing_secur.html

Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.

From an article about and interview with the researchers:

To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent’s location.

To experimentally manipulate participants’ moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.

Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.

The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same (that is, the age of the child, the duration and location of the unattended period, and so on), participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.

In Case You Missed These: AWS Security Blog Posts from June, July, and August

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx3KVD6T490MM47/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-June-July-and-August

In case you missed any AWS Security Blog posts from June, July, and August, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from a tagging limit increase to recording SSH sessions established through a bastion host.

August

August 16: Updated Whitepaper Available: AWS Best Practices for DDoS Resiliency
We recently released the 2016 version of the AWS Best Practices for DDoS Resiliency Whitepaper, which can be helpful if you have public-facing endpoints that might attract unwanted distributed denial of service (DDoS) activity.

August 15: Now Organize Your AWS Resources by Using up to 50 Tags per Resource
Tagging AWS resources simplifies the way you organize and discover resources, allocate costs, and control resource access across services. Many of you have told us that as the number of applications, teams, and projects running on AWS increases, you need more than 10 tags per resource. Based on this feedback, we now support up to 50 tags per resource. You do not need to take additional action—you can begin applying as many as 50 tags per resource today.

August 11: New! Import Your Own Keys into AWS Key Management Service
Today, we are happy to announce the launch of the new import key feature that enables you to import keys from your own key management infrastructure (KMI) into AWS Key Management Service (KMS). After you have exported keys from your existing systems and imported them into KMS, you can use them in all KMS-integrated AWS services and custom applications.

August 2: Customer Update: Amazon Web Services and the EU-US Privacy Shield
Recently, the European Commission and the US Government agreed on a new framework called the EU-US Privacy Shield, and on July 12, the European Commission formally adopted it. AWS welcomes this new framework for transatlantic data flow. As the EU-US Privacy Shield replaces Safe Harbor, we understand many of our customers have questions about what this means for them. The security of our customers’ data is our number one priority, so I wanted to take a few moments to explain what this all means.

August 2: How to Remove Single Points of Failure by Using a High-Availability Partition Group in Your AWS CloudHSM Environment
In this post, I will walk you through steps to remove single points of failure in your AWS CloudHSM environment by setting up a high-availability (HA) partition group. Single points of failure occur when a single CloudHSM device fails in a non-HA configuration, which can result in the permanent loss of keys and data. The HA partition group, however, allows for one or more CloudHSM devices to fail, while still keeping your environment operational.

July

July 28: Enable Your Federated Users to Work in the AWS Management Console for up to 12 Hours
AWS Identity and Access Management (IAM) supports identity federation, which enables external identities, such as users in your corporate directory, to sign in to the AWS Management Console via single sign-on (SSO). Now with a small configuration change, your AWS administrators can allow your federated users to work in the AWS Management Console for up to 12 hours, instead of having to reauthenticate every 60 minutes. In addition, administrators can now revoke active federated user sessions. In this blog post, I will show how to configure the console session duration for two common federation use cases: using Security Assertion Markup Language (SAML) 2.0 and using a custom federation broker that leverages the sts:AssumeRole* APIs (see this downloadable sample of a federation proxy). I will wrap up this post with a walkthrough of the new session revocation process.

July 28: Amazon Cognito Your User Pools is Now Generally Available
Amazon Cognito makes it easy for developers to add sign-up, sign-in, and enhanced security functionality to mobile and web apps. With Amazon Cognito Your User Pools, you get a simple, fully managed service for creating and maintaining your own user directory that can scale to hundreds of millions of users.

July 27: How to Audit Cross-Account Roles Using AWS CloudTrail and Amazon CloudWatch Events
In this blog post, I will walk through the process of auditing access across AWS accounts by a cross-account role. This process links API calls that assume a role in one account to resource-related API calls in a different account. To develop this process, I will use AWS CloudTrail, Amazon CloudWatch Events, and AWS Lambda functions. When complete, the process will provide a full audit chain from end user to resource access across separate AWS accounts.

July 25: AWS Becomes First Cloud Service Provider to Adopt New PCI DSS 3.2
We are happy to announce the availability of the Amazon Web Services PCI DSS 3.2 Compliance Package for the 2016/2017 cycle. AWS is the first cloud service provider (CSP) to successfully complete the assessment against the newly released PCI Data Security Standard (PCI DSS) version 3.2, 18 months in advance of the mandatory February 1, 2018, deadline. The AWS Attestation of Compliance (AOC), available upon request, now features 26 PCI DSS certified services, including the latest additions of Amazon EC2 Container Service (ECS), AWS Config, and AWS WAF (a web application firewall). We at AWS are committed to this international information security and compliance program, and adopting the new standard as early as possible once again demonstrates our commitment to information security as our highest priority. Our customers (and customers of our customers) can operate confidently as they store and process credit card information (and any other sensitive data) in the cloud knowing that AWS products and services are tested against the latest and most mature set of PCI compliance requirements.

July 20: New AWS Compute Blog Post: Help Secure Container-Enabled Applications with IAM Roles for ECS Tasks
Amazon EC2 Container Service (ECS) now allows you to specify an IAM role that can be used by the containers in an ECS task, as a new AWS Compute Blog post explains. 

July 14: New Whitepaper Now Available: The Security Perspective of the AWS Cloud Adoption Framework
Today, AWS released the Security Perspective of the AWS Cloud Adoption Framework (AWS CAF). The AWS CAF provides a framework to help you structure and plan your cloud adoption journey, and build a comprehensive approach to cloud computing throughout the IT lifecycle. The framework provides seven specific areas of focus or Perspectives: business, platform, maturity, people, process, operations, and security.

July 14: New Amazon Inspector Blog Post on the AWS Blog
On the AWS Blog yesterday, Jeff Barr published a new security-related blog post written by AWS Principal Security Engineer Eric Fitzgerald. Here’s the beginning of the post, which is entitled, Scale Your Security Vulnerability Testing with Amazon Inspector:

July 12: How to Use AWS CloudFormation to Automate Your AWS WAF Configuration with Example Rules and Match Conditions
We recently announced AWS CloudFormation support for all current features of AWS WAF. This enables you to leverage CloudFormation templates to configure, customize, and test AWS WAF settings across all your web applications. Using CloudFormation templates can help you reduce the time required to configure AWS WAF. In this blog post, I will show you how to use CloudFormation to automate your AWS WAF configuration with example rules and match conditions.

July 11: How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
In this blog post, I show how you can restrict S3 bucket access to a specific IAM role or user within an account using Conditions instead of with the NotPrincipal element. Even if another user in the same account has an Admin policy or a policy with s3:*, they will be denied if they are not explicitly listed. You can use this approach, for example, to configure a bucket for access by instances within an Auto Scaling group. You can also use this approach to limit access to a bucket with a high-level security need.
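
A minimal sketch of the Condition-based pattern that post describes, written here as Python that assembles the policy document. The bucket name, account ID, and role ID are placeholders (the real unique role ID comes from `aws iam get-role`); this illustrates the general approach, not the post's exact policy:

```python
import json

# Deny all S3 actions on the bucket unless the caller's aws:userId matches
# the target role's unique ID. Role sessions carry a userId of the form
# "<role-id>:<session-name>", hence the wildcard suffix. All identifiers
# below are invented for the example.
ROLE_ID = "AROAEXAMPLEROLEID"   # placeholder unique ID of the allowed role
ACCOUNT_ID = "111122223333"     # account root, kept so admins aren't locked out

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-example-bucket",
            "arn:aws:s3:::my-example-bucket/*",
        ],
        "Condition": {
            "StringNotLike": {
                "aws:userId": [f"{ROLE_ID}:*", ACCOUNT_ID]
            }
        },
    }],
}

print(json.dumps(policy, indent=2))
```

Because the statement is an explicit Deny keyed on the role's unique ID rather than a NotPrincipal list, even a same-account user with `s3:*` is refused unless acting through that role.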

July 7: How to Use SAML to Automatically Direct Federated Users to a Specific AWS Management Console Page
In this blog post, I will show you how to create a deep link for federated users via the SAML 2.0 RelayState parameter in Active Directory Federation Services (AD FS). By using a deep link, your users will go directly to the specified console page without additional navigation.

July 6: How to Prevent Uploads of Unencrypted Objects to Amazon S3
In this blog post, I will show you how to create an S3 bucket policy that prevents users from uploading unencrypted objects, unless they are using server-side encryption with S3–managed encryption keys (SSE-S3) or server-side encryption with AWS KMS–managed keys (SSE-KMS).
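
The shape of such a policy can be sketched as follows; the bucket name is a placeholder, and this is an illustration of the technique rather than the post's exact policy. One statement rejects uploads that omit the encryption header entirely, and a second rejects uploads whose header names anything other than SSE-S3 or SSE-KMS:

```python
import json

# Placeholder bucket; both statements are explicit Denies on s3:PutObject.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
            "Condition": {
                # Reject PutObject requests that omit the SSE header entirely
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        },
        {
            "Sid": "DenyWrongEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
            "Condition": {
                # Reject requests whose SSE header is neither SSE-S3 nor SSE-KMS
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
                }
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```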

June

June 30: The Top 20 AWS IAM Documentation Pages so Far This Year
The following 20 pages have been the most viewed AWS Identity and Access Management (IAM) documentation pages so far this year. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research. 

June 29: The Most Viewed AWS Security Blog Posts so Far in 2016
The following 10 posts are the most viewed AWS Security Blog posts that we published during the first six months of this year. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

June 25: AWS Earns Department of Defense Impact Level 4 Provisional Authorization
I am pleased to share that, for our AWS GovCloud (US) Region, AWS has received a Defense Information Systems Agency (DISA) Provisional Authorization (PA) at Impact Level 4 (IL4). This will allow Department of Defense (DoD) agencies to use the AWS Cloud for production workloads with export-controlled data, privacy information, and protected health information as well as other controlled unclassified information. This new authorization continues to demonstrate our advanced work in the public sector space; you might recall AWS was the first cloud service provider to obtain an Impact Level 4 PA in August 2014, paving the way for DoD pilot workloads and applications in the cloud. Additionally, we recently achieved a FedRAMP High provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB), also for AWS GovCloud (US), and today’s announcement allows DoD mission owners to continue to leverage AWS for critical production applications.

June 23: AWS re:Invent 2016 Registration Is Now Open
Register now for the fifth annual AWS re:Invent, the largest gathering of the global cloud computing community. Join us in Las Vegas for opportunities to connect, collaborate, and learn about AWS solutions. This year we are offering all-new technical deep-dives on topics such as security, IoT, serverless computing, and containers. We are also delivering more than 400 sessions, more hands-on labs, bootcamps, and opportunities for one-on-one engagements with AWS experts.

June 23: AWS Achieves FedRAMP High JAB Provisional Authorization
We are pleased to announce that AWS has received a FedRAMP High JAB Provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB) for the AWS GovCloud (US) Region. The new Federal Risk and Authorization Management Program (FedRAMP) High JAB Provisional Authorization is mapped to more than 400 National Institute of Standards and Technology (NIST) security controls. This P-ATO recognizes AWS GovCloud (US) as a secure environment on which to run highly sensitive government workloads, including Personally Identifiable Information (PII), sensitive patient records, financial data, law enforcement data, and other Controlled Unclassified Information (CUI).

June 22: AWS IAM Service Last Accessed Data Now Available for South America (Sao Paulo) and Asia Pacific (Seoul) Regions
In December, AWS IAM released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support two additional regions: South America (Sao Paulo) and Asia Pacific (Seoul). With this release, you can now view the date when an IAM entity last accessed an AWS service in these two regions. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

June 20: New Twitter Handle Now Live: @AWSSecurityInfo
Today, we launched a new Twitter handle: @AWSSecurityInfo. The purpose of this new handle is to share security bulletins, security whitepapers, compliance news and information, and other AWS security-related and compliance-related information. The scope of this handle is broader than that of @AWSIdentity, which focuses primarily on Security Blog posts. However, feel free to follow both handles!

June 15: Announcing Two New AWS Quick Start Reference Deployments for Compliance
As part of the Professional Services Enterprise Accelerator – Compliance program, AWS has published two new Quick Start reference deployments to assist federal government customers and others who need to meet National Institute of Standards and Technology (NIST) SP 800-53 (Revision 4) security control requirements, including those at the high-impact level. The new Quick Starts are AWS Enterprise Accelerator – Compliance: NIST-based Assurance Frameworks and AWS Enterprise Accelerator – Compliance: Standardized Architecture for NIST High-Impact Controls Featuring Trend Micro Deep Security. These Quick Starts address many of the NIST controls at the infrastructure layer. Furthermore, for systems categorized as high impact, AWS has worked with Trend Micro to incorporate its Deep Security product into a Quick Start deployment in order to address many additional high-impact controls at the workload layer (app, data, and operating system). In addition, we have worked with Telos Corporation to populate security control implementation details for each of these Quick Starts into the Xacta product suite for customers who rely upon that suite for governance, risk, and compliance workflows.

June 14: Now Available: Get Even More Details from Service Last Accessed Data
In December, AWS IAM released service last accessed data, which shows the time when an IAM entity (a user, group, or role) last accessed an AWS service. This provided a powerful tool to help you grant least privilege permissions. Starting today, it’s easier to identify where you can reduce permissions based on additional service last accessed data.

June 14: How to Record SSH Sessions Established Through a Bastion Host
A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration. For example, you can use a bastion host to mitigate the risk of allowing SSH connections from an external network to the Linux instances launched in a private subnet of your Amazon Virtual Private Cloud (VPC). In this blog post, I will show you how to leverage a bastion host to record all SSH sessions established with Linux instances. Recording SSH sessions enables auditing and can help in your efforts to comply with regulatory requirements.

June 14: AWS Granted Authority to Operate for Department of Commerce and NOAA
AWS already has a number of federal agencies onboarded to the cloud, including the Department of Energy, The Department of the Interior, and NASA. Today we are pleased to announce the addition of two more ATOs (authority to operate) for the Department of Commerce (DOC) and the National Oceanic and Atmospheric Administration (NOAA). Specifically, the DOC will be utilizing AWS for their Commerce Data Service, and NOAA will be leveraging the cloud for their “Big Data Project." According to NOAA, the goal of the Big Data Project is to “create a sustainable, market-driven ecosystem that lowers the cost barrier to data publication. This project will create a new economic space for growth and job creation while providing the public far greater access to the data created with its tax dollars.”

June 2: How to Set Up DNS Resolution Between On-Premises Networks and AWS by Using Unbound
In previous AWS Security Blog posts, Drew Dennis covered two options for establishing DNS connectivity between your on-premises networks and your Amazon Virtual Private Cloud (Amazon VPC) environments. His first post explained how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. His second post showed how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities. In this post, I will explain how you can set up DNS resolution between your on-premises DNS with Amazon VPC by using Unbound, an open-source, recursive DNS resolver. This solution is not a managed solution like Microsoft AD and Simple AD, but it does provide the ability to route DNS requests between on-premises environments and an Amazon VPC–provided DNS.

June 1: How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker
In this blog post, I will show you how to store secrets on Amazon S3, and use AWS IAM roles to grant access to those stored secrets using an example WordPress application deployed as a Docker image using ECS. Using IAM roles means that developers and operations staff do not have the credentials to access secrets. Only the application and staff who are responsible for managing the secrets can access them. The deployment model for ECS ensures that tasks are run on dedicated EC2 instances for the same AWS account and are not shared between customers, which gives sufficient isolation between different container environments.

If you have comments about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

Another lesson in confirmation bias

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/another-lesson-in-confirmation-bias.html

The biggest problem with hacker attribution is the confirmation bias problem. Once you develop a theory, your mind shifts to distorting evidence trying to prove the theory. After a while, only your theory seems possible as one that can fit all your carefully selected evidence.

You can watch this happen in two recent blogposts [1] [2] by Krypt3ia attributing bitcoin payments to the Shadow Broker hackers as coming from the government (FBI, NSA, TAO). These posts are absolutely wrong. Nonetheless, the press has picked up on the story and run with it [*]. [Note: click on the pictures in this post to blow them up so you can see them better].

The Shadow Brokers published their bitcoin address (19BY2XCgbDe6WtTVbTyzM9eR3LYr6VitWK) asking for donations to release the rest of their tools. They’ve received 66 transactions so far, totaling 1.78 bitcoins, or roughly $1000 at today’s exchange rate.

Bitcoin is not anonymous but pseudonymous. Bitcoin is a public ledger with all transactions visible to everyone. Sometimes we can’t tie addresses back to people, but sometimes we can. There are a lot of researchers who have spent a lot of time on “taint analysis,” trying to track down the real identities of evildoers. Thus, it seems plausible that we might be able to discover the identities of those people making contributions to Shadow Brokers.
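
The core of taint analysis can be sketched as a simple walk over the public transaction graph. The addresses and amounts below are invented purely for illustration:

```python
from collections import deque

# Toy ledger: each entry is (sender_address, receiver_address, amount).
# The "alice" entries mimic the prank described later in this post: one
# wallet pays both Shadow Brokers and Silk Road, creating a visible link.
transactions = [
    ("silk_road", "mixer_1", 5.0),
    ("mixer_1", "exchange_a", 4.9),
    ("alice", "shadow_brokers", 0.001),
    ("alice", "silk_road", 0.001),
]

def tainted_addresses(start, txs):
    """Every address reachable from `start` via outgoing payments (BFS)."""
    out = {}
    for src, dst, _amt in txs:
        out.setdefault(src, []).append(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in out.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(tainted_addresses("silk_road", transactions))
```

Note that forward taint from `silk_road` never reaches `shadow_brokers` in this toy ledger: a third party paying both addresses creates a visible link between them without any coins flowing from one to the other, which is exactly the distinction that matters in the analysis below.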

The first of Krypt3ia’s errant blogposts tries to use the Bitcoin taint analysis plugin within Maltego in order to do some analysis on the Shadow Broker address. What he found was links to the Silk Road address — the address controlled by the FBI since they took down that darknet marketplace several years ago. Therefore, he created the theory that the government (FBI? NSA? TAO?) was up to some evil tricks, such as trying to fill the account with money so that they could then track where the money went in the public blockchain.

But he misinterpreted the links. (He was wrong.) There were no payments from the Silk Road accounts to the Shadow Broker account. Instead, there were people making payments to both accounts. As a prank.

To demonstrate how this prank works, I made my own transaction, where I pay money to the Shadow Brokers (19BY2…), to Silk Road (1F1A…), and to a few other well-known accounts controlled by the government.

The point here is that anybody can do these shenanigans. That government controlled addresses are involved means nothing. They are public, and anybody can send coin to them.

That blogpost points to yet more shenanigans, such as somebody “rick rolling,” to confirm that TAO hackers were involved. What you see in the picture below is a series of transactions using bitcoin addresses containing the phrase “never gonna give you up,” the title of Rick Astley’s song (I underlined the words in red).

Far from the government being involved, somebody else took credit for the hack, with the Twitter handle @MalwareTechBlog. In a blogpost [*], he describes what he did. He then proves his identity by signing a message at the bottom of his post, using the same key (the 1never…. key above) in his tricks. Below is a screenshot of how I verified (and how anybody can verify) the key.

Moreover, these pranks should be seen in context. Goofball shenanigans on the blockchain are really, really common. An example is the following transaction:

Notice the vanity bitcoin address transferring money to the Silk Road account. There is also a “Public Note” on this transaction, a feature unique to BlockChain.info — which recently removed the feature because it was so extensively abused.

Bitcoin also has a feature where 40 bytes of a message can be added to transactions. The first transaction sending bitcoins to both Shadow Brokers and Silk Road was this one. If you tell it to “show scripts”, you see that it contains an email address for Cryptome, the biggest and oldest Internet leaks site (albeit not as notorious as Wikileaks).
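
Reading such an embedded message is straightforward: the data sits in an OP_RETURN output, and decoding it is just stripping the opcode bytes and interpreting the rest as text. The hex below is a made-up example payload, not the actual Cryptome transaction:

```python
# OP_RETURN scripts start with opcode 0x6a, followed by a one-byte push
# length (valid for payloads up to 75 bytes) and then the raw data.
# This example payload is invented: "cryptome@example.com" (20 bytes, 0x14).
script_hex = "6a14" + "63727970746f6d65406578616d706c652e636f6d"

def decode_op_return(script):
    raw = bytes.fromhex(script)
    assert raw[0] == 0x6A, "not an OP_RETURN script"
    length = raw[1]                       # single-byte push length
    return raw[2:2 + length].decode("ascii")

print(decode_op_return(script_hex))  # -> cryptome@example.com
```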

The point is this: shenanigans and pranks are common on the Internet. What we see with Shadow Brokers is normal trickery. If you are unfamiliar with Bitcoin culture, it may look like some extra special trickery just for Shadow Brokers, but it isn’t.

After much criticism about why his first blogpost was wrong, Krypt3ia published a second. The point of the second was to lambaste his critics: just because he jotted down some idle thoughts in a post doesn’t make him responsible for journalists like ZDNet picking it up as a story that’s now suddenly being passed around.

But he continues with the claim that there is somehow evidence of government involvement, even though his original claim of payments from Silk Road was wrong. As he says:

However, my contention still stands that there be some fuckery going on here with those wallet transactions by the looks of it and that the likely candidate would be the government

Krypt3ia goes onto then claim, about the Rick Astley trick:

So yeah, these accounts as far as I can tell so far without going and spending way to many fucking hours on bitcoin.ifo or some such site, were created to purposely rick roll and fuck with the ShadowBrokers. Now, they may be fractions of bitcoins but I ask you, who the fuck has bitcoin money to burn here? Any of you out there? I certainly don’t and the way it was done, so tongue in cheek kinda reminds me of the audacity of TAO…

Who has bitcoin money to burn? The answer is everyone. Krypt3ia obviously isn’t paying attention to the value of bitcoin here, which is pennies. Each transaction of 0.0001337 bitcoins is worth about 10 cents at current exchange rates, meaning this Rick Roll was less than $1. It takes minutes to open an account (like at Circle.com) and use your credit card (or debit card) to buy $1 worth of bitcoin and carry out this prank.
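
The arithmetic is worth making concrete. The exchange rate and transaction count below are assumptions (roughly the August 2016 rate, and a guessed number of prank payments), but the order of magnitude is the point:

```python
# Back-of-the-envelope cost of the Rick Roll prank.
btc_per_tx = 0.0001337     # amount in each prank transaction (from the post)
usd_per_btc = 575.0        # assumed ~August 2016 exchange rate
prank_txs = 6              # assumed number of prank transactions

cost_per_tx = btc_per_tx * usd_per_btc
total = cost_per_tx * prank_txs
print(f"${cost_per_tx:.2f} per transaction, ${total:.2f} for the whole prank")
```

A few dimes total: anyone with a debit card and ten spare minutes can afford this, which is why the prank tells us nothing about who sent it.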

He goes on to say:

If you also look at the wallets that I have marked with the super cool “Invisible Man” logo, you can see how some of those were actually transfering money from wallet to wallet in sequence to then each post transactions to Shadow. Now what is that all about huh? More wallets acting together? As Velma would often say in Scooby Doo, JINKY’S! Something is going on there.

Well, no, it’s normal bitcoin transactions. (I’ve made this mistake too: learned about it, then forgot about it, then had to relearn it.) A Bitcoin transaction needs to consume all the previous transactions that it refers to. This invariably leaves some bitcoin left over, so it has to be transferred back into the user’s wallet. Thus, in my hijinx at the top of this post, you see the address 1HFWw… receives most of the bitcoin. That address was newly created by my wallet back in 2014 to receive the unspent portions of transactions. While it looks strange, it’s perfectly normal.
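
The mechanics above can be sketched in a few lines. The addresses and amounts are invented for the demo; the point is that the "extra" output going back to the sender is a structural requirement, not a signal:

```python
# Minimal sketch of the UTXO behavior described above: a transaction must
# spend its input outputs in full, so anything beyond the payment and the
# fee is sent back to a fresh change address in the sender's own wallet.
def build_tx(utxos, payment, fee, change_address):
    """utxos: list of (txid, amount) being consumed in full."""
    total_in = sum(amount for _txid, amount in utxos)
    change = total_in - payment - fee
    if change < 0:
        raise ValueError("insufficient funds")
    outputs = [("recipient_address", payment)]
    if change > 0:
        # The "strange-looking" output: most of the coin returns to the sender.
        outputs.append((change_address, change))
    return outputs

# Spend a 0.5 BTC output to make a 0.0001337 BTC prank payment: nearly all
# of it flows to the wallet's own change address, which is perfectly normal.
outputs = build_tx([("abc123", 0.5)], 0.0001337, 0.0001, "1HFWw_change")
print(outputs)
```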

It’s easy to point out that Krypt3ia just doesn’t understand much about bitcoin, and is getting excited by Maltego output he doesn’t understand.

But the real issue is confirmation bias. He’s developed a theory and searches for confirmation of that theory. He says “there are connections that cannot be discounted,” when in fact all the connections can easily be discounted with more research, with more knowledge. When he gets attacked, he becomes even more motivated to search for reasons why he’s actually right. He’s not motivated to be proven wrong.

And this is the case with most “attribution” in cybersec. We don’t have smoking guns (such as bitcoin coming from the Silk Road account), and must make do with flimsy data (like here, bitcoin going to the Silk Road account). Sometimes our intuition is right, and this flimsy data does indeed point us to the hacker. In other cases, it leads us astray, as I’ve documented before in this blog. The less we understand something, the more it confirms our theory rather than confirming that we just don’t understand. That “we just don’t know” is rarely an acceptable answer.

I point this out because I’m always the skeptic when the government attributes attacks to North Korea, China, Russia, Iran, and so on. I’ve seen them be right sometimes, and I’ve seen them be absolutely wrong. And when they are wrong, it’s easy figuring out why — because of things like confirmation bias.

Maltego plugin showing my Bitcoin hijinx transaction from above

Creating vanity addresses, for rickrolling or other reasons

Seattle AWS Big Data Meetup: Building Smart Healthcare Applications on AWS

Post Syndicated from Andy Werth original https://blogs.aws.amazon.com/bigdata/post/Tx3906R4W1RTTSO/Seattle-AWS-Big-Data-Meetup-Building-Smart-Healthcare-Applications-on-AWS

Please join us at the upcoming Seattle AWS Big Data Meetup on Wednesday, August 31. The topic is “Building Smart Healthcare Apps on AWS,” with a spotlight on machine learning.

Join now and get details on the Meetup page

Lisa McFerrin, PhD, Bioinformatics is a Project Manager for Seattle Translational Tumor Research at Fred Hutchinson Cancer Research Center. She will speak about Oncoscape, an open-source web application used to apply and develop analysis tools for molecular and clinical data.  Oncoscape enables researchers to discover new patterns and relationships to advance cancer research.

Chris Crosbie is a Solutions Architect with AWS and a statistician by training. In his talk, you will learn about building smart healthcare applications with AWS and the Amazon Machine Learning Service (see this blog post for background). The presentation will focus on a common problem faced by the healthcare industry: reducing patient readmissions.

We invite you to think about the following questions in advance:

  1. What smart applications, if any, have you heard about–or built yourself–that use big data solutions to address challenges in healthcare?
  2. Based on your understanding of machine learning, how can machine learning help you address the specific opportunities and challenges that your organization faces?

If you’re new to machine learning or smart healthcare applications, check out the following:

Amazon Machine Learning (Provides background on machine learning in general and Amazon Machine Learning in particular)

Building a Numeric Regression Model with Amazon Machine Learning (Do-it-now tutorial for building a solution to one of the most common machine learning use cases)

Readmission Prediction Through Patient Risk Stratification Using Amazon Machine Learning (We’ll discuss this post during the Meetup)

Cloud Computing in Healthcare (Learn more about building smart healthcare solutions on AWS)

See you there!

 

United Airlines Sets Minimum Bar on Security

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/08/united-airlines-sets-minimum-bar-on-security/

United Airlines has rolled out a series of updates to its Web site that the company claims will help beef up the security of customer accounts. But at first glance, the core changes — moving from 4-digit PINs to passwords and requiring customers to pick five different security questions and answers — may seem like a security playbook copied from Yahoo.com, circa 2009. Here’s a closer look at what’s changed in how United authenticates customers, and hopefully a bit of insight into what the nation’s fourth-largest airline is trying to accomplish with its new system.

United, like many other carriers, has long relied on a frequent flyer account number and a 4-digit personal identification number (PIN) for authenticating customers at its Web site. This has left customer accounts ripe for takeover by crooks who specialize in hacking and draining loyalty accounts for cash.

Earlier this year, however, United began debuting new authentication systems wherein customers are asked to pick a strong password and to choose from five sets of security questions and pre-selected answers. Customers may be asked to provide the answers to two of these questions if they are logging in from a device United has never seen associated with that account, trying to reset a password, or interacting with United via phone.

Some of the questions and answers United came up with.

Yes, you read that right: The answers are pre-selected as well as the questions. For example, to the question “During what month did you first meet your spouse or significant other,” users may select only from one of…you guessed it — 12 answers (January through December).
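Pre-selecting the answers caps the guessing difficulty at the size of the answer list. A back-of-the-envelope entropy sketch (the 25-topping count is my assumption for illustration, not United's actual menu length):

```python
import math

# Approximate guessing entropy of pre-selected answer sets.
# The pizza-topping count is an illustrative assumption.
answer_sets = {
    "month you met your spouse": 12,       # January through December
    "favorite pizza topping": 25,          # assumed menu length
    "4-digit PIN (for comparison)": 10_000,
}

for question, n in answer_sets.items():
    print(f"{question}: {n} options ~ {math.log2(n):.1f} bits")

# Even two questions combined (12 * 25 = 300 combinations) offer
# fewer possibilities than the 4-digit PIN United is retiring.
print(f"two questions combined ~ {math.log2(12 * 25):.1f} bits")
```

At roughly 8 bits for two answers versus about 13 bits for a random 4-digit PIN, the pre-selected scheme is actually weaker than the system it replaces against pure guessing.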

The list of answers to another security question, “What’s your favorite pizza topping,” had me momentarily thinking I was using a pull-down menu at Dominos.com — waffling between “pepperoni” and “mashed potato.” (Fun fact: If you were previously unaware that mashed potatoes qualify as an actual pizza topping, United has you covered with an answer to this bit of trivia in its Frequently Asked Questions page on the security changes.)

I recorded a short video of some of these rather unique questions and answers.

United said it opted for pre-defined questions and answers because the company has found “the majority of security issues our customers face can be traced to computer viruses that record typing, and using predefined answers protects against this type of intrusion.”

This struck me as a dramatic oversimplification of the threat. I asked United why they stated this, given that any halfway decent piece of malware that is capable of keylogging is likely also doing what’s known as “form grabbing” — essentially snatching data submitted in forms — regardless of whether the victim types in this information or selects it from a pull-down menu.

Benjamin Vaughn, director of IT security intelligence at United, said the company was randomizing the questions to confound bot programs that seek to automate the submission of answers, and that security questions answered wrongly would be “locked” and not asked again. He added that multiple unsuccessful attempts at answering these questions could result in an account being locked, necessitating a call to customer service.

United said it plans to use these same questions and answers — no longer passwords or PINs — to authenticate those who call in to the company’s customer service hotline. When I went to step through United’s new security system, I discovered my account was locked for some reason. A call to United customer service unlocked it in less than two minutes. All the agent asked me for was my frequent flyer number and my name.

(Incidentally, United still somewhat relies on “security through obscurity” to protect the secrecy of customer usernames by very seldom communicating the full frequent flyer number in written and digital communications with customers. I first pointed this out in my story about the data that can be gleaned from a United boarding pass barcode, because while the full frequent flyer number is masked with “x’s” on the boarding pass, the full number is stored on the pass’s barcode).

Conventional wisdom dictates that what little additional value security questions add to the equation is nullified when the user is required to choose from a set of pre-selected answers. After all, the only sane and secure way to use secret questions, if one must, is to pick answers that are not only incorrect and/or irrelevant to the question, but that also can’t be guessed or gleaned by collecting facts about you from background checking sites or from your various social media presences online.

Google published some fascinating research last year that spoke to the efficacy and challenges of secret questions and answers, concluding that they are “neither secure nor reliable enough to be used as a standalone account recovery mechanism.”

Overall, the Google research team found the security answers are either somewhat secure or easy to remember — but rarely both. Put another way, easy answers aren’t secure, and hard answers aren’t as usable.

But wait, you say: United asks you to answer up to five security questions. So more security questions equals more layers for the bad guys to hack through, which equals more security, right? Well, not so fast, the Google security folks found.

“When users had to answer both together, the spread between the security and usability of secret questions becomes increasingly stark,” the researchers wrote. “The probability that an attacker could get both answers in ten guesses is 1%, but users will recall both answers only 59% of the time. Piling on more secret questions makes it more difficult for users to recover their accounts and is not a good solution, as a result.”
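The quoted figures are consistent with two roughly independent questions; a quick sanity check under that independence assumption (the per-question rates below are back-solved from the quoted combined figures, not taken from the paper):

```python
# Back-solving per-question rates from Google's quoted combined figures,
# assuming the two questions behave independently.
p_guess_both = 0.01    # "attacker could get both answers in ten guesses is 1%"
p_recall_both = 0.59   # "users will recall both answers only 59% of the time"

p_guess_one = p_guess_both ** 0.5    # implied ~10% per question
p_recall_one = p_recall_both ** 0.5  # implied ~77% per question

print(f"implied attacker success per question: {p_guess_one:.0%}")
print(f"implied user recall per question:      {p_recall_one:.0%}")
# Each extra question multiplies both probabilities down: security
# improves, but so does the chance of locking out the legitimate user.
```

This is exactly the "increasingly stark" spread the researchers describe: stacking questions punishes attackers and legitimate users at the same multiplicative rate.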

Vaughn said the beauty of United’s approach is that it uniquely addresses the problem identified by Google researchers — that so many people in the study had so much trouble remembering the answers — by providing users with a set of pre-selected answers from which to choose.

An infographic from Google’s research study on secret questions. Source: Google.

The security team at United reached out a few weeks back to highlight the new security changes, and in a conversation this week they asked what I thought about their plan. I replied that if United is getting pushback from security experts and tech publications about its approach, that’s probably because security people are techies/nerds at heart, and techies/nerds want options and stuff. Or at least the ability to add, enable or disable certain security features.

But the reality today is that almost any security system designed for use by tens of millions of people who aren’t techies is always going to cater to the least sophisticated user on the planet — and that’s about where the level of security for that system is bound to stay for a while.

So I told the United people that I was somewhat despondent about this reality, mainly because I end up having little other choice but to fly United quite often.

“At the scale that United faces, we felt this approach was really optimal to fix this problem for our customers,” Vaughn said. “We have to start with something that is universally available to our customers. We can’t send a text message to you when you’re on an airplane or out of the country, we can’t rely on all of our customers to have a smart phone, and we didn’t feel it would be a great use of our customers’ time to send them in the mail 93 million secure ID tokens. We felt a powerful onus to do something, and the something we implemented we feel improves security greatly, especially for non-technical savvy customers.”

Arlan McMillan, United’s chief information security officer, said the basic system that the company has just rolled out is built to accommodate additional security features going forward. McMillan said United has discussed rolling out some type of app-based time-based one-time password (TOTP) systems (Google Authenticator is one popular TOTP example).
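For reference, a TOTP code is just an HMAC over the current 30-second time window, truncated to a few digits. A minimal sketch using only the Python standard library (the RFC 6238 HMAC-SHA1 variant; the secret shown is the RFC's test value, nothing United- or Google-specific):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                 # 30-second time window
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32), T=59, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # → 94287082
```

Both the server and the authenticator app compute the same value from the shared secret, so nothing secret crosses the wire at login time — which is what makes TOTP a meaningful step up from static questions and answers.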

“It is our intent to provide additional capabilities to our customers, and to even bring in additional security controls if [customers] choose to,” McMillan said. “We set the minimum bar here, and we think that’s a higher bar than you’re going to find at most of our competitors. And we’re going to do more, but we had to get this far first.”

Lest anyone accuse me of claiming that the thrust of this story is somehow newsy, allow me to recommend some related, earlier stories worth reading about United’s security changes:

TechCrunch: It’s Time to Publicly Shame United Airlines’ So-called Online Security

Slate: United Airlines Uses Multiple Choice Security Questions

A Life or Death Case of Identity Theft?

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/08/a-life-or-death-case-of-identity-theft/

Identity thieves have perfected a scam in which they impersonate existing customers at retail mobile phone stores, pay a small cash deposit on pricey new phones, and then charge the rest to the victim’s account. In most cases, switching on the new phones causes the victim account owner’s phone(s) to go dead. This is the story of a Pennsylvania man who allegedly died of a heart attack because his wife’s phone was switched off by ID thieves and she was temporarily unable to call for help.

On Feb. 20, 2016, James William Schwartz, 84, was going about his daily routine, which mainly consisted of caring for his wife, MaryLou. Mrs. Schwartz was suffering from the end stages of endometrial cancer and wasn’t physically mobile without assistance. When Mr. Schwartz began having a heart attack that day, MaryLou went to use her phone to call for help and discovered it was completely shut off.

Little did MaryLou know, but identity thieves had the day before entered a “premium authorized Verizon dealer” store in Florida and impersonated the Schwartzes. The thieves paid a $150 cash deposit to “upgrade” the elderly couple’s simple mobiles to new iPhone 6s devices, with the balance to be placed on the Schwartzes’ account.

“Despite her severely disabled and elderly condition, MaryLou Schwartz was finally able to retrieve her husband’s cellular telephone using a mechanical arm,” reads a lawsuit (PDF) filed in Beaver County, Penn. on behalf of the Schwartzes’ two daughters, alleging negligence by the Florida mobile phone store. “This monumental, determined and desperate endeavor to reach her husband’s working telephone took Mrs. Schwartz approximately forty minutes to achieve due to her condition. This vital delay in reaching emergency help proved to be fatal.”

By the time paramedics arrived, Mr. Schwartz was pronounced dead. MaryLou Schwartz died seventeen days later, on March 8, 2016. Incredibly, identity thieves would continue robbing the Schwartzes even after they were both deceased: According to the lawsuit, on April 14, 2016 the account of MaryLou Schwartz was again compromised and a tablet device was also fraudulently acquired in MaryLou’s name.

The Schwartzes’ daughters say they didn’t learn about the fraud until after both parents had passed away. According to them, they heard about it from an employee at a local Verizon reseller who noticed that his longtime customers’ phones had been deactivated. That’s when they discovered that while their mother’s phone was inactive at the time of their father’s death, their father’s mobile had inexplicably been able to make but not receive phone calls.

KNOW YOUR RIGHTS AND OPTIONS

Exactly what sort of identification was demanded of the thieves who impersonated the Schwartzes is in dispute at the moment. But it seems clear that this is a fairly successful and common scheme for thieves to steal (and, in all likelihood, resell) high-end phones.

Lorrie Cranor, chief technologist for the U.S. Federal Trade Commission, was similarly victimized this summer when someone walked into a mobile phone store, claimed to be her, asked to upgrade her phones and walked out with two brand new iPhones assigned to her telephone numbers.

“My phones immediately stopped receiving calls, and I was left with a large bill and the anxiety and fear of financial injury that spring from identity theft,” Cranor wrote in a blog on the FTC’s site.  Cranor’s post is worth a read, as she uses the opportunity to explain how she recovered from the identity theft episode.

She also used her rights under the Fair Credit Reporting Act, which requires that companies provide business records related to identity theft to victims within 30 days of receiving a written request. Cranor said the mobile store took about twice that time to reply, but ultimately explained that the thief had used a fake ID with Cranor’s name but the impostor’s photo.

“She had acquired the iPhones at a retail store in Ohio, hundreds of miles from where I live, and charged them to my account on an installment plan,” Cranor wrote. “It appears she did not actually make use of either phone, suggesting her intention was to sell them for a quick profit. As far as I’m aware the thief has not been caught and could be targeting others with this crime.”

Cranor notes that records of identity thefts reported to the FTC provide some insight into how often thieves hijack a mobile phone account or open a new mobile phone account in a victim’s name.

“In January 2013, there were 1,038 incidents of these types of identity theft reported, representing 3.2% of all identity theft incidents reported to the FTC that month,” she explained. “By January 2016, that number had increased to 2,658 such incidents, representing 6.3% of all identity thefts reported to the FTC that month.  Such thefts involved all four of the major mobile carriers.”

The reality, Cranor said, is that identity theft reports to the FTC likely represent only the tip of a much larger iceberg. According to data from the Identity Theft Supplement to the 2014 National Crime Victimization Survey conducted by the U.S. Department of Justice, less than 1% of identity theft victims reported the theft to the FTC.

While dealing with diverted calls can be a hassle, having your phone calls and incoming text messages siphoned to another phone also can present new security problems, thanks to the growing use of text messages in authentication schemes for financial services and other accounts.

Perhaps the most helpful part of Cranor’s post is a section on the security options offered by the four major mobile providers in the U.S. For example, AT&T offers an “extra security” feature that requires customers to present a custom passcode when dealing with the wireless provider via phone or online.

“All of the carriers have slightly different procedures but seem to suffer from the same problem, which is that they’re relying on retail store employees to look at the driver’s license,” Cranor told KrebsOnSecurity. “They don’t use services that will check the information on the driver’s license, and so that [falls to] the store employee who has no training in spotting fake IDs.”

Some of the security options offered by the four major providers. Source: FTC.

It’s important to note that secret passcodes often can be bypassed by determined attackers or identity thieves who are adept at social engineering — that is, tricking people into helping them commit fraud.

I’ve used a six-digit passcode for more than two years on my account with AT&T, and last summer noticed that I’d stopped receiving voicemails. A call to AT&T’s customer service revealed that all voicemails were being forwarded to a number in Seattle that I did not recognize or authorize.

Since it’s unlikely that the attackers in this case guessed my six-digit PIN, they likely tricked a customer service representative at AT&T into “authenticating” me via other methods — probably by offering static data points about me such as my Social Security number, date of birth, and other information that is widely available for sale in the cybercrime underground on virtually all Americans over the age of 35. In any case, Cranor’s post has inspired me to exercise my rights under the FCRA and find out for certain.

Vineetha Paruchuri, a master’s student in computer science at Dartmouth College, recently gave a talk at the BSides security conference in Las Vegas on her research into security at the major U.S. mobile phone providers. Paruchuri said all of the major mobile providers suffer from a lack of strict protocols for authenticating customers, leaving customer service personnel exposed to social engineering.

“As a computer science student, my contention was that if we take away the control from the humans, we can actually make this process more secure,” Paruchuri said.

Paruchuri said perhaps the most dangerous threat is the smooth-talking social engineer who spends time collecting information about the verbal shorthand or mobile industry patois used by employees at these companies. The thief then simply phones up customer support and poses as a mobile store technician or employee trying to assist a customer. This was the exact approach used in 2014, when young hooligans tricked my then-ISP Cox Communications into resetting the password for my Cox email account.

I suppose one aspect of this problem that makes the lack of strong customer authentication measures by the mobile industry so frustrating is that it’s hard to imagine a device which holds more personal and intimate details about you than your wireless phone. After all, your phone likely knows where you were last night, when you last traveled, the phone number you last called and numbers you most frequently text.

And yet, the best the mobile providers and their fleet of reseller stores can do to tell you apart from an ID thief is to store a PIN that could be bypassed by clever social engineers (who may or may not be shaving yet).

A NOTE FOR AT&T READERS

By the way, readers with AT&T phones may have received a notice this week that AT&T is making some changes to “authorized users” allowed on accounts. The notice advised that starting Sept. 1, 2016, customers can designate up to 10 authorized users per account.

“If your Authorized User does not know your account passcode or extra security passcode, your Authorized User may still access your account in a retail store using a Forgotten Passcode process. Effective Nov. 5, 2016, Authorized Users and those persons who call into Customer Service and provide sufficient account information (“Authenticated Callers”) will have the ability to add a new line of service to your account. Such requests, whether made by you, an Authorized User, an Authenticated Caller or someone with online access to your account, will trigger a credit check on you.”

AT&T’s message this week about upcoming account changes.

I asked AT&T about what need this new policy was designed to address, and the company responded that AT&T has made no changes to how an authorized user can be added to an account. AT&T spokesman Jim Greer sent me the following:

“With this notice, we are simply increasing the number of authorized users you may add to your account and giving them the ability to add a line in stores or over the phone. We made this change since more customers have multiple lines for multiple people. Authorized users still cannot access the account holder’s sensitive personal information.”

“Over the past several years, the authentication process has been strengthened. In stores, we’re safeguarding customers through driver’s license or other government issued ID authentication.  We use a two-factor authentication when you contact us online or by phone that requires a one-time PIN. We’re continuing our efforts to better protect customers, with additional improvements on the horizon.”

“You don’t have to designate anyone to become an authorized user on your account. You will be notified if any significant changes are made to your account by an authorized user, and you can remove any person as an authorized user at any time.”

The rub is what AT&T does — or more specifically, what the AT&T customer representative does — to verify your identity when the caller says he doesn’t remember his PIN or passcode. If they allow PIN-less authentication by asking for your Social Security number, date of birth and other static information about you, ID thieves can defeat that easily.

Has someone fraudulently ordered phone service or phones in your name? Sound off in the comments below.

If you’re wondering what you can do to shield yourself and your family against identity theft, check out these primers:

How I Learned to Stop Worrying and Embrace the Security Freeze (this primer goes well beyond security freezes and includes a detailed Q&A as well as other tips to help prevent and recover from ID theft).

Are Credit Monitoring Services Worth It? 

What Tax Fraud Victims Can Do

The Lowdown on Freezing Your Kid’s Credit

Research on the Timing of Security Warnings

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/research_on_the_2.html

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.

A new study from BYU, in collaboration with Google Chrome engineers, finds the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these times are less effective because of “dual task interference,” a neural limitation where even simple tasks can’t be simultaneously performed without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”

[…]

For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Research paper. News article.

Bugs don’t come from the Zero-Day Faerie

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/bugs-come-from-zero-day-faerie.html

This WIRED “article” (aka. thinly veiled yellow journalism) demonstrates the essential thing wrong with the 0day debate. Those arguing for NSA disclosure of 0days believe the Zero-Day Faerie brings them, that sometimes when the NSA wakes up in the morning, it finds a new 0day under its pillow.

The article starts with the sentences:

When the NSA discovers a new method of hacking into a piece of software or hardware, it faces a dilemma. Report the security flaw it exploits to the product’s manufacturer so it gets fixed, or keep that vulnerability secret—what’s known in the security industry as a “zero day”—and use it to hack its targets, gathering valuable intelligence.

But the NSA doesn’t accidentally “discover” 0days — it hunts for them, for the purpose of hacking. The NSA first decides it needs a Cisco 0day to hack terrorists, then spends hundreds of thousands of dollars either researching or buying the 0day. The WIRED article imagines that at this point, late in the decision cycle, that suddenly this dilemma emerges. It doesn’t.

The “dilemma” starts earlier in the decision chain. Is it worth it for the government to spend $100,000 to find and disclose a Cisco 0day? Or is it worth $100,000 for the government to find a Cisco 0day and use it to hack terrorists?

The answers are obviously “no” and “yes”. There is little value to the national interest in spending $100,000 to find a Cisco 0day. There are so many more undiscovered vulnerabilities that this will make little dent in the total number of bugs. Sure, in the long run, “vuln disclosure” makes computers more secure, but a large government investment in vuln disclosure (and bug bounties) will only be a small increase on the total vuln disclosure that happens without government involvement.

Conversely, if it allows the NSA to hack into a terrorist network, $100,000 is cheap, and an obvious benefit.

My point is this. There are legitimate policy questions about government hacking and use of 0days. At the bare minimum, there should be more transparency. But the premises of activists like Andy Greenberg are insane. NSA 0days aren’t accidentally “discovered”; they don’t come from a magic Zero-Day Faerie. The NSA instead hunts for them, after it has come up with a clearly articulated need for one that exceeds mere disclosure.


Credit: @dinodaizovi, among others, has recently tweeted that “discover” is a flawed term that derails the 0day debate, as those like Greenberg assume it means as he describes it in his opening paragraph, that the NSA comes across them accidentally. Dino suggested the word “hunt” instead.

Prisoner’s Dilemma Experiment Illustrates Four Basic Phenotypes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/prisoners_dilem.html

If you’ve read my book Liars and Outliers, you know I like the prisoner’s dilemma as a way to think about trust and security. There is an enormous amount of research — both theoretical and experimental — about the dilemma, which is why I found this new research so interesting. Here’s a decent summary:

The question is not just how people play these games — there are hundreds of research papers on that — but instead whether people fall into behavioral types that explain their behavior across different games. Using standard statistical methods, the researchers identified four such player types: optimists (20 percent), who always go for the highest payoff, hoping the other player will coordinate to achieve that goal; pessimists (30 percent), who act according to the opposite assumption; the envious (21 percent), who try to score more points than their partners; and the trustful (17 percent), who always cooperate. The remaining 12 percent appeared to make their choices completely at random.
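A rough illustration of how the four types diverge, using a stag hunt (one of the dyadic games in this line of research) rather than a one-shot prisoner's dilemma, where three of the four rules would collapse into defection. The payoffs and decision rules below are my simplified reading of the summary, not the paper's actual model:

```python
# Stag hunt payoffs (me, other): hunting stag pays off only if both do it;
# hunting hare is safe but modest. Values are illustrative.
PAYOFF = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 2),
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),
}

def choose(phenotype):
    """Simplified decision rules for the four phenotypes (my reading)."""
    rules = {
        "optimist": "stag",   # go for the highest payoff, hoping the other coordinates
        "pessimist": "hare",  # assume the worst; take the safe payoff
        "envious": "hare",    # hare guarantees never scoring below the partner
        "trustful": "stag",   # always cooperate
    }
    return rules[phenotype]

for a in ("optimist", "pessimist", "envious", "trustful"):
    for b in ("optimist", "pessimist", "envious", "trustful"):
        mine, theirs = PAYOFF[(choose(a), choose(b))]
        print(f"{a:9} vs {b:9}: {mine} / {theirs}")
```

Note how an optimist paired with a trustful partner lands on the best joint outcome, while the same optimist paired with a pessimist or an envious player gets the sucker's payoff — which is the sense in which behavioral type, not just the game, drives the result.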

Powerful Bit-Flipping Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/powerful_bit-fl.html

New research: “Flip Feng Shui: Hammering a Needle in the Software Stack,” by Kaveh Razavi, Ben Gras, Erik Bosman, Bart Preneel, Cristiano Giuffrida, and Herbert Bos.

Abstract: We introduce Flip Feng Shui (FFS), a new exploitation vector which allows an attacker to induce bit flips over arbitrary physical memory in a fully controlled way. FFS relies on hardware bugs to induce bit flips over memory and on the ability to surgically control the physical memory layout to corrupt attacker-targeted data anywhere in the software stack. We show FFS is possible today with very few constraints on the target data, by implementing an instance using the Rowhammer bug and memory deduplication (an OS feature widely deployed in production). Memory deduplication allows an attacker to reverse-map any physical page into a virtual page she owns as long as the page’s contents are known. Rowhammer, in turn, allows an attacker to flip bits in controlled (initially unknown) locations in the target page.

We show FFS is extremely powerful: a malicious VM in a practical cloud setting can gain unauthorized access to a co-hosted victim VM running OpenSSH. Using FFS, we exemplify end-to-end attacks breaking OpenSSH public-key authentication, and forging GPG signatures from trusted keys, thereby compromising the Ubuntu/Debian update mechanism. We conclude by discussing mitigations and future directions for FFS attacks.
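To get a feel for the leverage a single controlled bit flip provides, here is a toy simulation of just the corruption step. It is not the actual attack — the real thing uses Rowhammer plus memory deduplication to flip the bit in physical memory, weakening stored key material until it becomes factorable — but it shows how one flipped bit yields entirely different key data:

```python
import hashlib

# Toy stand-in for a victim's stored public key -- not a real key.
key = bytearray(b"ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB victim@host")

def fingerprint(data):
    """Short hash of the key bytes, standing in for a key fingerprint."""
    return hashlib.sha256(bytes(data)).hexdigest()[:16]

before = fingerprint(key)

# Flip a single bit (bit 3 of byte 10). Flip Feng Shui achieves such a
# targeted flip in physical memory; here we simply simulate the corruption.
key[10] ^= 0b00001000

after = fingerprint(key)
print(before, "->", after)
```

Any system that checks authentication against the stored bytes is now checking against attacker-influenced data, which is why the paper's OpenSSH and GPG attacks work end to end.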

Applying the Business Source Licensing (BSL)

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2016/08/applying-business-source-licensing-bsl.html

I believe that Open Source is one of the best ways to develop software. However, as I have written in blogs before, the Open Source model presents challenges to creating a software company that has the needed resources to continually invest in product development and innovation.

One reason for this is a lack of understanding of the costs associated with developing and extending software. As one example of what I regard to be unrealistic user expectations, here is a statement from a large software company when I asked them to support MariaDB development with financial support:

“As you may remember, we’re a fairly traditional and conservative company. A donation from us would require feature work in exchange for the donation. Unfortunately, I cannot think of a feature that I would want developed that we would be willing to pay for this year.”

This thinking is flawed on many fronts — a new feature can take more than a year to develop! It also shows that the company saw that features create value it would invest in, but was not willing to pay for features that had already been developed and was not prepared to invest in keeping alive a product it depends upon. The company also doesn’t trust the development team with the ability to independently define new features that would bring value. Without that investment, a technology company cannot invest in ongoing research and development, thereby dooming its survival.

To be able to compete with closed source technology companies who have massive profit margins, one needs income.

Dual licensing on Free Software, as we applied it at MySQL, works only for a limited subset of products (something I have called ‘infrastructure software’) that customers need to combine with their own closed source software and distribute to their customers. Most software products are not like that. This is why David Axmark and I created the Business Source license (BSL), a license designed to harmonize producing Open Source software and running a successful software company.

The intent of BSL is to increase the overall freedom and innovation in the software industry, for customers, developers, users and vendors. Finally, I hope that BSL will pave the way for a new business model that sustains software development without relying primarily on support.

For those who are interested in the background, Linus Nyman (a doctoral student at the Hanken School of Economics in Finland) and I worked together on an academic article on the BSL.

Today, MariaDB Corporation is excited to introduce the beta release of MariaDB MaxScale 2.0, our database proxy, which is released under BSL. I am very happy to see MariaDB MaxScale being released under BSL, rather than under an Open Core or Closed Source license.  Developing software under BSL will provide more resources to enhance it for future releases, in similar ways as Dual Licensing did for MySQL. MariaDB Corporation will over time create more BSL products. Even with new products coming under BSL, MariaDB Server will continue to be licensed under GPL in perpetuity. Keep in mind that because MariaDB Server extends earlier MySQL GPL code it is forever legally bound by the original GPL license of MySQL.

In addition to putting MaxScale under BSL, we have also created a framework to make it easy for anyone else to license their software under BSL.

Here follows the copyright notice used in the MaxScale 2.0 source code:

/*
 * Copyright (c) 2016 MariaDB Corporation Ab
 *
 * Use of this software is governed by the Business Source License
 * included in the LICENSE.TXT file and at www.mariadb.com/bsl.
 *
 * Change Date: 2019-01-01
 *
 * On the date above, in accordance with the Business Source
 * License, use of this software will be governed by version 2
 * or later of the General Public License.
 */

Two out of three top characteristics of the BSL are already shown here: The Change Date and the Change License. Starting on 1 January 2019 (the Change Date), MaxScale 2.0 is governed by GPLv2 or later (the Change License).

The centrepiece of the LICENSE.TXT file itself is this text:

Use Limitation: Usage of the software is free when your application uses the Software with a total of less than three database server instances for production purposes.

This third top characteristic is in effect until the Change Date.

What this means is that the software can be distributed, used, modified, etc., for free, within the use limitation. Beyond it, a commercial relationship is required – which, in the case of MaxScale 2.0, is a MariaDB Enterprise Subscription, which permits the use of MaxScale with three or more database servers.

You can find the full license text for MaxScale at mariadb.com/bsl and a general BSL FAQ at mariadb.com/bsl-faq-adopting. Feel free to copy or refer to them for your own BSL software!

The key characteristics of BSL are as follows:

  • The source code of BSL software is available in full from day one.
  • Users of BSL software can modify, distribute and compile the source.
  • Code contributions are encouraged and accepted through the “new BSD” license.
  • The BSL is purposefully designed to avoid vendor lock-in: users of BSL software do not depend on a single vendor for support, bug fixes or enhancements to the BSL product.
  • The Change Date and Change License provide a time-delayed safety net for users, should the vendor stop developing the software.
  • Testing BSL software is always free of cost.
  • Production use of the software is free of cost within the use limitation.
  • Adoption of BSL software is encouraged with use limitations that provide ample freedom.
  • Monetisation of BSL software is driven by incremental sales in cases where the use limitation applies.

Whether BSL will be widely adopted remains to be seen. It’s certainly my desire that this new business model will inspire companies who develop Closed Source software or Open Core software to switch to BSL, which will ultimately result in more Open Source software in the community. With BSL, companies can realize revenue similar to what they could with Closed Source or Open Core, while the free-of-cost usage in core production scenarios establishes a much larger user base to drive testing, innovation and adoption.

Yet Another Government-Sponsored Malware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/yet_another_gov.html

Both Kaspersky and Symantec have uncovered another piece of malware that seems to be a government design:

The malware — known alternatively as “ProjectSauron” by researchers from Kaspersky Lab and “Remsec” by their counterparts from Symantec — has been active since at least 2011 and has been discovered on 30 or so targets. Its ability to operate undetected for five years is a testament to its creators, who clearly studied other state-sponsored hacking groups in an attempt to replicate their advances and avoid their mistakes.

[…]

Part of what makes ProjectSauron so impressive is its ability to collect data from computers considered so sensitive by their operators that they have no Internet connection. To do this, the malware uses specially prepared USB storage drives that have a virtual file system that isn’t viewable by the Windows operating system. To infected computers, the removable drives appear to be approved devices, but behind the scenes are several hundred megabytes reserved for storing data that is kept on the “air-gapped” machines. The arrangement works even against computers in which data-loss prevention software blocks the use of unknown USB drives.

Kaspersky researchers still aren’t sure precisely how the USB-enabled exfiltration works. The presence of the invisible storage area doesn’t in itself allow attackers to seize control of air-gapped computers. The researchers suspect the capability is used only in rare cases and requires use of a zero-day exploit that has yet to be discovered. In all, Project Sauron is made up of at least 50 modules that can be mixed and matched to suit the objectives of each individual infection.

“Once installed, the main Project Sauron modules start working as ‘sleeper cells,’ displaying no activity of their own and waiting for ‘wake-up’ commands in the incoming network traffic,” Kaspersky researchers wrote in a separate blog post. “This method of operation ensures Project Sauron’s extended persistence on the servers of targeted organizations.”

We don’t know who designed this, but it certainly seems likely to be a country with a serious cyberespionage budget.

Visa Alert and Update on the Oracle Breach

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/08/visa-alert-and-update-on-the-oracle-breach/

Credit card industry giant Visa on Friday issued a security alert warning companies using point-of-sale devices made by Oracle‘s MICROS retail unit to double-check the machines for malicious software or unusual network activity, and to change passwords on the devices. Visa also published a list of Internet addresses that may have been involved in the Oracle breach and are thought to be closely tied to an Eastern European organized cybercrime gang.

The Visa alert is the first substantive document that tries to help explain what malware and which malefactors might have hit Oracle — and by extension many of Oracle’s customers — since KrebsOnSecurity broke news of the breach on Aug. 8. That story cited sources close to the investigation saying hackers had broken into hundreds of servers at Oracle’s retail division, and had completely compromised Oracle’s main online support portal for MICROS customers.

MICROS is among the top three point-of-sale vendors globally. Oracle’s MICROS division sells point-of-sale systems used at more than 330,000 cash registers worldwide. When Oracle bought MICROS in 2014, the company said MICROS’s systems were deployed at some 200,000+ food and beverage outlets, 100,000+ retail sites, and more than 30,000 hotels.

In short, tens of millions of credit cards are swiped at MICROS terminals monthly, and a breach involving the theft of credentials that might have granted remote access to even just a small percentage of those systems is potentially a big and costly problem for all involved.

So far, however, most MICROS customers are left scratching their heads for answers. A frequently asked questions bulletin (PDF) Oracle also released last Monday held little useful information. Oracle issued the same cryptic response to everyone who asked for particulars about how far the breach extended: “Oracle has detected and addressed malicious code in certain legacy MICROS systems.”

Oracle also urged MICROS customers to change their passwords, and said “we also recommend that you change the password for any account that was used by a MICROS representative to access your on-premises systems.”

One of two documents Oracle sent to MICROS customers and the sum total of information the company has released so far about the breach.

Some technology and fraud experts, including Gartner Analyst Avivah Litan, read that statement highlighted in yellow above as an acknowledgement by Oracle that hackers may have abused credentials gained in the MICROS portal breach to plant malicious code on the point-of-sale devices run by an unknown number of MICROS customers.

“This [incident] could explain a lot about the source of some of these retail and merchant point-of-sale hacks that nobody has been able to definitively tie to any one point-of-sale services provider,” Litan told me last week. “I’d say there’s a big chance that the hackers in this case found a way to get remote access” to MICROS customers’ on-premises point-of-sale devices.

Clearly, Visa is concerned about this possibility as well.

INDICATORS OF COMPROMISE

In my original story about the breach, I wasn’t able to reveal all the data I’d gathered about the apparent source of the attacks and attackers. A key source in that story asked that I temporarily delay publishing certain details of the investigation, specifically those known as indicators of compromise (IOCs). Basically, IOCs are lists of suspect Internet addresses, domain names, filenames and other curious digital clues that are thought to connect the victim with its attacker.

I’ve been inundated all week with calls and emails from security experts asking for that very data, but sharing it wasn’t my call. That is, until yesterday (8/12/16), when Visa published a “merchant communication alert” to some customers. In that alert (PDF), Visa published IOCs that may be connected with the intrusion. These IOCs could be extremely useful to MICROS customers because the presence of Internet traffic to and from these online destinations would strongly suggest the organization’s point-of-sale systems may be similarly compromised.

Some of the addresses on this list from Visa are known to be associated with the Carbanak Gang, a group of Eastern European hackers that Russian security firm Kaspersky Lab estimates has stolen more than $1 billion from banks and retailers. Here’s the IOCs list from the alert Visa pushed out Friday:

Visa warned merchants to check their systems for any communications to and from these Internet addresses and domain names associated with a Russian organized cybercrime gang called “Carbanak.”

Thankfully, since at least one of the addresses listed above (192.169.82.86) matched what’s on my source’s list, the source agreed to let me publish the entire thing. Here it is. I checked my source’s list and found at least five Internet addresses that were seen in both the Oracle attack and in a Sept. 2015 writeup about Carbanak by ESET Security, a Slovakian antivirus and security company. [NB: If you are unskilled at safely visiting malicious Web sites and/or handling malware, it’s probably best not to visit the addresses in the above-linked list.]

Visa also mentioned a specific POS-malware threat in its alert called “MalumPOS.” According to researchers at Trend Micro, MalumPOS is malware designed to target point-of-sale systems in hotels and related industries. In fact, Trend found that MalumPOS is set up to collect data specifically from point-of-sale systems running on Oracle’s MICROS platform.

It should come as no surprise then that many of Oracle’s biggest customers in the hospitality industry are starting to make noise, accusing Oracle of holding back key information that could help MICROS-based companies stop and clean up breaches involving malware and stolen customer credit card data.

“Oracle’s silence has been deafening,” said Michael Blake, chief executive officer at HTNG, a trade association for hotels and technology. “They are still grappling and trying to answer questions on the extent of the breach. Oracle has been invited to the last three [industry] calls this week and they are still going about trying to reach each customer individually and in the process of doing so they have done nothing but given the lame advice of changing passwords.”

The hospitality industry has been particularly hard hit by point-of-sale compromises over the past two years. Last month, KrebsOnSecurity broke the news of a breach at Kimpton Hotels (Kimpton appears to run MICROS products, but the company declined to answer questions for this story).

Kimpton joins a long list of hotel brands that have acknowledged card breaches over the last year, including Trump Hotels (twice), Hilton, Mandarin Oriental, White Lodging (twice), Starwood Hotels and Hyatt. In many of those incidents, thieves had planted malicious software on the point-of-sale devices at restaurants and bars inside of the hotel chains. And, no doubt, many of those cash registers were run on MICROS systems.

If Oracle doesn’t exactly know which — if any — of its MICROS customers had malware on their point-of-sale systems as a result of the breach, it may be because the network intruders didn’t have any reason to interact with Oracle’s customers via the MICROS portal after stealing usernames and passwords that would allow them to remotely access customer on-premises systems. In theory, at that point the fraudsters could have bypassed Oracle altogether from then on.

BREACHED BY MULTIPLE ACTORS?

Another possibly interesting development in the Oracle breach story: There are indications that Oracle may have been breached by more than one cybercrime group. Or at least handed off from one to the other.

Late this week, Thomas Fox-Brewster at Forbes published a story noting that MICROS was just one of at least five point-of-sale companies that were recently hacked by a guy who — from an exhaustive review of his online chats — appears to have just sat himself down one day and decided to hack a bunch of point-of-sale companies.

Forbes quoted my old friend Alex Holden of Hold Security saying he had evidence that hackers had breached at least 10 payment companies, and the story focuses on getting confirmation from the various other providers apparently breached by the same cybercriminal actor.

Holden showed me multiple pages worth of chat logs between two individuals on a cybercrime forum [full disclosure: Holden’s company lists me as an adviser, but I accept no compensation for that role, and he ignores most of my advice].

The discussion between the two hackers begins around July 15, 2016, and goes on for more than a week. In it, the two hackers have been introduced to one another through a mutual, trusted contact. For a while, all they discuss is whether the seller can be trusted to deliver the Oracle MICROS database and control over the Oracle MICROS customer ticketing portal.

In the end, the buyer is convinced by what he sees and agrees to pay the bitcoin equivalent of roughly $13,000 for access to Oracle’s MICROS portal, as well as a handful of other point-of-sale Web sites. The buyer’s bitcoin wallet and the associated transactions can be seen here.

A screen shot shared by one of the hackers involved in compromising Oracle’s MICROS support portal. This screen shot was taken of a similar Web shell the hackers placed on the Web site of another POS provider (this is not the shell that was on Oracle).

According to the chat log, the hacker broke in by exploiting a file-upload function built into the MICROS customer support portal. From there the attackers were able to upload an attack tool known as a “WSO Web Shell.” This is a crude but effective text-based control panel that helps the attacker install additional attack tools to harvest data from the compromised Web server (see screen shot above). The beauty of a Web shell is that the attacker can control the infected site using nothing more than a Web browser, a hidden login page and a password that only he knows.

The two hackers discussed, and both viewed, more than a half-dozen files that were apparently left behind on the MICROS portal by the WSO shell they uploaded in mid-July (most of the malicious files ended in the file extension “wso.aspx”). The chat logs show the pair of miscreants proceeding to target another nine online payment providers or point-of-sale vendors.

Some of those companies were quoted in the Forbes piece as having acknowledged a breach similar to the Web shell attack at Oracle. But none of them have anywhere near the size of Oracle’s MICROS customer base.

GOOD HOSPITALITY, OR SWEPT UNDER THE RUG?

Oracle maintains in its FAQ (PDF) about the MICROS attack that “Oracle’s Corporate network and Oracle’s other cloud and service offerings were not impacted.” But a confidential source within Oracle’s Hospitality Division told KrebsOnSecurity that the breach first started in one of Oracle’s major point-of-sale data centers — specifically the company’s large data center in Manassas, Va.

According to my source, that particular center helps large Oracle hospitality industry clients manage their fleets of MICROS point-of-sale devices.

“Initially, the customer’s network and the internal Oracle network were on the same network,” said my source, who spoke under condition of anonymity because he did not have permission from his employer to speak on the record. “The networking team did a network segmentation of these two networks — ironically for security purposes. However, it seems as if what they have done actually allowed access from the Russian Cybercrime group.”

My source said that in mid-July 2016 Oracle sent out an email alert to employees of its hospitality division that they had to re-image their laptops without backing anything up.

“All of the files and software that were on an employee’s computer were deleted, which was crippling to business operations,” my source recalled. “Project management lost all their schedules, deployment teams lost all the software that they use to install on customer sites. Oracle did not tell the employees in this email that they got hacked but just to re-image everything with no backups. It seems as if Oracle did a pretty good job sweeping this incident under the rug. Most employees don’t know about the hack and it hasn’t been a huge deal to the customers. However, it is estimated that this cost them billions, so it is a really major breach.”

I sent Oracle a litany of questions based on the above, but a spokesperson for the company said Oracle would comment on none of it.

Secure Boot snafu: Microsoft leaks backdoor key, firmware flung wide open (Ars Technica)

Post Syndicated from jake original http://lwn.net/Articles/697059/rss

Ars Technica is reporting on a mistake by Microsoft that resulted in providing a “golden key” to circumvent Secure Boot. The “key” is not really a key at all, but a debugging tool, inadvertently left in some versions of Windows devices, that was found by two security researchers; the details were released on a “rather funky website” (viewing the source of that page is a good way to avoid the visual and audio funkiness).
“The key basically allows anyone to bypass the provisions Microsoft has put in place ostensibly to prevent malicious versions of Windows from being installed, on any device running Windows 8.1 and upwards with Secure Boot enabled.

And while this means that enterprising users will be able to install any operating system—Linux, for instance—on their Windows tablet, it also allows bad actors with physical access to a machine to install bootkits and rootkits at deep levels. Worse, according to the security researchers who found the keys, this is a decision Microsoft may be unable to reverse.” As the researchers note, this is a perfect example of why backdoors (legally mandated or not) in cryptographic systems are a bad idea.

Update: For some more detail, see Matthew Garrett’s blog post.

Hacking Your Computer Monitor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/hacking_your_co.html

Here’s an interesting hack against a computer’s monitor:

A group of researchers has found a way to hack directly into the tiny computer that controls your monitor without getting into your actual computer, and both see the pixels displayed on the monitor — effectively spying on you — and also manipulate the pixels to display different images.

I’ve written a lot about the Internet of Things, and how everything is now a computer. But while it’s true for cars and refrigerators and thermostats, it’s also true for all the parts of your computer. Your keyboard, hard drives, and monitor are all individual computers, and what you think of as your computer is actually a collection of computers working together. So just as the NSA directly attacks the computer that is the hard drive, this attack targets the computer that is your monitor.

Road Warriors: Beware of ‘Video Jacking’

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/08/road-warriors-beware-of-video-jacking/

A little-known feature of many modern smartphones is their ability to duplicate video on the device’s screen so that it also shows up on a much larger display — like a TV. However, new research shows that this feature may quietly expose users to a simple and cheap new form of digital eavesdropping.

Dubbed “video jacking” by its masterminds, the attack uses custom electronics hidden inside what appears to be a USB charging station. As soon as you connect a vulnerable phone to the appropriate USB charging cord, the spy machine splits the phone’s video display and records a video of everything you tap, type or view on it as long as it’s plugged in — including PINs, passwords, account numbers, emails, texts, pictures and videos.

Some of the equipment used in the “video jacking” demonstration at the DEF CON security conference last week in Las Vegas. Source: Brian Markus.

[Click here if you’re the TL;DR type and just want to know if your phone is at risk from this attack.]

Demonstrations of this simple but effective mobile spying technique were on full display at the DEF CON security conference in Las Vegas last week. I was busy at DEF CON this year chasing a story unrelated to the conference, so I missed many people and talks that I wanted to see. But I’m glad I caught up with the team behind DEF CON’s annual and infamous “Wall of Sheep,” a public shaming exercise aimed at educating people about the dangers of sending email and other plain text online communications over open wireless networks.

Brian Markus, co-founder and chief executive officer for Aries Security, said he and fellow researchers Joseph Mlodzianowski and Robert Rowley came up with the idea for video jacking when they were brainstorming about ways to expand on their “juice jacking” experiments at DEF CON in 2011.

“Juice jacking” refers to the ability to hijack stored data when the user unwittingly plugs his phone into a custom USB charging station filled with computers that are ready to suck down and record said data (both Android and iOS phones now ask users whether they trust the computer before allowing data transfers).

In contrast, video jacking lets the attacker record every key and finger stroke the user makes on the phone, so that the owner of the evil charging station can later replay the videos and see any numbers or keys pressed on the smart phone.

That’s because those numbers or keys will be raised briefly on the victim’s screen with each key press. Here’s an example: While the user may have enabled a special PIN that needs to be entered before the phone unlocks to the home screen, this method captures even that PIN as long as the device is vulnerable and plugged in before the phone is unlocked.

GREAT. IS MY PHONE VULNERABLE?

Most of the phones vulnerable to video jacking are Android or other HDMI-ready smartphones from Asus, Blackberry, HTC, LG, Samsung, and ZTE. This page of HDMI enabled smartphones at phonerated.com should not be considered all-inclusive. Here’s another list. When in doubt, search online for your phone’s make and model to find out if it is HDMI or MHL ready.

Video jacking is a problem for users of HDMI-ready phones mainly because it’s very difficult to tell a USB cord that merely charges the phone from one that also taps the phone’s video-out capability. Also, there’s generally no warning on the phone to alert the user that the device’s video is being piped to another source, Markus said.

“All of those phones have an HDMI access feature that is turned on by default,” he said. “A few HDMI-ready phones will briefly flash something like ‘HDMI Connected’ whenever they’re plugged into a power connection that is also drawing on the HDMI feature, but most will display no warning at all. This worked on all the phones we tested with no prompting.”

Both Markus and Rowley said they did not test the attack against Apple iPhones prior to DEF CON, but today Markus said he tested it at an Apple store and the video of the iPhone 6’s home screen popped up on the display in the store without any prompt. Getting it to work on the display required a special Lightning Digital AV Adapter from Apple, which could easily be hidden inside an evil charging station and fed an extension adapter and then a regular Lightning cable in front of that.

WHAT’S A FAKE CHARGING STATION?

Markus had to explain to curious DEF CON attendees who wandered near the Wall of Sheep this year exactly what would happen if they plugged their phone into his phony charging station. As you can imagine, not a ton of people volunteered but there were enough to prove a point, Markus said.

The demonstration unit that Markus and his team showed at DEF CON (pictured above) was fairly crude. Behind a $40 monitor purchased at a local Vegas pawn shop is a simple device that takes HDMI output from a video splitter. That splitter is connected to two micro USB to HDMI cables that are cheaply available in electronics stores.

Those two cords were connected to standard USB charging cables for mobiles — including the universal micro USB to HDMI adapter (a.k.a. Mobile High Definition Link or MHL connector), and a slimport HDMI adapter. Both look very similar to standard USB charging cables. The raw video files are recorded by a simple inline recording device to a small USB storage device taped to the back of the monitor.

Markus said the entire rig (minus the TV monitor) cost about $220, and that the parts could be bought at hundreds of places online.

Although it may be difficult to tell the difference at this angle, the Mobile High Definition Link (MHL) USB connector on the left has a set of six extra pins that enable it to read HDMI video and whatever is being viewed on the user’s screen. Both cords will charge the same phone.

SHOULD YOU CARE?

My take on video jacking? It’s an interesting and very real threat — particularly if you own an HDMI-ready phone and are in the habit of connecting it to any old USB port. Do I consider it likely that any of us will have to worry about this in real life? The answer may have a lot to do with what line of work you’re in and how paranoid you are, but it doesn’t strike me as very likely that most mere mortals would have reason to worry about video jacking.

On the other hand, it would be a fairly cheap and reasonably effective (if random) way to gather secrets from a group of otherwise unsuspecting people in a specific location, such as a hotel, airport, pub, or even a workplace.

An evil mobile charging station would be far more powerful when paired with a camera (hidden or not) trained on the charger. Imagine how much data one could hoover up with a fake charging station used to gather intellectual property or trade secrets from, say….attendees of a niche trade show or convention.

Now that I think about it, since access to electric power is not a constraint with these fake charging stations, there’s no reason it couldn’t just beam all of its video wirelessly. That way, the people who planted the spying equipment could retrieve or record the victim videos in real time and never have to return to the scene of the crime to collect any of it. Okay, I’ll stop now.

What can vulnerable users do to protect themselves from video jacking?

Hopefully, your phone came with a 2-prong charging cord that plugs straight into a standard wall jack. If not, look into using a USB phone charger adapter that has a regular AC/DC power plug on one end and a female USB port on the other (just make sure you don’t buy this keystroke logger disguised as a USB phone charger). Carry an extra charging dock for your mobile device when you travel.

Also, check the settings of your mobile and see if it allows you to disable screen mirroring. Note that even if you do this, the mirroring capability might not actually turn off.

What should mobile device makers do to minimize the threat from video jacking? 

“The problem here is that device manufacturers continue to add features and not give us prompting,” Markus said. “With this feature, it automatically connects no matter what. HDMI-out should be off by default, and if turned on it should require prompting the user.”

Update: 4:52 p.m. ET: Updated paragraph about Apple iPhones to clarify that this same attack works against the latest iPhone 6.

Bug Bounties Reaching $500,000 For iOS Exploits

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/Oi2kUyxvVik/

It seems this year bug bounties are getting really serious, especially on the secondary market involving exploit trading firms, not direct to the software producer or owner. $500,000 isn’t chump change and would be a good year for a small security team, especially living somewhere with a weaker currency. Even for a solo security researcher…

Read the full post at darknet.org.uk

Scott Atran on Why People Become Terrorists

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/scott_atran_on_.html

Scott Atran has done some really interesting research on why ordinary people become terrorists.

Academics who study warfare and terrorism typically don’t conduct research just kilometers from the front lines of battle. But taking the laboratory to the fight is crucial for figuring out what impels people to make the ultimate sacrifice to, for example, impose Islamic law on others, says Atran, who is affiliated with the National Center for Scientific Research in Paris.

Atran’s war zone research over the last few years, and interviews during the last decade with members of various groups engaged in militant jihad (or holy war in the name of Islamic law), give him a gritty perspective on this issue. He rejects popular assumptions that people frequently join up, fight and die for terrorist groups due to mental problems, poverty, brainwashing or savvy recruitment efforts by jihadist organizations.

Instead, he argues, young people adrift in a globalized world find their own way to ISIS, looking to don a social identity that gives their lives significance. Groups of dissatisfied young adult friends around the world, often with little knowledge of Islam but yearning for lives of profound meaning and glory, typically choose to become volunteers in the Islamic State army in Syria and Iraq, Atran contends. Many of these individuals connect via the internet and social media to form a global community of alienated youth seeking heroic sacrifice, he proposes.

Preliminary experimental evidence suggests that not only global terrorism, but also festering state and ethnic conflicts, revolutions and even human rights movements — think of the U.S. civil rights movement in the 1960s — depend on what Atran refers to as devoted actors. These individuals, he argues, will sacrifice themselves, their families and anyone or anything else when a volatile mix of conditions is in play. First, devoted actors adopt values they regard as sacred and nonnegotiable, to be defended at all costs. Then, when they join a like-minded group of nonkin that feels like a family, a band of brothers, a collective sense of invincibility and special destiny overwhelms feelings of individuality. As members of a tightly bound group that perceives its sacred values under attack, devoted actors will kill and die for each other.

Paper.

EDITED TO ADD (8/13): Related paper, also by Atran.

Study Highlights Serious Security Threat to Many Internet Users (UCR Today)

Post Syndicated from ris original http://lwn.net/Articles/696847/rss

UCR Today reports that researchers at the University of California, Riverside have identified a weakness in the Transmission Control Protocol (TCP) in Linux that enables attackers to hijack users’ internet communications remotely. “The UCR researchers didn’t rely on chance, though. Instead, they identified a subtle flaw (in the form of ‘side channels’) in the Linux software that enables attackers to infer the TCP sequence numbers associated with a particular connection with no more information than the IP address of the communicating parties. This means that given any two arbitrary machines on the internet, a remote blind attacker, without being able to eavesdrop on the communication, can track users’ online activity, terminate connections with others and inject false material into their communications.”
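The side channel in question, later tracked as CVE-2016-5696, abused the kernel's global rate limit on TCP challenge ACKs, and the interim mitigation widely suggested at the time was to raise that limit until patched kernels shipped. The sketch below checks a host's current value; the threshold is an illustrative cutoff, not an official one:

```python
# Sketch: check whether a Linux host still has the low default challenge
# ACK rate limit (100/second) that makes this side channel practical.
# The threshold used here is an illustrative cutoff, not an official one.

def challenge_ack_limit_looks_low(value, threshold=1000):
    """True if a net.ipv4.tcp_challenge_ack_limit value looks exploitable."""
    return int(value) < threshold

def read_limit(path="/proc/sys/net/ipv4/tcp_challenge_ack_limit"):
    """Read the current sysctl value, or return None if unavailable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None  # not Linux, or the sysctl does not exist on this kernel

# The interim mitigation suggested at the time was raising the limit, e.g.:
#   sysctl -w net.ipv4.tcp_challenge_ack_limit=999999999
print(challenge_ack_limit_looks_low("100"))  # the old default → True
```

Raising the limit only blunts the rate-limit side channel; the durable fix was the kernel patch that randomized the challenge ACK accounting.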