Tag Archives: privilege escalation

How to use trust policies with IAM roles

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/

AWS Identity and Access Management (IAM) roles are a significant component in the way customers operate in Amazon Web Services (AWS). In this post, I’ll dive into the details of how cloud security architects and account administrators can protect IAM roles from misuse by using trust policies. By the end of this post, you’ll know how to build trust policies that work at scale, providing guardrails to control access to resources in your organization.

In general, there are four different scenarios where you might use IAM roles in AWS:

  • One AWS service accesses another AWS service – When an AWS service needs access to other AWS services or functions, you can create a role that will grant that access.
  • One AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. This allows human or machine IAM principals from other AWS accounts to assume this role and act on resources in this account.
  • A third-party web identity needs access – This use case allows users with identities in third-party systems like Google and Facebook, or Amazon Cognito, to use a role to access resources in the account.
  • Authentication using SAML2.0 federation – This is commonly used by enterprises with Active Directory that want to connect using an IAM role so that their users can use single sign-on workflows to access AWS accounts.

In all cases, the makeup of an IAM role is the same as that of an IAM user and is only differentiated by the following qualities:

  • An IAM role does not have long-term credentials associated with it; rather, a principal (an IAM user, machine, or other authenticated identity) assumes the IAM role and inherits the permissions assigned to that role.
  • The tokens issued when a principal assumes an IAM role are temporary. Their expiration reduces the risks associated with credentials leaking and being reused.
  • An IAM role has a trust policy that defines which conditions must be met to allow other principals to assume it. This trust policy reduces the risks associated with privilege escalation.

Recommendation: You should make extensive use of IAM roles and their temporary credentials rather than long-lived credentials such as IAM users. For more information, see: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

While the list of users having access to your AWS accounts can change over time, the roles used to manage your AWS account probably won’t. The use of IAM roles essentially decouples your enterprise identity system (SAML 2.0) from your permission system (AWS IAM policies), simplifying management of each.

Managing access to IAM roles

Let’s dive into how you can create relationships between your enterprise identity system and your permissions system by looking at the policy types you can apply to an IAM role.

An IAM role has three places where it uses policies:

  • Permission policies (inline and attached) – These policies define the permissions that a principal assuming the role is able (or restricted) to perform, and on which resources.
  • Permissions boundary – A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based permission policies and its permissions boundaries.
  • Trust relationship – This policy defines which principals can assume the role, and under which conditions. This is sometimes referred to as a resource-based policy for the IAM role. We’ll refer to this policy simply as the ‘trust policy’.

A role can be assumed by a human user or a machine principal, such as an Amazon Elastic Compute Cloud (Amazon EC2) instance or an AWS Lambda function. Over the rest of this post, you’ll see how you can tighten the conditions under which principals are able to use roles by configuring their trust policies.

An example of a simple trust policy

A common use case is when you need to provide security audit access to your account, allowing a third party to review the configuration of that account. After attaching the relevant permission policies to an IAM role, you need to add a cross-account trust policy to allow the third-party auditor to make the sts:AssumeRole API call to elevate their access in the audited account. The following trust policy shows an example policy created through the AWS Management Console:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

As you can see, it has the same structure as other IAM policies with Effect, Action, and Condition components. It also has the Principal parameter, but no Resource attribute. This is because the resource, in the context of the trust policy, is the IAM role itself. For the same reason, the Action parameter will only ever be set to one of the following values: sts:AssumeRole, sts:AssumeRoleWithSAML, or sts:AssumeRoleWithWebIdentity.

Note: The suffix root in the policy’s Principal attribute equates to “authenticated and authorized principals in the account,” not the special and all-powerful root user principal that is created when an AWS account is created.

Using the Principal attribute to reduce scope

In a trust policy, the Principal attribute indicates which other principals can assume the IAM role. In the example above, 111122223333 represents the AWS account number for the auditor’s AWS account. In effect, this allows any principal in the 111122223333 AWS account with sts:AssumeRole permissions to assume this role.

To restrict access to a specific IAM user account, you can define the trust policy like the following example, which would allow only the IAM user LiJuan in the 111122223333 account to assume this role. LiJuan would also need to have sts:AssumeRole permissions attached to their IAM user for this to work:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/LiJuan"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

The principals that you set in the Principal attribute can be any principal defined in the IAM documentation, and can refer to an AWS or a federated principal. You cannot use a wildcard (“*” or “?”) within a Principal for a trust policy, other than one special case, which I’ll come back to in a moment. You must define precisely which principal you are referring to, because when you submit your trust policy, IAM translates each principal reference into that principal’s unique, hidden principal ID, and it can’t perform that translation if the principal contains wildcards.

The only scenario where you can use a wildcard in the Principal parameter is when the parameter value is the “*” wildcard alone. Using the global wildcard “*” for the Principal isn’t recommended unless you have clearly defined Condition attributes in the policy statement to restrict use of the IAM role; without them, any principal in any AWS account can assume the role.
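As a quick illustration of the risk, here’s a minimal Python sketch (the helper name is my own, and this is far simpler than real analysis tools such as IAM Access Analyzer) that flags Allow statements combining a wildcard principal with an empty Condition:

```python
import json

def find_risky_statements(trust_policy):
    """Flag Allow statements that pair a wildcard Principal with no Condition.

    A simplified sketch only; real analysis tools evaluate far more context.
    """
    risky = []
    for stmt in trust_policy.get("Statement", []):
        principal = stmt.get("Principal", {})
        # The Principal may be the bare string "*" or a map like {"AWS": "*"}
        is_wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_wildcard and not stmt.get("Condition"):
            risky.append(stmt)
    return risky

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "sts:AssumeRole"
    }
  ]
}
""")
print(len(find_risky_statements(policy)))  # the wildcard statement is flagged
```

A statement that keeps the “*” principal but adds a Condition (such as aws:PrincipalOrgID, covered later) would not be flagged by this check.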

Using identity federation on AWS

Federated users from SAML 2.0 compliant enterprise identity services are given permissions to access AWS accounts through the use of IAM roles. While the user-to-role configuration of this connection is established within the SAML 2.0 identity provider, you should also put controls in the trust policy in IAM to reduce any abuse.

Because the Principal attribute contains configuration information about the SAML mapping (in the case of Active Directory), you need to use the Condition attribute in the trust policy to restrict use of the role from the AWS account management perspective. You can do this by restricting the SourceIp address, as demonstrated later, or by using one or more of the SAML-specific Condition keys available. My recommendation is to be as specific as practical in reducing the set of principals that can use the role. This is best achieved by adding qualifiers to the Condition attribute of your trust policy.

There’s a very good guide on creating roles for SAML 2.0 federation that contains a basic example trust policy you can use.

Using the Condition attribute in a trust policy to reduce scope

The Condition statement in your trust policy sets additional requirements for the Principal trying to assume the role. If you don’t set a Condition attribute, the IAM engine will rely solely on the Principal attribute of this policy to authorize role assumption. Given that it isn’t possible to use wildcards within the Principal attribute, the Condition attribute is a really flexible way to reduce the set of users that are able to assume the role without necessarily specifying the principals.

Limiting role use based on an identifier

Teams managing multiple roles can occasionally become confused about which role achieves what, and can inadvertently assume the wrong role. This is one form of the confused deputy problem. This next section shows you a way to quickly reduce this risk.

The following trust policy requires that principals from the 111122223333 AWS account have provided a special phrase when making their request to assume the role. Adding this condition reduces the risk that someone from the 111122223333 account will assume this role by mistake. This phrase is configured by specifying an ExternalID conditional context key.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "ExampleSpecialPhrase"
        }
      }
    }
  ]
}

In the example trust policy above, the value ExampleSpecialPhrase isn’t a secret or a password. Adding the ExternalId condition prevents the role from being assumed through the console, because the only way to supply the ExternalId argument in the role assumption API call is through the AWS Command Line Interface (AWS CLI) or a programming interface. Having this condition doesn’t prevent a user who knows about the relationship and the ExternalId from assuming what might be a privileged set of permissions, but it does help manage risks like the confused deputy problem. I see customers use an ExternalId that matches the name of the AWS account, which helps ensure that an operator is working on the account they believe they’re working on.
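To see how the ExternalId is supplied programmatically, here’s a small Python sketch. The helper function and session name are my own illustration; RoleArn, RoleSessionName, and ExternalId are the real AssumeRole parameters:

```python
def build_assume_role_request(account_id, role_name, external_id):
    """Assemble the keyword arguments for an STS AssumeRole call that
    supplies the ExternalId required by the role's trust policy."""
    return {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": "audit-session",  # illustrative session name
        "ExternalId": external_id,
    }

params = build_assume_role_request(
    "111122223333", "CrossAccountAuditor", "ExampleSpecialPhrase"
)

# With AWS credentials configured, the actual call would look like:
#   import boto3
#   creds = boto3.client("sts").assume_role(**params)["Credentials"]
print(params["RoleArn"])
```

The equivalent AWS CLI call passes the same value with the --external-id option on aws sts assume-role.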

Limiting role use based on multi-factor authentication

By using the Condition attribute, you can also require that the principal assuming this role has passed a multi-factor authentication (MFA) check before they’re permitted to use this role. This again limits the risk associated with mistaken use of the role and adds some assurances about the principal’s identity.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}

In the example trust policy above, I also introduced the MultiFactorAuthPresent conditional context key. Per the AWS global condition context keys documentation, the MultiFactorAuthPresent conditional context key does not apply to sts:AssumeRole requests in the following contexts:

  • When using access keys in the CLI or with the API
  • When using temporary credentials without MFA
  • When a user signs in to the AWS Console
  • When services (like AWS CloudFormation or Amazon Athena) reuse session credentials to call other APIs
  • When authentication has taken place via federation

In the example above, the use of the BoolIfExists qualifier to the MultiFactorAuthPresent conditional context key evaluates the condition as true if:

  • The principal type can have an MFA attached, and does.
    or
  • The principal type cannot have an MFA attached.

This is a subtle difference but makes the use of this conditional key in trust policies much more flexible across all principal types.
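The two-branch logic above can be sketched in a few lines of Python. This is a simplified model of the real IAM evaluation engine, assuming condition values arrive as the strings IAM uses:

```python
def bool_if_exists(request_context, key, expected="true"):
    """Model of the BoolIfExists qualifier for a single condition key:
    pass when the key is absent from the request context; otherwise
    require its value to match."""
    if key not in request_context:
        return True
    return request_context[key] == expected

MFA_KEY = "aws:MultiFactorAuthPresent"

# Principal completed MFA: key present and "true" -> condition passes
assert bool_if_exists({MFA_KEY: "true"}, MFA_KEY)
# Principal type that can't carry the key (for example, long-term
# access keys): key absent -> condition still passes
assert bool_if_exists({}, MFA_KEY)
# Principal could have presented MFA but didn't: key present and
# "false" -> condition fails
assert not bool_if_exists({MFA_KEY: "false"}, MFA_KEY)
```

The plain Bool operator would fail the middle case, which is why BoolIfExists is the more flexible choice across principal types.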

Limiting role use based on time

During activities like security audits, it’s quite common for the activity to be time-bound and temporary. There’s a risk that the IAM role could be assumed even after the audit activity concludes, which might be undesirable. You can manage this risk by adding a time condition to the Condition attribute of the trust policy. Rather than having to remember to disable the IAM role immediately after the activity, you can build the date restriction into the trust policy itself, like so:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-01T12:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}
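A quick way to reason about the two date conditions is that together they bound a window in UTC. This Python sketch (the helper name is mine) mirrors that check:

```python
from datetime import datetime, timezone

AUDIT_START = datetime(2020, 9, 1, 12, 0, tzinfo=timezone.utc)
AUDIT_END = datetime(2020, 9, 7, 12, 0, tzinfo=timezone.utc)

def within_audit_window(now):
    """Mirror the DateGreaterThan/DateLessThan pair: role assumption
    is allowed only while AUDIT_START < now < AUDIT_END."""
    return AUDIT_START < now < AUDIT_END

print(within_audit_window(datetime(2020, 9, 3, tzinfo=timezone.utc)))   # True
print(within_audit_window(datetime(2020, 9, 10, tzinfo=timezone.utc)))  # False
```

Note that both comparisons are strict, matching the greater-than/less-than semantics of the condition operators.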

Limiting role use based on IP addresses or CIDR ranges

If the auditor for a security audit is using a known fixed IP address, you can build that information into the trust policy, further reducing the opportunity for the role to be assumed by unauthorized actors calling the AssumeRole API from another IP address or CIDR range:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}
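You can sanity-check what a CIDR condition like this admits with Python’s standard-library ipaddress module; this is a simplified model of the IpAddress/aws:SourceIp match, not the exact IAM matcher:

```python
import ipaddress

ALLOWED_RANGE = ipaddress.ip_network("203.0.113.0/24")

def source_ip_allowed(source_ip):
    """Model the IpAddress condition on aws:SourceIp: the caller's
    address must fall inside the allowed CIDR range."""
    return ipaddress.ip_address(source_ip) in ALLOWED_RANGE

print(source_ip_allowed("203.0.113.17"))  # True
print(source_ip_allowed("198.51.100.4"))  # False
```

Keep in mind that aws:SourceIp reflects the caller’s public IP as seen by AWS, so requests routed through VPC endpoints or NAT devices may not carry the address you expect.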

Limiting role use based on tags

IAM tagging capabilities can also help you build flexible and adaptive trust policies that create an attribute-based access control (ABAC) model for IAM management. You can build trust policies that permit only principals that have already been tagged with a specific key and value to assume a specific role. The following example requires that IAM principals in the AWS account 111122223333 be tagged with department = OperationsTeam in order to assume the IAM role.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/department": "OperationsTeam"
        }
      }
    }
  ]
}

If you want to create this effect, I highly recommend the PrincipalTag pattern above, but you must also be cautious about which principals are given the iam:TagUser, iam:TagRole, iam:UntagUser, and iam:UntagRole permissions. Consider using the aws:PrincipalTag condition within a permissions boundary policy to restrict a principal’s ability to retag their own IAM principal or that of another IAM role they can assume.

Limiting or extending access to a role based on AWS Organizations

Since its announcement in 2016, almost every enterprise customer I work with uses AWS Organizations. This AWS service allows customers to create an organizational structure for their accounts by creating hard boundaries to manage blast-radius risks, among other advantages. You can use the PrincipalOrgID condition to limit assumption of an organization-wide core IAM role.

Caution: As you’ll see in the example below, you need to set the Principal attribute to “*” to do this, which would, without the conditional restriction, allow all role assumption requests to be accepted for this role, irrespective of the source of that assumption request. For that reason, be especially careful about the use of this pattern.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abcd12efg1"
        }
      }
    }
  ]
}

It isn’t practical to write out all the AWS account identifiers into a trust policy, and because of the way policies like this are evaluated, you can’t include wildcard characters for the account number in the principal’s account number field. The use of the PrincipalOrgID global condition context key provides us with a neat and dynamic mechanism to create a short policy statement.

Role chaining

There are instances where a third party might themselves be using IAM roles, or where an AWS service resource that has already assumed a role needs to assume another role (perhaps in another account), and you might need to allow only specific IAM roles in that remote account to assume the IAM role you create in your account. You can use role chaining to build permitted role escalation routes using role assumption from within the same account or AWS organization, or from third-party AWS accounts.

Consider the following trust policy example where I use a combination of the Principal attribute to scope down to an AWS account, and the aws:UserId global conditional context key to scope down to a specific role using its RoleId. To capture the RoleId for the role you want to be able to assume, you can run the following command using the AWS CLI:


# aws iam get-role --role-name CrossAccountAuditor
{
    "Role": {
        "Path": "/",
        "RoleName": "CrossAccountAuditor",
        "RoleId": "ARO1234567123456D",
        "Arn": "arn:aws:iam::111122223333:role/CrossAccountAuditor",
        "CreateDate": "2017-08-31T14:24:20+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::111122223333:root"
                    },
                    "Action": "sts:AssumeRole",
                    "Condition": {
                        "StringEquals": {
                            "sts:ExternalId": "ExampleSpecialPhrase"
                        }
                    }
                }
            ]
        }
    }
}

Here is an example trust policy that limits role assumption to the CrossAccountAuditor role from AWS account 111122223333.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:userId": "ARO1234567123456D:*"
        }
      }
    }
  ]
}

If you’re using an IAM user and have assumed the CrossAccountAuditor IAM role, the policy above will work through the AWS CLI with a call to aws sts assume-role and through the console.

This type of trust policy also works for services like Amazon EC2, allowing those instances using their assigned instance profile role to assume a role in another account to perform actions. We’ll touch on this use case later in the post.

Putting it all together

AWS customers can use combinations of all the above Principal and Condition attributes to hone the trust they’re extending out to any third party, or even within their own organization. They might create an accumulated trust policy for an IAM role which achieves the following effect:

Allow only a user named PauloSantos, in AWS account number 111122223333, to assume the role, and only if they have also authenticated with an MFA, are logging in from an IP address in the 203.0.113.0/24 CIDR range, and the date is between noon of September 1, 2020, and noon of September 7, 2020.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/PauloSantos"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        },
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        },
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-01T12:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

I’ve seen customers use this to create IAM users who have no permissions attached besides sts:AssumeRole. Trust relationships are then configured between the IAM users and the IAM roles, creating ultimate flexibility in defining who has access to what roles without needing to update the IAM user identity pool at all.

A word on Effect: Deny and NotPrincipal in IAM role trust policies

I have seen some customers make use of an “Effect”: “Deny” clause in their trust policies. This pattern can help manage a wildcard statement in another “Effect”: “Allow” clause of the same trust policy. However, this isn’t the best approach for most scenarios. You will typically be able to define each principal in your policy as being allowed access. An example of where this might not be true is where you have a clause that uses the global wildcard “*” as a principal, in which case it will be necessary to add Deny statements to further filter the access.

Putting a wildcard into the Principal attribute of an Allow policy statement, particularly in trust policies, can be dangerous if you haven’t done a robust job of managing the Condition attribute in the same statement. Be as specific as possible in your Allow statement, and use specific Principal attributes first, rather than relying on Deny statements to manage potential security gaps created by your use of wildcards.

The following trust policy allows all IAM principals within the o-abcd12efg1 organization to assume the IAM role, but only if it’s before September 7, 2020:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abcd12efg1"
        }
      }
    },
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

The use of NotPrincipal in trust policies

You can also build the NotPrincipal element into your trust policies. Again, this is rarely the best choice, because it can introduce unnecessary complexity and confusion into your policies. Instead, you can avoid that problem by using fairly simple and prescriptive Principal statements.

Statements that use NotPrincipal are often paired with a Deny effect, which can create quite baffling policy logic that, if misunderstood, could create unintended opportunities for misuse or abuse.

Here’s an example where you might think to use Deny and NotPrincipal in a trust policy. Notice that it has the same effect as allowing arn:aws:iam::111122223333:role/CoreAccess in a single Allow statement. In general, Deny with NotPrincipal statements in trust policies create unnecessary complexity and should be avoided.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::111122223333:role/CoreAccess"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Remember, your Principal attribute should be very specific, to reduce the set of those able to assume the role, and an IAM role trust policy won’t permit access if a corresponding Allow statement isn’t explicitly present in the trust policy. It’s better to rely on the default deny policy evaluation logic where you’re able, rather than introducing unnecessary complexity into your policy logic.
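That evaluation order, in which an explicit deny always wins, an explicit allow is required, and the default is implicit deny, can be modeled in a short Python sketch. The matching callback stands in for the Principal and Condition checks, which are far richer in real IAM:

```python
def evaluate_trust_policy(statements, matches):
    """Model IAM's decision logic: an explicit Deny always wins, an
    explicit Allow is required for access, and the default decision is
    implicit deny. `matches(stmt)` stands in for Principal/Condition
    matching against the request."""
    decision = "ImplicitDeny"
    for stmt in statements:
        if not matches(stmt):
            continue
        if stmt["Effect"] == "Deny":
            return "ExplicitDeny"  # deny overrides any allow
        decision = "Allow"
    return decision

allow = {"Effect": "Allow"}
deny = {"Effect": "Deny"}
print(evaluate_trust_policy([allow], lambda s: True))        # Allow
print(evaluate_trust_policy([allow, deny], lambda s: True))  # ExplicitDeny
print(evaluate_trust_policy([], lambda s: True))             # ImplicitDeny
```

The third case is the default deny the text describes: with no matching Allow statement at all, the request is refused without needing any Deny logic.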

Creating trust policies for AWS services that assume roles

There are two types of contexts where AWS services need access to IAM roles to function:

  1. Resources managed by an AWS service (like Amazon EC2 or Lambda, for example) need an IAM role, and the permissions it grants, to execute functions on other AWS resources.
  2. An AWS service that abstracts its functionality from other AWS services, like Amazon Elastic Container Service (Amazon ECS) or Amazon Lex, needs access to execute functions on AWS resources. These are called service-linked roles and are a special case that’s out of the scope of this post.

In both contexts, the service itself is an actor. The service assumes your IAM role so that it can provide your credentials to your Lambda function (the first context) or use those credentials to do things (the second context). In the same way that IAM roles give human operators an escalation mechanism for specific functions, AWS resources such as Lambda functions, Amazon EC2 instances, and even AWS CloudFormation require the same mechanism. You can find more information about how to create IAM roles for AWS services here.

An IAM role for a human operator and an IAM role for an AWS service are structured in exactly the same way, even though they have a different principal defined in the trust policy. The policy’s Principal defines the AWS service that is permitted to assume the role for its function.

Here’s an example trust policy for a role designed for an Amazon EC2 instance to assume. You can see that the principal provided is the ec2.amazonaws.com service:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Every configuration of an AWS resource should be passed a specific role unique to its function. So, if you have two Amazon EC2 launch configurations, you should design two separate IAM roles, even if the permissions they require are currently the same. This allows each configuration to grow or shrink the permissions it requires over time, without needing to reattach IAM roles to configurations, which might create a privilege escalation risk. Instead, you update the permissions attached to each IAM role independently, knowing that it will only be used by that one service resource. This helps reduce the potential impact of risks. Automating your management of roles will help here, too.

Several customers have asked if it’s possible to design a trust policy for an IAM role such that it can only be passed to a specific Amazon EC2 instance. This isn’t directly possible. You cannot place the Amazon Resource Name (ARN) for an EC2 instance into the Principal of a trust policy, nor can you use tag-based condition statements in the trust policy to limit the ability for the role to be used by a specific resource.

The only option is to manage access to the iam:PassRole action within the permission policy for those IAM principals you expect to be attaching IAM roles to AWS resources. This special Action is evaluated when a principal tries to attach another IAM role to an AWS service or AWS resource.

Instead, restrict access to the iam:PassRole action through permission policies and permissions boundaries. This limits who can attach roles to instance profiles for Amazon EC2, rather than trying to achieve the same control through the trust policy on the role the EC2 instance assumes. This approach makes it much easier to manage scaling, both for the principals attaching roles to EC2 instances and for the instances themselves.

For example, you could use the following permission policy to prevent the associated principal from passing roles to Amazon EC2 instances unless the role name is prefixed with EC2-Webserver-:


{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:GetRole",
            "iam:PassRole"
        ],
        "Resource": "arn:aws:iam::111122223333:role/EC2-Webserver-*"
    }]
}
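IAM resource matching with “*” behaves much like a shell-style glob, so you can sanity-check which role ARNs a pattern like the one above admits with Python’s fnmatch. This is a close approximation for this pattern, not the exact IAM matcher:

```python
from fnmatch import fnmatchcase

PASSABLE = "arn:aws:iam::111122223333:role/EC2-Webserver-*"

def can_pass_role(role_arn):
    """Approximate the Resource match in the iam:PassRole statement:
    only roles under the EC2-Webserver- prefix are passable."""
    return fnmatchcase(role_arn, PASSABLE)

print(can_pass_role("arn:aws:iam::111122223333:role/EC2-Webserver-Prod"))  # True
print(can_pass_role("arn:aws:iam::111122223333:role/AdminAccess"))         # False
```

Checks like this are useful when reviewing many permission policies for overly broad PassRole grants before they reach production.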

Conclusion

You now have all the tools you need to build robust and effective trust policies that work at scale, providing guardrails for your users and those who might want to access resources in your account from outside your organization.

Policy logic isn’t always simple, and I encourage you to use sandbox accounts to try out your ideas. In general, simplicity should win over cleverness. IAM policies and statements that are frugal in their use of policy language can still be difficult for other IAM administrators to read, interpret, and update in the future. Keeping your trust policies simple helps build IAM relationships that everyone understands and can manage and use effectively.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Privilege escalation via eBPF in Linux 4.9 and beyond

Post Syndicated from jake original https://lwn.net/Articles/742170/rss

Jann Horn has reported eight bugs in the eBPF verifier, one for the 4.9 kernel and seven introduced in 4.14, to the oss-security mailing list. Some of these bugs allow eBPF programs to read and write arbitrary kernel memory, and thus can be used for a variety of ill effects, including privilege escalation. As Ben Hutchings notes, one mitigation would be to disable unprivileged access to BPF using the following sysctl: kernel.unprivileged_bpf_disabled=1. More information can also be found in this Project Zero bug entry. The fixes are not yet in the mainline tree, but are in the netdev tree. Hutchings goes on to say: “There is a public exploit that uses several of these bugs to get root privileges. It doesn’t work as-is on stretch [Debian 9] with the Linux 4.9 kernel, but is easy to adapt. I recommend applying the above mitigation as soon as possible to all systems running Linux 4.4 or later.”

Join Us at the 10th Annual Hadoop Summit / DataWorks Summit, San Jose (Jun 13-15)

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/160966148886

yahoohadoop:


We’re excited to co-host the 10th Annual Hadoop Summit, the leading conference for the Apache Hadoop community, taking place on June 13 – 15 at the San Jose Convention Center. In the last few years, the Hadoop Summit has expanded to cover all things data beyond just Apache Hadoop – such as data science, cloud and operations, IoT and applications – and has been aptly renamed the DataWorks Summit. The three-day program is bursting at the seams! Here are just a few of the reasons why you cannot miss this must-attend event:

  • Familiarize yourself with the cutting edge in Apache project developments from the committers
  • Learn from your peers and industry experts about innovative and real-world use cases, development and administration tips and tricks, success stories and best practices to leverage all your data – on-premise and in the cloud – to drive predictive analytics, distributed deep-learning and artificial intelligence initiatives
  • Attend one of our more than 170 technical deep dive breakout sessions from nearly 200 speakers across eight tracks
  • Check out our keynotes, meetups, trainings, technical crash courses, birds-of-a-feather sessions, Women in Big Data and more
  • Attend the community showcase where you can network with sponsors and industry experts, including a host of startups and large companies like Microsoft, IBM, Oracle, HP, Dell EMC and Teradata

Similar to previous years, we look forward to continuing Yahoo’s decade-long tradition of thought leadership at this year’s summit. Join us for an in-depth look at Yahoo’s Hadoop culture and for the latest in technologies such as Apache Tez, HBase, Hive, Data Highway Rainbow, Mail Data Warehouse and Distributed Deep Learning at the breakout sessions below. Or, stop by Yahoo kiosk #700 at the community showcase.

Also, as a co-host of the event, Yahoo is pleased to offer a 20% discount for the summit with the code MSPO20. Register here for Hadoop Summit, San Jose, California!


DAY 1. TUESDAY June 13, 2017


12:20 – 1:00 P.M. TensorFlowOnSpark – Scalable TensorFlow Learning On Spark Clusters

Andy Feng – VP Architecture, Big Data and Machine Learning

Lee Yang – Sr. Principal Engineer

In this talk, we will introduce a new framework, TensorFlowOnSpark, for scalable TensorFlow learning, that was open sourced in Q1 2017. This new framework enables easy experimentation for algorithm designs, and supports scalable training & inferencing on Spark clusters. It supports all TensorFlow functionalities including synchronous & asynchronous learning, model & data parallelism, and TensorBoard. It provides architectural flexibility for data ingestion to TensorFlow and network protocols for server-to-server communication. With a few lines of code changes, an existing TensorFlow algorithm can be transformed into a scalable application.

2:10 – 2:50 P.M. Handling Kernel Upgrades at Scale – The Dirty Cow Story

Samy Gawande – Sr. Operations Engineer

Savitha Ravikrishnan – Site Reliability Engineer

Apache Hadoop at Yahoo is a massive platform with 36 different clusters spread across YARN, Apache HBase, and Apache Storm deployments, totaling 60,000 servers made up of 100s of different hardware configurations accumulated over generations, presenting unique operational challenges and a variety of unforeseen corner cases. In this talk, we will share methods, tips and tricks to deal with large scale kernel upgrade on heterogeneous platforms within tight timeframes with 100% uptime and no service or data loss through the Dirty COW use case (privilege escalation vulnerability found in the Linux Kernel in late 2016).

5:00 – 5:40 P.M. Data Highway Rainbow – Petabyte Scale Event Collection, Transport, and Delivery at Yahoo

Nilam Sharma – Sr. Software Engineer

Huibing Yin – Sr. Software Engineer

This talk presents the architecture and features of Data Highway Rainbow, Yahoo’s hosted multi-tenant infrastructure which offers event collection, transport and aggregated delivery as a service. Data Highway supports collection from multiple data centers & aggregated delivery in primary Yahoo data centers which provide a big data computing cluster. From a delivery perspective, Data Highway supports endpoints/sinks such as HDFS, Storm and Kafka; with Storm & Kafka endpoints tailored towards latency sensitive consumers.


DAY 2. WEDNESDAY June 14, 2017


9:05 – 9:15 A.M. Yahoo General Session – Shaping Data Platform for Lasting Value

Sumeet Singh  – Sr. Director, Products

With a long history of open innovation with Hadoop, Yahoo continues to invest in and expand the platform capabilities by pushing the boundaries of what the platform can accomplish for the entire organization. In the last 11 years (yes, it is that old!), the Hadoop platform has shown no signs of giving up or giving in. In this talk, we explore what makes the shared multi-tenant Hadoop platform so special at Yahoo.

12:20 – 1:00 P.M. CaffeOnSpark Update – Recent Enhancements and Use Cases

Mridul Jain – Sr. Principal Engineer

Jun Shi – Principal Engineer

By combining salient features from deep learning framework Caffe and big-data frameworks Apache Spark and Apache Hadoop, CaffeOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. We released CaffeOnSpark as an open source project in early 2016, and shared its architecture design and basic usage at Hadoop Summit 2016. In this talk, we will update audiences about the recent development of CaffeOnSpark. We will highlight new features and capabilities: a unified data layer which supports multi-label datasets, distributed LSTM training, interleaved testing with training, a monitoring/profiling framework, and Docker deployment.

12:20 – 1:00 P.M. Tez Shuffle Handler – Shuffling at Scale with Apache Hadoop

Jon Eagles – Principal Engineer  

Kuhu Shukla – Software Engineer

In this talk we introduce a new Shuffle Handler for Tez, a YARN Auxiliary Service, that addresses the shortcomings and performance bottlenecks of the legacy MapReduce Shuffle Handler, the default shuffle service in Apache Tez. The Apache Tez Shuffle Handler adds composite fetch which has support for multi-partition fetch to mitigate performance slow down and provides deletion APIs to reduce disk usage for long running Tez sessions. As an emerging technology we will outline future roadmap for the Apache Tez Shuffle Handler and provide performance evaluation results from real world jobs at scale.

2:10 – 2:50 P.M. Achieving HBase Multi-Tenancy with RegionServer Groups and Favored Nodes

Thiruvel Thirumoolan – Principal Engineer

Francis Liu – Sr. Principal Engineer

At Yahoo, HBase has been running as a hosted multi-tenant service since 2013. In a single HBase cluster we have around 30 tenants running various types of workloads (i.e., batch, near real-time, ad-hoc, etc.). We will walk through the multi-tenancy features, explaining our motivation and how they work, as well as our experiences running these multi-tenant clusters. These features will be available in Apache HBase 2.0.

2:10 – 2:50 P.M. Data Driving Yahoo Mail Growth and Evolution with a 50 PB Hadoop Warehouse

Nick Huang – Director, Data Engineering, Yahoo Mail  

Saurabh Dixit – Sr. Principal Engineer, Yahoo Mail

Since 2014, the Yahoo Mail Data Engineering team took on the task of revamping the Mail data warehouse and analytics infrastructure in order to drive the continued growth and evolution of Yahoo Mail. Along the way we have built a 50 PB Hadoop warehouse, and surrounding analytics and machine learning programs that have transformed the way data plays in Yahoo Mail. In this session we will share our experience from this 3 year journey, from the system architecture, analytics systems built, to the learnings from development and drive for adoption.

DAY 3. THURSDAY June 15, 2017


2:10 – 2:50 P.M. OracleStore – A Highly Performant RawStore Implementation for Hive Metastore

Chris Drome – Sr. Principal Engineer  

Jin Sun – Principal Engineer

Today, Yahoo uses Hive in many different spaces, from ETL pipelines to ad-hoc user queries. Increasingly, we are investigating the practicality of applying Hive to real-time queries, such as those generated by interactive BI reporting systems. In order for Hive to succeed in this space, it must be performant in all aspects of query execution, from query compilation to job execution. One such component is the interaction with the underlying database at the core of the Metastore. As an alternative to ObjectStore, we created OracleStore as a proof-of-concept. Freed of the restrictions imposed by DataNucleus, we were able to design a more performant database schema that better met our needs. Then, we implemented OracleStore with specific goals built-in from the start, such as ensuring the deduplication of data. In this talk we will discuss the details behind OracleStore and the gains that were realized with this alternative implementation. These include a reduction of 97%+ in the storage footprint of multiple tables, as well as query performance that is 13x faster than ObjectStore with DirectSQL and 46x faster than ObjectStore without DirectSQL.

3:00 P.M. – 3:40 P.M. Bullet – A Real Time Data Query Engine

Akshai Sarma – Sr. Software Engineer

Michael Natkovich – Director, Engineering

Bullet is an open sourced, lightweight, pluggable querying system for streaming data without a persistence layer implemented on top of Storm. It allows you to filter, project, and aggregate on data in transit. It includes a UI and WS. Instead of running queries on a finite set of data that arrived and was persisted or running a static query defined at the startup of the stream, our queries can be executed against an arbitrary set of data arriving after the query is submitted. In other words, it is a look-forward system. Bullet is a multi-tenant system that scales independently of the data consumed and the number of simultaneous queries. Bullet is pluggable into any streaming data source. It can be configured to read from systems such as Storm, Kafka, Spark, Flume, etc. Bullet leverages Sketches to perform its aggregate operations such as distinct, count distinct, sum, count, min, max, and average.

3:00 P.M. – 3:40 P.M. Yahoo – Moving Beyond Running 100% of Apache Pig Jobs on Apache Tez

Rohini Palaniswamy – Sr. Principal Engineer

Last year at Yahoo, we spent great effort in scaling, stabilizing and making Pig on Tez production ready and by the end of the year retired running Pig jobs on Mapreduce. This talk will detail the performance and resource utilization improvements Yahoo achieved after migrating all Pig jobs to run on Tez. After successful migration and the improved performance we shifted our focus to addressing some of the bottlenecks we identified and new optimization ideas that we came up with to make it go even faster. We will go over the new features and work done in Tez to make that happen like custom YARN ShuffleHandler, reworking DAG scheduling order, serialization changes, etc. We will also cover exciting new features that were added to Pig for performance such as bloom join and byte code generation.

4:10 P.M. – 4:50 P.M. Leveraging Docker for Hadoop Build Automation and Big Data Stack Provisioning

Evans Ye,  Software Engineer

Apache Bigtop, as an open source Hadoop distribution, focuses on developing packaging, testing and deployment solutions that help infrastructure engineers build their own customized big data platform as easily as possible. However, packages deployed in production require a solid CI testing framework to ensure their quality, and the many Hadoop components must be ensured to work perfectly together as well. In this presentation, we'll talk about how Bigtop delivers its containerized CI framework, which can be directly replicated by Bigtop users. The core innovations here are the newly developed Docker Provisioner, which leverages Docker for Hadoop deployment, and the Docker Sandbox, which lets developers quickly start a big data stack. The content of this talk includes the containerized CI framework, technical details of the Docker Provisioner and Docker Sandbox, the hierarchy of Docker images we designed, and several components we developed, such as the Bigtop Toolchain, to achieve build automation.

Register here for Hadoop Summit, San Jose, California with a 20% discount code MSPO20

Questions? Feel free to reach out to us at [email protected]. Hope to see you there!

WikiLeaks Releases CIA Hacking Tools

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/wikileaks_relea.html

WikiLeaks just released a cache of 8,761 classified CIA documents from 2012 to 2016, including details of its offensive Internet operations.

I have not read through any of them yet. If you see something interesting, tell us in the comments.

EDITED TO ADD: There’s a lot in here. Many of the hacking tools are redacted, with the tar files and zip archives replaced with messages like:

::: THIS ARCHIVE FILE IS STILL BEING EXAMINED BY WIKILEAKS. :::

::: IT MAY BE RELEASED IN THE NEAR FUTURE. WHAT FOLLOWS IS :::
::: AN AUTOMATICALLY GENERATED LIST OF ITS CONTENTS: :::

Hopefully we’ll get them eventually. The documents say that the CIA — and other intelligence services — can bypass Signal, WhatsApp and Telegram. It seems to be by hacking the end-user devices and grabbing the traffic before and after encryption, not by breaking the encryption.

New York Times article.

EDITED TO ADD: Some details from The Guardian:

According to the documents:

  • CIA hackers targeted smartphones and computers.
  • The Center for Cyber Intelligence is based at the CIA headquarters in Virginia but it has a second covert base in the US consulate in Frankfurt which covers Europe, the Middle East and Africa.
  • A programme called Weeping Angel describes how to attack a Samsung F8000 TV set so that it appears to be off but can still be used for monitoring.

I just noticed this from the WikiLeaks page:

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

So it sounds like this cache of documents wasn’t taken from the CIA and given to WikiLeaks for publication, but has been passed around the community for a while — and incidentally some part of the cache was passed to WikiLeaks. So there are more documents out there, and others may release them in unredacted form.

Wired article. Slashdot thread. Two articles from the Washington Post.

EDITED TO ADD: This document talks about Comodo version 5.X and version 6.X. Version 6 was released in Feb 2013. Version 7 was released in Apr 2014. This gives us a time window of that page, and the cache in general. (WikiLeaks says that the documents cover 2013 to 2016.)

If these tools are a few years out of date, it’s similar to the NSA tools released by the “Shadow Brokers.” Most of us thought the Shadow Brokers were the Russians, specifically releasing older NSA tools that had diminished value as secrets. Could this be the Russians as well?

EDITED TO ADD: Nicholas Weaver comments.

EDITED TO ADD (3/8): These documents are interesting:

The CIA’s hand crafted hacking techniques pose a problem for the agency. Each technique it has created forms a “fingerprint” that can be used by forensic investigators to attribute multiple different attacks to the same entity.

This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible. As soon as one murder in the set is solved, the other murders also find likely attribution.

The CIA’s Remote Devices Branch‘s UMBRAGE group collects and maintains a substantial library of attack techniques ‘stolen’ from malware produced in other states including the Russian Federation.

With UMBRAGE and related projects the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the “fingerprints” of the groups that the attack techniques were stolen from.

UMBRAGE components cover keyloggers, password collection, webcam capture, data destruction, persistence, privilege escalation, stealth, anti-virus (PSP) avoidance and survey techniques.

This is being spun in the press as the CIA pretending to be Russia. I’m not convinced that the documents support these allegations. Can someone else look at the documents? I don’t like my conclusion that WikiLeaks is using this document dump as a way to push its own bias.

Ancient local privilege escalation vulnerability in the kernel announced

Post Syndicated from jake original https://lwn.net/Articles/715429/rss

Andrey Konovalov has announced the discovery and fix of a local privilege escalation in the Linux kernel. Using the syzkaller fuzzer (which LWN looked at around one year ago), he found a double-free in the Datagram Congestion Control Protocol (DCCP) implementation that goes back to at least September 2006 (2.6.18), but probably all the way back to the introduction of DCCP in October 2005 (2.6.14). “[At] this point we have a use-after-free on some_object. An attacker can
control what object that would be and overwrite its contents with
arbitrary data by using some of the kernel heap spraying techniques.
If the overwritten object has any triggerable function pointers, an
attacker gets to execute arbitrary code within the kernel.

I’ll publish an exploit in a few days, giving people time to update.”
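The announcement excerpt doesn't mention a mitigation, but a common stopgap for flaws in rarely used protocol modules such as DCCP is to keep the module from loading at all. A sketch, with a modprobe.d file name of my own choosing (not from the announcement):

```shell
# Stop modprobe from ever loading the dccp module: the "install" directive
# makes modprobe run /bin/true instead of inserting the module (root required).
echo 'install dccp /bin/true' > /etc/modprobe.d/disable-dccp.conf

# Remove the module if it is already loaded (this fails if DCCP is in use).
modprobe -r dccp || true
```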

Monday’s security advisories

Post Syndicated from jake original https://lwn.net/Articles/715034/rss

Debian-LTS has updated gst-plugins-bad0.10 (two vulnerabilities), gst-plugins-base0.10 (two vulnerabilities), gst-plugins-good0.10 (two vulnerabilities), gst-plugins-ugly0.10 (two vulnerabilities),
and wireshark (denial of service).

Fedora has updated bind (F24:
denial of service), python-peewee (F25; F24:
largely unspecified), sshrc (F25:
unspecified), and zoneminder (F25;
F24: information disclosure).

Gentoo has updated glibc (multiple vulnerabilities,
most from 2014 and 2015), mupdf (three
vulnerabilities), and ntfs3g (privilege escalation).

Mageia has updated gnutls (multiple vulnerabilities),
gtk-vnc (two vulnerabilities), iceape (multiple vulnerabilities), jitsi (user spoofing), libarchive (denial of service), libgd (multiple vulnerabilities), lynx (URL spoofing), mariadb (multiple vulnerabilities, almost all unspecified), netpbm (multiple vulnerabilities), openjpeg2 (multiple vulnerabilities), tomcat (information disclosure), and viewvc (cross-site scripting).

openSUSE has updated chromium
(42.2, 42.1: multiple vulnerabilities), firebird
(42.2, 42.1: access restriction bypass), java-1_7_0-openjdk (42.2, 42.1: multiple vulnerabilities), mcabber (42.2: user spoofing), mupdf (42.2, 42.1: multiple vulnerabilities), open-vm-tools (42.1: CVE with no description
from 2015), opus (42.2, 42.1: code
execution), tiff (42.2, 42.1: code
execution), and vim (42.1: code execution).

Red Hat has updated openssl
(RHEL7&6: two vulnerabilities).

Scientific Linux has updated openssl (SL7&6: two vulnerabilities).

SUSE has updated kernel (SLE12: denial of service) and kernel (SLE11:
multiple vulnerabilities, some from 2004, 2012, and 2015).

Ubuntu has updated python-crypto
(16.10, 16.04, 14.04: regression in previous update).

Friday’s security updates

Post Syndicated from jake original https://lwn.net/Articles/713554/rss

Arch Linux has updated qt5-webengine (multiple vulnerabilities) and tcpdump (multiple vulnerabilities).

CentOS has updated thunderbird (C7; C6; C5: multiple vulnerabilities).

Debian-LTS has updated ntfs-3g
(privilege escalation) and svgsalamander
(server-side request forgery).

Fedora has updated openldap (F25:
unintended cipher usage from 2015), and wavpack (F25: multiple vulnerabilities).

Mageia has updated openafs
(information leak) and pdns-recursor
(denial of service).

openSUSE has updated java-1_8_0-openjdk (42.2, 42.1: multiple vulnerabilities),
mupdf (42.2; 42.1: three vulnerabilities), phpMyAdmin (42.2, 42.1: multiple vulnerabilities, one from 2015),
and Wireshark (42.2: two denial of service flaws).

Oracle has updated thunderbird (OL7; OL6: multiple vulnerabilities).

Scientific Linux has updated libtiff (SL7&6: multiple vulnerabilities, one from 2015) and thunderbird (multiple vulnerabilities).

Ubuntu has updated kernel (16.10; 14.04;
12.04: multiple vulnerabilities), kernel, linux-raspi2, linux-snapdragon (16.04:
two vulnerabilities), linux-lts-trusty
(12.04: code execution), linux-lts-xenial
(14.04: two vulnerabilities), and tomcat
(14.04, 12.04: regression in previous update).

Thursday’s security advisories

Post Syndicated from jake original https://lwn.net/Articles/713405/rss

Debian has updated ntfs-3g
(privilege escalation).

Debian-LTS has updated openssl
(three vulnerabilities).

Fedora has updated jasper (F25:
code execution), moodle (F24: multiple vulnerabilities), and
percona-xtrabackup (F25; F24: information disclosure).

Mageia has updated libxpm (code
execution), pdns (multiple vulnerabilities), python-pycrypto (denial of service from 2013),
and wireshark (two denial of service flaws).

openSUSE has updated bzrtp (42.2,
42.1: man-in-the-middle vulnerability), firefox (42.2, 42.1: multiple vulnerabilities), nginx (42.2, 42.1; SPH
for SLE12
: denial of service), seamonkey (42.2, 42.1: code execution), and
thunderbird (42.2, 42.1; SPH for SLE12: multiple vulnerabilities).

Red Hat has updated rabbitmq-server (OSP8.0: denial of service
from 2015) and thunderbird (multiple vulnerabilities).

Ubuntu has updated gnutls26,
gnutls28
(multiple vulnerabilities), irssi (multiple vulnerabilities), iucode-tool (16.10, 16.04: code execution), libxpm (code execution), and ntfs-3g (16.10, 16.04: privilege escalation).

Security advisories for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/713266/rss

Arch Linux has updated salt (two vulnerabilities).

CentOS has updated libtiff (C7; C6: multiple vulnerabilities).

Debian has updated libgd2 (multiple vulnerabilities), ruby-archive-tar-minitar (file overwrites), and wordpress (multiple vulnerabilities).

Debian-LTS has updated ikiwiki (three vulnerabilities), libplist (two vulnerabilities), and wordpress (multiple vulnerabilities).

Gentoo has updated pcsc-lite (privilege escalation).

openSUSE has updated openssh
(42.2: multiple vulnerabilities).

Oracle has updated libtiff (OL7; OL6: multiple vulnerabilities).

Red Hat has updated libtiff
(RHEL6,7: multiple vulnerabilities).

SUSE has updated gnutls
(SLE12-SP1,2: multiple vulnerabilities) and java-1_8_0-openjdk (SLE12-SP1,2: multiple vulnerabilities).

Ubuntu has updated openssl (multiple vulnerabilities).

Security advisories for Monday

Post Syndicated from ris original https://lwn.net/Articles/713033/rss

Arch Linux has updated chromium (multiple vulnerabilities), firefox (multiple vulnerabilities), kernel (privilege escalation), lib32-openssl (three vulnerabilities), libimobiledevice (access restriction bypass), linux-lts (privilege escalation), linux-zen (privilege escalation), openssl (three vulnerabilities), and thunderbird (multiple vulnerabilities).

Debian has updated lcms2 (heap memory leak), openssl (three vulnerabilities), and tcpdump (multiple vulnerabilities).

Debian-LTS has updated bind9 (three denial of service flaws), imagemagick (multiple vulnerabilities), libgd2 (three vulnerabilities), tiff3 (invalid tiff files), and zoneminder (information leak, authentication bypass).

Fedora has updated fedmsg (F24:
insufficient signature validation), firefox
(F24: multiple vulnerabilities), flatpak
(F25: sandbox escape), ghostscript (F25; F24:
denial of service), ikiwiki (F25; F24: three vulnerabilities), libXpm (F24: code execution), mapserver (F25; F24: code execution), and pdns (F25; F24: multiple vulnerabilities).

Gentoo has updated a2ps (code
execution from 2014), ark (code execution),
chromium (multiple vulnerabilities), ffmpeg (multiple vulnerabilities), firewalld (authentication bypass), freeimage (two vulnerabilities, one from
2015), libpng (NULL dereference bug), libXpm (code execution), perl (multiple vulnerabilities, two from
2015), and squashfs-tools (two
vulnerabilities from 2015).

Mageia has updated 389-ds-base
(denial of service), libvncserver (two
vulnerabilities), mbedtls (two
vulnerabilities), nvidia-current,
ldetect-lst
(three vulnerabilities), opus (code execution), pcsc-lite (privilege escalation), python-bottle (CRLF attacks), and shadow-utils (two vulnerabilities).

openSUSE has updated gstreamer-0_10-plugins-base (42.1: code
execution), gstreamer-plugins-base (42.2:
code execution), and rabbitmq-server (42.2:
authentication bypass).

SUSE has updated gnutls
(SLE11-SP4: multiple vulnerabilities).

Ubuntu has updated firefox (multiple vulnerabilities) and thunderbird (multiple vulnerabilities).

Security advisories for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/712490/rss

Debian-LTS has updated mysql-5.5
(multiple mostly unspecified vulnerabilities).

Fedora has updated audacious
(F25: multiple vulnerabilities), audacious-plugins (F25; F24:
multiple vulnerabilities), boomaga (F24:
wrong permissions), fedmsg (F25:
insufficient signature validation), groovy
(F24: code execution), pdns-recursor (F25; F24:
multiple vulnerabilities), w3m (F24:
unspecified), and xemacs-packages-extra
(F25: unspecified).

Gentoo has updated graphite2
(multiple vulnerabilities), oracle-jre-bin
(multiple vulnerabilities), and xorg-server
(three vulnerabilities, one from 2013).

Oracle has updated mysql (OL6:
two vulnerabilities), squid (OL7:
information leak), and squid34 (OL6:
information leak).

Red Hat has updated firefox
(RHEL5,6,7: multiple vulnerabilities).

Scientific Linux has updated firefox (SL5,6,7: multiple vulnerabilities).

SUSE has updated systemd
(SLE12-SP2: privilege escalation).

Ubuntu has updated icoutils
(12.04: multiple vulnerabilities).


Security updates for Tuesday

Post Syndicated from ris original http://lwn.net/Articles/712357/rss

Debian-LTS has updated hesiod (two vulnerabilities) and tiff (multiple vulnerabilities).

Fedora has updated gd (F25; F24: two denial of service flaws) and kernel (F25; F24: privilege escalation).

Gentoo has updated adodb (two
vulnerabilities), firejail (three
vulnerabilities), icu (three
vulnerabilities), libraw (two
vulnerabilities from 2015), libwebp
(integer overflows), and t1lib (multiple
vulnerabilities from 2011).

openSUSE has updated python3-sleekxmpp (42.2: two vulnerabilities)
and virtualbox (42.2: multiple unspecified vulnerabilities).

Red Hat has updated mysql (RHEL6:
three vulnerabilities), squid (RHEL7:
information leak), and squid34 (RHEL6:
information leak).

Scientific Linux has updated java-1.8.0-openjdk (SL6,7: multiple
vulnerabilities), mysql (SL6: three
vulnerabilities), squid (SL7: information
leak), and squid34 (SL6: information leak).

Slackware has updated firefox
(multiple vulnerabilities).

Ubuntu has updated pcsc-lite (privilege escalation) and tomcat6, tomcat7, tomcat8 (multiple vulnerabilities).


Security advisories for Monday

Post Syndicated from ris original http://lwn.net/Articles/712296/rss

CentOS has updated java-1.8.0-openjdk (C7; C6: multiple vulnerabilities).

Debian has updated libphp-swiftmailer (code execution), mariadb-10.0 (multiple mostly unspecified vulnerabilities), and openjpeg2 (multiple vulnerabilities).

Debian-LTS has updated groovy (code execution) and opus (code execution).

Fedora has updated docker-latest
(F24: privilege escalation), ed (F25:
denial of service), groovy (F25: code
execution), libnl3 (F25; F24: privilege escalation), opus (F25; F24: code
execution), qemu (F25: multiple
vulnerabilities), squid (F25: two
vulnerabilities), and webkitgtk4 (F25; F24:
multiple vulnerabilities).

Gentoo has updated DBD-mysql
(multiple vulnerabilities), dcraw (denial
of service from 2015), DirectFB (two
vulnerabilities from 2014), libupnp (two
vulnerabilities), lua (code execution from
2014), ppp (denial of service from 2015),
qemu (multiple vulnerabilities), quagga (two vulnerabilities), and zlib (multiple vulnerabilities).

Mageia has updated libpng, libpng12 (NULL dereference bug).

openSUSE has updated perl-DBD-mysql (42.2, 42.1: three vulnerabilities) and xtrabackup (42.2; 42.1: information disclosure).

Oracle has updated java-1.8.0-openjdk (OL7; OL6: multiple vulnerabilities).

SUSE has updated gstreamer-0_10-plugins-good (SLE12-SP1; SLE11-SP4: multiple vulnerabilities).

Security updates for Thursday

Post Syndicated from jake original http://lwn.net/Articles/712056/rss

CentOS has updated kernel (C7:
three vulnerabilities).

Debian has updated mapserver
(code execution).

Debian-LTS has updated libav (multiple vulnerabilities)
and mapserver (code execution).

Fedora has updated ark (F25: code
execution), chicken (F25; F24: two vulnerabilities), and runc (F25: privilege escalation).

openSUSE has updated libgit2 (42.1; SPH for
SLE12
: two vulnerabilities), openjpeg2
(42.1: multiple vulnerabilities), and v8 (42.2: code execution).

Red Hat has updated java-1.6.0-sun (multiple vulnerabilities), java-1.7.0-oracle (multiple vulnerabilities), and java-1.8.0-oracle (RHEL7&6: multiple vulnerabilities).

Slackware has updated mariadb
(multiple unspecified vulnerabilities).

Ubuntu has updated mysql-5.5,
mysql-5.7
(multiple unspecified vulnerabilities).

Wednesday’s security updates

Post Syndicated from ris original http://lwn.net/Articles/711944/rss

Arch Linux has updated webkit2gtk (multiple vulnerabilities).

CentOS has updated qemu-kvm (C7: denial of service).

Debian-LTS has updated icoutils (multiple vulnerabilities).

Fedora has updated icoutils (F25; F24:
three vulnerabilities), mingw-libgsf (F25:
denial of service), and php-PHPMailer (F24:
three vulnerabilities).

openSUSE has updated bind (42.2, 42.1; 13.2: three denial of service flaws), libgit2 (13.2: two vulnerabilities), openjpeg2 (13.2: multiple vulnerabilities), pdns (42.2, 42.1, 13.2: multiple vulnerabilities), qemu (42.2: multiple vulnerabilities), and squid (42.2: three vulnerabilities, one from 2014).

Oracle has updated kernel (OL7: three vulnerabilities) and qemu-kvm (OL7: denial of service).

Red Hat has updated docker (RHEL7: privilege escalation), docker-latest (RHEL7: privilege escalation), kernel (RHEL7: three vulnerabilities), kernel-rt (RHEL7; RHEMRG2.5: three vulnerabilities), qemu-kvm (RHEL7: denial of service), and runc (RHEL7: privilege escalation).

Scientific Linux has updated kernel (SL7: three vulnerabilities) and qemu-kvm (SL7: denial of service).

SUSE has updated kernel (SLE12-SP2: multiple vulnerabilities).

Ubuntu has updated nvidia-graphics-drivers-304 and nvidia-graphics-drivers-340 (denial of service).

Monday’s security updates

Post Syndicated from ris original http://lwn.net/Articles/711773/rss

Arch Linux has updated libgit2 (multiple vulnerabilities), nginx (privilege escalation), nginx-mainline (privilege escalation), and wordpress (multiple vulnerabilities).

Debian has updated icoutils (three vulnerabilities), pdns (multiple vulnerabilities), pdns-recursor (denial of service), python-bottle (regression in previous update), and tiff (multiple vulnerabilities).

Debian-LTS has updated botan1.10 (integer overflow), gcc-mozilla (update to GCC 4.8), icedove (multiple vulnerabilities), libx11 (denial of service), otrs2 (code execution), python-bottle (regression in previous update), wireless-regdb (radio regulations updates), and xen (two vulnerabilities).

Fedora has updated bind (F25: three denial of service flaws), bind99 (F25: three denial of service flaws), ca-certificates (F25; F24: certificate update), docker-latest (F25: privilege escalation), gnutls (F24: multiple vulnerabilities), libgit2 (F25: multiple vulnerabilities), and onionshare (F25; F24: file injection).

Gentoo has updated apache (multiple vulnerabilities, one from 2014).

Mageia has updated golang (denial of service) and irssi (multiple vulnerabilities).

Red Hat has updated bind (RHEL7; RHEL5,6: denial of service) and bind97 (RHEL5: denial of service).

Scientific Linux has updated java-1.6.0-openjdk (SL5,6,7: multiple vulnerabilities).

SUSE has updated qemu (SLE12-SP2: multiple vulnerabilities).

Security advisories for Friday

Post Syndicated from jake original http://lwn.net/Articles/711577/rss

Arch Linux has updated ark (code execution), bind (multiple vulnerabilities), docker (privilege escalation), flashplugin (multiple vulnerabilities), irssi (multiple vulnerabilities), lib32-flashplugin (multiple vulnerabilities), and libvncserver (two vulnerabilities).

CentOS has updated java-1.6.0-openjdk (C7; C6; C5: multiple vulnerabilities) and kernel (three vulnerabilities).

Debian has updated rabbitmq-server (authentication bypass).

Debian-LTS has updated asterisk (two vulnerabilities, one from 2014).

Fedora has updated docker (F25: privilege escalation), libgit2 (F24: multiple vulnerabilities), and pcsc-lite (F24: privilege escalation).

Gentoo has updated postgresql (multiple vulnerabilities, two from 2015), runc (privilege escalation), and seamonkey (multiple vulnerabilities).

Mageia has updated flash-player-plugin (multiple vulnerabilities), php-ZendFramework2 (parameter injection), unzip (two vulnerabilities, one from 2014), and webmin (largely unspecified).

Oracle has updated java-1.6.0-openjdk (OL7; OL6; OL5: multiple vulnerabilities), kernel 2.6.39 (OL6; OL5: multiple vulnerabilities), kernel 3.8.13 (OL7; OL6: multiple vulnerabilities), and kernel 4.1.12 (OL7; OL6: multiple vulnerabilities).

Red Hat has updated java-1.6.0-openjdk (multiple vulnerabilities).

Scientific Linux has updated kernel (SL6: three vulnerabilities).

Security updates for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/711316/rss

Debian has updated icedove (multiple vulnerabilities).

Debian-LTS has updated tomcat7 (information disclosure).

Gentoo has updated bind (denial of service), botan (two vulnerabilities), c-ares (code execution), dbus (denial of service), expat (multiple vulnerabilities, one from 2012), flex (code execution), nginx (privilege escalation), ntfs3g (privilege escalation from 2015), p7zip (two code execution flaws), pgbouncer (two vulnerabilities), phpBB (two vulnerabilities), phpmyadmin (multiple vulnerabilities), vim (code execution), and vzctl (insecure ploop-based containers from 2015).

openSUSE has updated jasper (42.2, 42.1: multiple vulnerabilities).

Oracle has updated kernel (OL6: three vulnerabilities).

Red Hat has updated flash-plugin (RHEL6: multiple vulnerabilities), kernel (RHEL6.7: code execution), and kernel (RHEL6: three vulnerabilities).

SUSE has updated freeradius-server (SLE12-SP1,2: insufficient certificate verification) and LibVNCServer (SLE11-SP4: two vulnerabilities).

Ubuntu has updated kernel (16.10; 16.04; 14.04; 12.04: multiple vulnerabilities), linux-lts-trusty (12.04: multiple vulnerabilities), linux-lts-xenial (14.04: three vulnerabilities), linux-raspi2 (16.10; 16.04: two vulnerabilities), linux-snapdragon (16.04: two vulnerabilities), linux-ti-omap4 (12.04: two vulnerabilities), and webkit2gtk (16.04: multiple vulnerabilities).