Tag Archives: C5

Parallel Processing in Python with AWS Lambda

Post Syndicated from Oz Akan original https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/

If you develop an AWS Lambda function with Node.js, you can call multiple web services without waiting for a response due to its asynchronous nature. All requests are initiated almost in parallel, so you can get results much faster than with a series of sequential calls to each web service. Given the maximum execution duration for Lambda, running I/O bound tasks in parallel is especially beneficial.

If you develop a Lambda function with Python, parallelism doesn’t come by default. Lambda supports Python 2.7 and Python 3.6, both of which have multiprocessing and threading modules. The multiprocessing module supports multiple cores, so it is the better choice, especially for CPU intensive workloads. With the threading module, all threads run on a single core, though the performance difference is negligible for network-bound tasks.

In this post, I demonstrate how the Python multiprocessing module can be used within a Lambda function to run multiple I/O bound tasks in parallel.

Example use case

In this example, you call Amazon EC2 and Amazon EBS API operations to find the total EBS volume size for all your EC2 instances in a region.

This is a two-step process:

  • The Lambda function calls EC2 to list all EC2 instances
  • The function calls EBS for each instance to find attached EBS volumes

Sequential Execution

If you make these calls sequentially, during the second step, your code has to loop over all the instances and wait for each response before moving to the next request.

The class named VolumesSequential has the following methods:

  • __init__ creates an EC2 resource.
  • instance_volumes finds the total size of the EBS volumes attached to a single EC2 instance.
  • total_size lists all EC2 instances, calls instance_volumes for each one, and sums the results to produce the total size of all EBS volumes.

Source Code for Sequential Execution

import time
import boto3

class VolumesSequential(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        return instance_total

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running sequentially"
        instances = self.ec2.instances.all()
        instances_total = 0
        for instance in instances:
            instances_total += self.instance_volumes(instance)
        return instances_total

def lambda_handler(event, context):
    volumes = VolumesSequential()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Parallel Execution

The multiprocessing module that comes with Python 2.7 lets you run multiple processes in parallel. Because the Lambda execution environment does not have /dev/shm (shared memory for processes) support, you can’t use multiprocessing.Queue or multiprocessing.Pool.

If you try to use multiprocessing.Queue, you get an error similar to the following:

[Errno 38] Function not implemented: OSError
…
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented

On the other hand, you can use multiprocessing.Pipe instead of multiprocessing.Queue to accomplish what you need without getting any errors during the execution of the Lambda function.

The class named VolumesParallel has the following methods:

  • __init__ creates an EC2 resource
  • instance_volumes finds the total size of EBS volumes attached to an instance
  • total_size finds all instances and runs instance_volumes for each to find the total size of all EBS volumes attached to all EC2 instances.

Source Code for Parallel Execution

import time
from multiprocessing import Process, Pipe
import boto3

class VolumesParallel(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance, conn):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        conn.send([instance_total])
        conn.close()

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running in parallel"

        # get all EC2 instances
        instances = self.ec2.instances.all()
        
        # create a list to keep all processes
        processes = []

        # create a list to keep connections
        parent_connections = []
        
        # create a process per instance
        for instance in instances:            
            # create a pipe for communication
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)

            # create the process, pass instance and connection
            process = Process(target=self.instance_volumes, args=(instance, child_conn,))
            processes.append(process)

        # start all processes
        for process in processes:
            process.start()

        # make sure that all processes have finished
        for process in processes:
            process.join()

        instances_total = 0
        for parent_connection in parent_connections:
            instances_total += parent_connection.recv()[0]

        return instances_total


def lambda_handler(event, context):
    volumes = VolumesParallel()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Performance

There are a few differences between the two Lambda functions when it comes to the execution environment. The parallel function requires more memory than the sequential one. You may run the parallel Lambda function with a relatively large memory setting to see how much memory it uses. The amount of memory required by the Lambda function depends on what the function does and how many processes it runs in parallel. To restrict maximum memory usage, you may want to limit the number of parallel executions.
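
One straightforward way to limit parallel executions is to start the worker processes in fixed-size batches instead of all at once. The following is a minimal sketch of that idea and is not part of the original function; the batch size of 10 and the helper name are arbitrary illustrative choices.

from multiprocessing import Process, Pipe

def run_in_batches(tasks, batch_size=10):
    """Run (target, args) pairs at most batch_size processes at a time.
    Each target is expected to send a single-item list over its pipe,
    as instance_volumes does above. Purely illustrative."""
    results = []
    for start in range(0, len(tasks), batch_size):
        processes = []
        parent_connections = []
        for target, args in tasks[start:start + batch_size]:
            # one pipe per child process
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)
            processes.append(Process(target=target, args=args + (child_conn,)))
        for process in processes:
            process.start()
        for process in processes:
            process.join()
        for parent_conn in parent_connections:
            results.append(parent_conn.recv()[0])
    return results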

In this case, when you allocate 1024 MB to both Lambda functions, the parallel function runs about two times faster than the sequential function. I have a handful of EC2 instances and EBS volumes in my account, so the test ran well under the maximum execution limit for Lambda. Remember that parallel execution doesn’t guarantee that the runtime for the Lambda function will be under the maximum allowed duration, but it does speed up the overall execution time.

Sequential Run Time Output

START RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Version: $LATEST
Running sequentially
Total volume size: 589 GB
Sequential execution time: 3.80066084862 seconds
END RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e
REPORT RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Duration: 4091.59 ms Billed Duration: 4100 ms  Memory Size: 1024 MB Max Memory Used: 46 MB

Parallel Run Time Output

START RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Version: $LATEST
Running in parallel
Total volume size: 589 GB
Parallel execution time: 1.89170885086 seconds
END RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d
REPORT RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Duration: 2069.33 ms Billed Duration: 2100 ms  Memory Size: 1024 MB Max Memory Used: 181 MB 

Summary

In this post, I demonstrated how to run multiple I/O bound tasks in parallel by developing a Lambda function with the Python multiprocessing module. With the help of this module, the CPU is no longer stuck waiting on I/O, and several tasks can run at once to fit more I/O bound operations into a given time frame. This can reduce the overall runtime of a Lambda function, especially when you have many such tasks and don’t want to split the work into smaller chunks across multiple functions.

Kernel prepatch 4.12-rc6

Post Syndicated from corbet original https://lwn.net/Articles/725787/rss

The 4.12-rc6 kernel prepatch is out for
testing. “The good news is that rc6 is smaller than rc5 was, and I think we’re
back on track and rc5 really was big just due to random timing. We’ll
see. Next weekend when I’m back home and do rc7, I’ll see how I feel
about things. I’m still hopeful that this would be a normal release
cycle where rc7 is the last rc.”

4.12-rc5 kernel prepatch has been released

Post Syndicated from jake original https://lwn.net/Articles/725099/rss

The 4.12-rc5 prepatch is out; it is rather
larger than others in this cycle, Linus Torvalds said. “It’s not like rc5 is *huge*, but it definitely isn’t the nice and
small one I was hoping for. There’s nothing in [particular] that looks
very worrisome, and it may well just be random timing – the rc sizes
do fluctuate a lot depending on just which subsystem gets synced up
that particular rc, and we may just have hit that “everybody happened
to sync up this week” case.”

How to Deploy Local Administrator Password Solution with AWS Microsoft AD

Post Syndicated from Dragos Madarasan original https://aws.amazon.com/blogs/security/how-to-deploy-local-administrator-password-solution-with-aws-microsoft-ad/

Local Administrator Password Solution (LAPS) from Microsoft simplifies password management by allowing organizations to use Active Directory (AD) to store unique passwords for computers. Typically, an organization might reuse the same local administrator password across the computers in an AD domain. However, this approach represents a security risk because it can be exploited during lateral escalation attacks. LAPS solves this problem by creating unique, randomized passwords for the Administrator account on each computer and storing them encrypted in AD.

Deploying LAPS with AWS Microsoft AD requires the following steps:

  1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain. The binaries add client-side extension (CSE) functionality to the Group Policy client.
  2. Extend the AWS Microsoft AD schema. LAPS requires new AD attributes to store an encrypted password and its expiration time.
  3. Configure AD permissions and delegate the ability to retrieve the local administrator password for IT staff in your organization.
  4. Configure Group Policy on instances joined to your AWS Microsoft AD domain to enable LAPS. This configures the Group Policy client to process LAPS settings and uses the binaries installed in Step 1.

The following diagram illustrates the setup that I will be using throughout this post and the associated tasks to set up LAPS. Note that the AWS Directory Service directory is deployed across multiple Availability Zones, and monitoring automatically detects and replaces domain controllers that fail.

Diagram illustrating this blog post's solution

In this blog post, I explain the prerequisites to set up Local Administrator Password Solution, demonstrate the steps involved to update the AD schema on your AWS Microsoft AD domain, show how to delegate permissions to IT staff and configure LAPS via Group Policy, and demonstrate how to retrieve the password using the graphical user interface or with Windows PowerShell.

This post assumes you are familiar with Lightweight Directory Access Protocol Data Interchange Format (LDIF) files and AWS Microsoft AD. If you need more of an introduction to Directory Service and AWS Microsoft AD, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service, which introduces working with schema changes in AWS Microsoft AD.

Prerequisites

In order to implement LAPS, you must use AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD. Any instance on which you want to configure LAPS must be joined to your AWS Microsoft AD domain. You also need a Management instance on which you install the LAPS management tools.

In this post, I use an AWS Microsoft AD domain called example.com that I have launched in the EU (London) region. To see the regions in which Directory Service is available, see AWS Regions and Endpoints.

Screenshot showing the AWS Microsoft AD domain example.com used in this blog post

In addition, you must have at least two instances launched in the same region as the AWS Microsoft AD domain. To join the instances to your AWS Microsoft AD domain, you have two options:

  1. Use the Amazon EC2 Systems Manager (SSM) domain join feature. To learn more about how to set up domain join for EC2 instances, see joining a Windows Instance to an AWS Directory Service Domain.
  2. Manually configure the DNS server addresses in the Internet Protocol version 4 (TCP/IPv4) settings of the network card to use the AWS Microsoft AD DNS addresses (172.31.9.64 and 172.31.16.191, for this blog post) and perform a manual domain join.

For the purpose of this post, my two instances are:

  1. A Management instance, tagged as Management, on which I will install the LAPS management tools.
  2. A Web Server instance, tagged as Web Server, on which I will deploy the LAPS binaries.

Screenshot showing the two EC2 instances used in this post

Implementing the solution

 

1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain by using EC2 Run Command

LAPS binaries come in the form of an MSI installer and can be downloaded from the Microsoft Download Center. You can install the LAPS binaries manually, with an automation service such as EC2 Run Command, or with your existing software deployment solution.

For this post, I will deploy the LAPS binaries on my Web Server instance (i-0b7563d0f89d3453a) by using EC2 Run Command:

  1. While signed in to the AWS Management Console, choose EC2. In the Systems Manager Services section of the navigation pane, choose Run Command.
  2. Choose Run a command, and from the Command document list, choose AWS-InstallApplication.
  3. From Target instances, choose the instance on which you want to deploy the LAPS binaries. In my case, I will be selecting the instance tagged as Web Server. If you do not see any instances listed, make sure you have met the prerequisites for Amazon EC2 Systems Manager (SSM) by reviewing the Systems Manager Prerequisites.
  4. For Action, choose Install, and then specify the following values:
    • Parameters: /quiet
    • Source: https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi
    • Source Hash: f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766
    • Comment: LAPS deployment

Leave the other options with the default values and choose Run. The AWS Management Console will return a Command ID, which will initially have a status of In Progress. It should take less than 5 minutes to download and install the binaries, after which the Command ID will update its status to Success.

Status showing the binaries have been installed successfully

If the Command ID runs for more than 5 minutes or returns an error, it might indicate a problem with the installer. To troubleshoot, review the steps in Troubleshooting Systems Manager Run Command.

To verify the binaries have been installed successfully, open Control Panel and review the recently installed applications in Programs and Features.

Screenshot of Control Panel that confirms LAPS has been installed successfully

You should see an entry for Local Administrator Password Solution with a version of 6.2.0.0 or newer.
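
If you prefer to script this step instead of clicking through the console, the same installation can be triggered through the SSM API. Here is a minimal boto3 sketch; the parameter names mirror the console fields above and should be treated as assumptions rather than a verified recipe, and the instance ID is the Web Server instance from this post.

import boto3

ssm = boto3.client('ssm')

# Parameter names below mirror the AWS-InstallApplication console fields shown
# above; treat them as assumptions, not a verified recipe.
response = ssm.send_command(
    InstanceIds=['i-0b7563d0f89d3453a'],  # the Web Server instance
    DocumentName='AWS-InstallApplication',
    Comment='LAPS deployment',
    Parameters={
        'action': ['Install'],
        'parameters': ['/quiet'],
        'source': ['https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi'],
        'sourceHash': ['f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766'],
    },
)

# The returned command ID can be used to track the installation status
print(response['Command']['CommandId'])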

2. Extend the AWS Microsoft AD schema

In the previous section, I used EC2 Run Command to install the LAPS binaries on an EC2 instance. Now, I am ready to extend the schema in an AWS Microsoft AD domain. Extending the schema is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time.

In an on-premises AD environment, you would update the schema by running the Update-AdmPwdADSchema Windows PowerShell cmdlet with schema administrator credentials. Because AWS Microsoft AD is a managed service, I do not have permissions to update the schema directly. Instead, I will update the AD schema from the Directory Service console by importing an LDIF file. If you are unfamiliar with schema updates or LDIF files, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

To make things easier for you, I am providing you with a sample LDIF file that contains the required AD schema changes. Using Notepad or a similar text editor, open the SchemaChanges-0517.ldif file and update the values of dc=example,dc=com with your own AWS Microsoft AD domain and suffix.

After I update the LDIF file with my AWS Microsoft AD details, I import it by using the AWS Management Console:

  1. On the Directory Service console, select the Microsoft AD directory from the list of directories by choosing its identifier (it will look something like d-534373570ea).
  2. On the Directory details page, choose the Schema extensions tab and choose Upload and update schema.
    Screenshot showing the "Upload and update schema" option
  3. When prompted for the LDIF file that contains the changes, choose the sample LDIF file.
  4. In the background, the LDIF file is validated for errors and a backup of the directory is created for recovery purposes. Updating the schema might take a few minutes, and the status will change to Updating Schema.

Screenshot showing the schema updates in progress
When the process has completed, the status of Completed will be displayed, as shown in the following screenshot.

Screenshot showing the process has completed

If the LDIF file contains errors or the schema extension fails, the Directory Service console will generate an error code and additional debug information. To help troubleshoot error messages, see Schema Extension Errors.

The sample LDIF file triggers AWS Microsoft AD to perform the following actions:

  1. Create the ms-Mcs-AdmPwd attribute, which stores the encrypted password.
  2. Create the ms-Mcs-AdmPwdExpirationTime attribute, which stores the time of the password’s expiration.
  3. Add both attributes to the Computer class.

3. Configure AD permissions

In the previous section, I updated the AWS Microsoft AD schema with the required attributes for LAPS. I am now ready to configure the permissions for administrators to retrieve the password and for computer accounts to update their password attribute.

As part of configuring AD permissions, I grant computers the ability to update their own password attribute and specify which security groups have permissions to retrieve the password from AD. As part of this process, I run Windows PowerShell cmdlets that are not installed by default on Windows Server.

Note: To learn more about Windows PowerShell and the concept of a cmdlet (pronounced “command-let”), go to Getting Started with Windows PowerShell.

Before getting started, I need to set up the required tools for LAPS on my Management instance, which must be joined to the AWS Microsoft AD domain. I will be using the same LAPS installer that I downloaded from the Microsoft Download Center. On my Management instance, I manually ran the installer by double-clicking the LAPS.x64.msi file. On the Custom Setup page of the installer, under Management Tools, for each option I have selected Install on local hard drive.

Screenshot showing the required management tools

In the preceding screenshot, the features are:

  • The fat client UI – A simple user interface for retrieving the password (I will use it at the end of this post).
  • The Windows PowerShell module – Needed to run the commands in the next sections.
  • The GPO Editor templates – Used to configure Group Policy objects.

The next step is to grant computers in the Computers OU the permission to update their own attributes. While connected to my Management instance, I go to the Start menu and type PowerShell. In the list of results, right-click Windows PowerShell and choose Run as administrator and then Yes when prompted by User Account Control.

In the Windows PowerShell prompt, I type the following command.

Import-Module AdmPwd.PS

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com"

To grant the administrator group called Admins the permission to retrieve the computer password, I run the following command in the Windows PowerShell prompt I previously started.

Import-Module AdmPwd.PS

Set-AdmPwdReadPasswordPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com" -AllowedPrincipals "Admins"

4. Configure Group Policy to enable LAPS

In the previous section, I deployed the LAPS management tools on my management instance, granted the computer accounts the permission to self-update their local administrator password attribute, and granted my Admins group permissions to retrieve the password.

Note: The following section addresses the Group Policy Management Console and Group Policy objects. If you are unfamiliar with or wish to learn more about these concepts, go to Get Started Using the GPMC and Group Policy for Beginners.

I am now ready to enable LAPS via Group Policy:

  1. On my Management instance (i-03b2c5d5b1113c7ac), I have installed the Group Policy Management Console (GPMC) by running the following command in Windows PowerShell.
Install-WindowsFeature -Name GPMC
  2. Next, I have opened the GPMC and created a new Group Policy object (GPO) called LAPS GPO.
  3. In the Group Policy Management Editor, I navigate to Computer Configuration > Policies > Administrative Templates > LAPS. I have configured the settings using the values in the following table.

Setting | State | Options
Password Settings | Enabled | Complexity: large letters, small letters, numbers, specials
Do not allow password expiration time longer than required by policy | Enabled | N/A
Enable local admin password management | Enabled | N/A

  4. Next, I need to link the GPO to an organizational unit (OU) in which my machine accounts sit. In your environment, I recommend testing the new settings on a test OU and then deploying the GPO to production OUs.

Note: If you choose to create a new test organizational unit, you must create it in the OU that AWS Microsoft AD delegates to you to manage. For example, if your AWS Microsoft AD directory name were example.com, the test OU path would be example.com/example/Computers/Test.

  5. To test that LAPS works, I need to make sure the computer has received the new policy by forcing a Group Policy update. While connected to the Web Server instance (i-0b7563d0f89d3453a) using Remote Desktop, I open an elevated administrative command prompt and run the following command: gpupdate /force. I can check whether the policy is applied by running the command: gpresult /r | findstr /c:"LAPS GPO", where LAPS GPO is the name of the GPO created in the second step.
  6. Back on my Management instance, I can then launch the LAPS interface from the Start menu and use it to retrieve the password (as shown in the following screenshot). Alternatively, I can run the Get-ADComputer Windows PowerShell cmdlet to retrieve the password.
Get-ADComputer [YourComputerName] -Properties ms-Mcs-AdmPwd | select name, ms-Mcs-AdmPwd

Screenshot of the LAPS UI, which you can use to retrieve the password

Summary

In this blog post, I demonstrated how you can deploy LAPS with an AWS Microsoft AD directory. I then showed how to install the LAPS binaries by using EC2 Run Command. Using the sample LDIF file I provided, I showed you how to extend the schema, which is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time. Finally, I showed how to complete the LAPS setup by configuring the necessary AD permissions and creating the GPO that starts the LAPS password change.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the Directory Service forum.

– Dragos

AWS and the General Data Protection Regulation (GDPR)

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-and-the-general-data-protection-regulation/

European Union image

Just over a year ago, the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR is the biggest change in data protection laws in Europe since the 1995 introduction of the European Union (EU) Data Protection Directive, also known as Directive 95/46/EC. The GDPR aims to strengthen the security and protection of personal data in the EU and will replace the Directive and all local laws relating to it.

AWS welcomes the arrival of the GDPR. The new, robust requirements raise the bar for data protection, security, and compliance, and will push the industry to follow the most stringent controls, helping to make everyone more secure. I am happy to announce today that all AWS services will comply with the GDPR when it becomes enforceable on May 25, 2018.

In this blog post, I explain the work AWS is doing to help customers with the GDPR as part of our continued commitment to help ensure they can comply with EU Data Protection requirements.

What has AWS been doing?

AWS continually maintains a high bar for security and compliance across all of our regions around the world. This has always been our highest priority—truly “job zero.” The AWS Cloud infrastructure has been architected to offer customers the most powerful, flexible, and secure cloud-computing environment available today. AWS also gives you a number of services and tools to enable you to build GDPR-compliant infrastructure on top of AWS.

One tool we give you is a Data Processing Agreement (DPA). I’m happy to announce today that we have a DPA that will meet the requirements of the GDPR. This GDPR DPA is available now to all AWS customers to help you prepare for May 25, 2018, when the GDPR becomes enforceable. For additional information about the new GDPR DPA or to obtain a copy, contact your AWS account manager.

In addition to account managers, we have teams of compliance experts, data protection specialists, and security experts working with customers across Europe to answer their questions and help them prepare for running workloads in the AWS Cloud after the GDPR comes into force. To further answer customers’ questions, we have updated our EU Data Protection website. This website includes information about what the GDPR is, the changes it brings to organizations operating in the EU, the services AWS offers to help you comply with the GDPR, and advice about how you can prepare.

Another topic we cover on the EU Data Protection website is AWS’s compliance with the CISPE Code of Conduct. The CISPE Code of Conduct helps cloud customers ensure that their cloud infrastructure provider is using appropriate data protection standards to protect their data in a manner consistent with the GDPR. AWS has declared that Amazon EC2, Amazon S3, Amazon RDS, AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon Elastic Block Storage (Amazon EBS) are fully compliant with the CISPE Code of Conduct. This declaration provides customers with assurances that they fully control their data in a safe, secure, and compliant environment when they use AWS. For more information about AWS’s compliance with the CISPE Code of Conduct, go to the CISPE website.

As well as giving customers a number of tools and services to build GDPR-compliant environments, AWS has achieved a number of internationally recognized certifications and accreditations. In the process, AWS has demonstrated compliance with third-party assurance frameworks such as ISO 27017 for cloud security, ISO 27018 for cloud privacy, PCI DSS Level 1, and SOC 1, SOC 2, and SOC 3. AWS also helps customers meet local security standards such as BSI’s Common Cloud Computing Controls Catalogue (C5) that is important in Germany. We will continue to pursue certifications and accreditations that are important to AWS customers.

What can you do?

Although the GDPR will not be enforceable until May 25, 2018, we are encouraging our customers and partners to start preparing now. If you have already implemented a high bar for compliance, security, and data privacy, the move to GDPR should be simple. However, if you have yet to start your journey to GDPR compliance, we urge you to start reviewing your security, compliance, and data protection processes now to ensure a smooth transition in May 2018.

You should consider the following key points in preparation for GDPR compliance:

  • Territorial reach – Determining whether the GDPR applies to your organization’s activities is essential to ensuring your organization’s ability to satisfy its compliance obligations.
  • Data subject rights – The GDPR enhances the rights of data subjects in a number of ways. You will need to make sure you can accommodate the rights of data subjects if you are processing their personal data.
  • Data breach notifications – If you are a data controller, you must report data breaches to the data protection authorities without undue delay and in any event within 72 hours of you becoming aware of a data breach.
  • Data protection officer (DPO) – You may need to appoint a DPO who will manage data security and other issues related to the processing of personal data.
  • Data protection impact assessment (DPIA) – You may need to conduct and, in some circumstances, you might be required to file with the supervisory authority a DPIA for your processing activities.
  • Data processing agreement (DPA) – You may need a DPA that will meet the requirements of the GDPR, particularly if personal data is transferred outside the European Economic Area.

AWS offers a wide range of services and features to help customers meet requirements of the GDPR, including services for access controls, monitoring, logging, and encryption. For more information about these services and features, see EU Data Protection.

At AWS, security, data protection, and compliance are our top priorities, and we will continue to work vigilantly to ensure that our customers are able to enjoy the benefits of AWS securely, compliantly, and without disruption in Europe and around the world. As we head toward May 2018, we will share more news and resources with you to help you comply with the GDPR.

– Steve

Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena

Post Syndicated from Sai Sriparasa original https://aws.amazon.com/blogs/big-data/aws-cloudtrail-and-amazon-athena-dive-deep-to-analyze-security-compliance-and-operational-activity/

As organizations move their workloads to the cloud, audit logs provide a wealth of information on the operations, governance, and security of assets and resources. As the complexity of the workloads increases, so does the volume of audit logs being generated. It becomes increasingly difficult for organizations to analyze and understand what is happening in their accounts without a significant investment of time and resources.

AWS CloudTrail and Amazon Athena help make it easier by combining the detailed CloudTrail log files with the power of the Athena SQL engine to easily find, analyze, and respond to changes and activities in an AWS account.

AWS CloudTrail records API calls and account activities and publishes the log files to Amazon S3. Account activity is tracked as an event in the CloudTrail log file. Each event carries information such as who performed the action, when the action was done, which resources were impacted, and many more details. Multiple events are stitched together and structured in a JSON format within the CloudTrail log files.

Amazon Athena uses Apache Hive’s data definition language (DDL) to create tables and Presto, a distributed SQL engine, to run queries. Apache Hive does not natively support files in JSON, so we’ll have to use a SerDe to help Hive understand how the records should be processed. A SerDe interface is a combination of a serializer and deserializer. A deserializer helps take data and convert it into a Java object while the serializer helps convert the Java object into a usable representation.

In this blog post, we will walk through how to set up and use the recently released Amazon Athena CloudTrail SerDe to query CloudTrail log files for EC2 security group modifications, console sign-in activity, and operational account activity. This post assumes that customers already have AWS CloudTrail configured. For more information about configuring CloudTrail, see Getting Started with AWS CloudTrail in the AWS CloudTrail User Guide.
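
Before setting up Athena, it can help to confirm that CloudTrail is actually delivering log files and to note the destination bucket. A quick boto3 sketch for that check follows; it only reads trail metadata and makes no assumptions beyond what your account returns.

import boto3

cloudtrail = boto3.client('cloudtrail')

# List each configured trail and the S3 bucket/prefix it delivers logs to
for trail in cloudtrail.describe_trails()['trailList']:
    print(trail['Name'], trail.get('S3BucketName'), trail.get('S3KeyPrefix'))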

Setting up Amazon Athena

Let’s start by signing in to the Amazon Athena console and performing the following steps.

Create a table in the default sampledb database using the CloudTrail SerDe. The easiest way to create the table is to copy and paste the following query into the Athena query editor, modify the LOCATION value, and then run the query.

Replace:

LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<optional – AWS_Account_ID>/'

with the S3 bucket where your CloudTrail log files are delivered. For example, if your CloudTrail S3 bucket is named “aws-sai-sriparasa” and you set up a log file prefix of “/datalake/cloudtrail/”, you would edit the LOCATION statement as follows:

LOCATION 's3://aws-sai-sriparasa/datalake/cloudtrail/'

CREATE EXTERNAL TABLE cloudtrail_logs (
eventversion STRING,
userIdentity STRUCT<
  type:STRING,
  principalid:STRING,
  arn:STRING,
  accountid:STRING,
  invokedby:STRING,
  accesskeyid:STRING,
  userName:STRING,
  sessioncontext:STRUCT<
    attributes:STRUCT<
      mfaauthenticated:STRING,
      creationdate:STRING>,
    sessionIssuer:STRUCT<
      type:STRING,
      principalId:STRING,
      arn:STRING,
      accountId:STRING,
      userName:STRING>>>,
eventTime STRING,
eventSource STRING,
eventName STRING,
awsRegion STRING,
sourceIpAddress STRING,
userAgent STRING,
errorCode STRING,
errorMessage STRING,
requestParameters STRING,
responseElements STRING,
additionalEventData STRING,
requestId STRING,
eventId STRING,
resources ARRAY<STRUCT<
  ARN:STRING,accountId:
  STRING,type:STRING>>,
eventType STRING,
apiVersion STRING,
readOnly STRING,
recipientAccountId STRING,
serviceEventDetails STRING,
sharedEventID STRING,
vpcEndpointId STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<optional – AWS_Account_ID>/';

After the query has been executed, a new table named cloudtrail_logs will be added to Athena with the following table properties.

Screenshot showing the cloudtrail_logs table properties

Athena charges you by the amount of data scanned per query.  You can save on costs and get better performance when querying CloudTrail log files by partitioning the data to the time ranges you are interested in.  For more information on pricing, see Athena pricing.  To better understand how to partition data for use in Athena, see Analyzing Data in S3 using Amazon Athena.
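
If you want to run these queries programmatically, for example on a schedule, you can submit the same SQL through the Athena API instead of the console. The following boto3 sketch shows the pattern; the results bucket in OutputLocation is a placeholder you would replace with your own.

import time
import boto3

athena = boto3.client('athena')

# Submit a query against the cloudtrail_logs table; the output bucket is a placeholder
execution = athena.start_query_execution(
    QueryString="SELECT eventname, count(*) AS total FROM cloudtrail_logs GROUP BY eventname ORDER BY total DESC LIMIT 10",
    QueryExecutionContext={'Database': 'default'},
    ResultConfiguration={'OutputLocation': 's3://your-athena-results-bucket/'},
)
query_id = execution['QueryExecutionId']

# Wait for the query to finish, then print each row of the result set
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

if state == 'SUCCEEDED':
    for row in athena.get_query_results(QueryExecutionId=query_id)['ResultSet']['Rows']:
        print([col.get('VarCharValue') for col in row['Data']])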

Popular use cases

These use cases focus on:

  • Amazon EC2 security group modifications
  • Console Sign-in activity
  • Operational account activity

EC2 security group modifications

When reviewing an operational issue or security incident for an EC2 instance, the ability to see any associated security group change is a vital part of the analysis.

For example, if an EC2 instance triggers a CloudWatch metric alarm for high CPU utilization, we can first look to see if there have been any security group changes (the addition of new security groups or the addition of ingress rules to an existing security group) that potentially create more traffic or load on the instance. To start the investigation, we need to look in the EC2 console for the network interface ID and security groups of the impacted EC2 instance. Here is an example:

Network interface ID = eni-6c5ca5a8

Security group(s) = sg-5887f224, sg-e214609e

The following query can help us dive deep into the security group analysis. We’ll configure the query to filter for our network interface ID, security groups, and a time range starting 12 hours before the alarm occurred so we’re aware of recent changes. (CloudTrail log files use the ISO 8601 data elements and interchange format for date and time representation.)

Identify any security group changes for our EC2 instance:

select eventname, useridentity.username, sourceIPAddress, eventtime, requestparameters from cloudtrail_logs
where (requestparameters like '%sg-5887f224%' or requestparameters like '%sg-e214609e%' or requestparameters like '%eni-6c5ca5a8%')
and eventtime > '2017-02-15T00:00:00Z'
order by eventtime asc;

This query returned the following results:

eventname username sourceIPAddress eventtime requestparameters
DescribeInstances 72.21.196.68 2017-02-15T00:57:23Z {“instancesSet”:{},”filterSet”:{“items”:[{“name”:”instance.group-id”,”valueSet”:{“items”:[{“value”:”sg-5887f224″}]}}]}}
DescribeInstances 72.21.196.68 2017-02-15T00:57:24Z {“instancesSet”:{},”filterSet”:{“items”:[{“name”:”instance.group-id”,”valueSet”:{“items”:[{“value”:”sg-e214609e”}]}}]}}
DescribeInstances 72.21.196.68 2017-02-15T17:06:01Z {“instancesSet”:{},”filterSet”:{“items”:[{“name”:”instance.group-id”,”valueSet”:{“items”:[{“value”:”sg-e214609e”}]}}]}}
DescribeInstances 72.21.196.68 2017-02-15T17:06:01Z {“instancesSet”:{},”filterSet”:{“items”:[{“name”:”instance.group-id”,”valueSet”:{“items”:[{“value”:”sg-5887f224″}]}}]}}
DescribeSecurityGroups 72.21.196.70 2017-02-15T23:28:20Z {“securityGroupSet”:{},”securityGroupIdSet”:{“items”:[{“groupId”:”sg-e214609e”}]},”filterSet”:{}}
DescribeInstances 72.21.196.69 2017-02-16T11:25:23Z {“instancesSet”:{},”filterSet”:{“items”:[{“name”:”instance.group-id”,”valueSet”:{“items”:[{“value”:”sg-e214609e”}]}}]}}
DescribeInstances 72.21.196.69 2017-02-16T11:25:23Z {“instancesSet”:{},”filterSet”:{“items”:[{“name”:”instance.group-id”,”valueSet”:{“items”:[{“value”:”sg-5887f224″}]}}]}}
ModifyNetworkInterfaceAttribute bobodell 72.21.196.64 2017-02-16T19:09:55Z {“networkInterfaceId”:”eni-6c5ca5a8″,”groupSet”:{“items”:[{“groupId”:”sg-e214609e”},{“groupId”:”sg-5887f224″}]}}
AuthorizeSecurityGroupIngress bobodell 72.21.196.64 2017-02-16T19:42:02Z {“groupId”:”sg-5887f224″,”ipPermissions”:{“items”:[{“ipProtocol”:”tcp”,”fromPort”:143,”toPort”:143,”groups”:{},”ipRanges”:{“items”:[{“cidrIp”:”0.0.0.0/0″}]},”ipv6Ranges”:{},”prefixListIds”:{}},{“ipProtocol”:”tcp”,”fromPort”:143,”toPort”:143,”groups”:{},”ipRanges”:{},”ipv6Ranges”:{“items”:[{“cidrIpv6″:”::/0″}]},”prefixListIds”:{}}]}}

The results show that the ModifyNetworkInterfaceAttribute and AuthorizeSecurityGroupIngress API calls may have impacted the EC2 instance. The first call was initiated by user bobodell and associated two security groups with the EC2 instance. The second call, also initiated by user bobodell, was made approximately 33 minutes later and successfully opened TCP port 143 (IMAP) to the world (cidrIp: 0.0.0.0/0).

Although these changes may have been authorized, these details can be used to piece together a timeline of activity leading up to the alarm.

Console Sign-in activity

Whether it’s to help meet a compliance standard such as PCI, adhering to a best practice security framework such as NIST, or just wanting to better understand who is accessing your assets, auditing your login activity is vital.

The following query can help identify the AWS Management Console logins that occurred over a 24-hour period. It returns details such as user name, IP address, time of day, whether the login was from a mobile console version, and whether multi-factor authentication was used.

select useridentity.username, sourceipaddress, eventtime, additionaleventdata
from default.cloudtrail_logs
where eventname = 'ConsoleLogin'
and eventtime >= '2017-02-17T00:00:00Z'
and eventtime < '2017-02-18T00:00:00Z';

Because potentially hundreds of logins occur every day, it’s important to identify those that seem to be outside the normal course of business. The following query returns logins that occurred outside our network (72.21.0.0/16), those that occurred using a mobile console version, and those that occurred between midnight and 5:00 A.M.

select useridentity.username, sourceipaddress, json_extract_scalar(additionaleventdata, '$.MobileVersion') as MobileVersion, eventtime, additionaleventdata
from default.cloudtrail_logs 
where eventname = 'ConsoleLogin' 
and (json_extract_scalar(additionaleventdata, '$.MobileVersion') = 'Yes' 
or sourceipaddress not like '72.21.%' 
and eventtime >= '2017-02-17T00:00:00Z'
and eventtime < '2017-02-17T05:00:00Z');

Operational account activity

An important part of running workloads in AWS is understanding recurring errors, how administrators and employees are interacting with your workloads, and who or what is using root privileges in your account.

AWS event errors

Recurring error messages can be a sign of an incorrectly configured policy, the wrong permissions applied to an application, or an unknown change in your workloads. The following query shows the top 10 errors that have occurred from the start of the year.

select count (*) as TotalEvents, eventname, errorcode, errormessage 
from cloudtrail_logs
where errorcode is not null
and eventtime >= '2017-01-01T00:00:00Z' 
group by eventname, errorcode, errormessage
order by TotalEvents desc
limit 10;

The results show:

TotalEvents eventname errorcode errormessage
1098 DescribeAlarms ValidationException 1 validation error detected: Value ‘INVALID_FOR_SUMMARY’ at ‘stateValue’ failed to satisfy constraint: Member must satisfy enum value set: [INSUFFICIENT_DATA, ALARM, OK]
182 GetBucketPolicy NoSuchBucketPolicy The bucket policy does not exist
179 HeadBucket AccessDenied Access Denied
48 GetAccountPasswordPolicy NoSuchEntityException The Password Policy with domain name 341277845616 cannot be found.
36 GetBucketTagging NoSuchTagSet The TagSet does not exist
36 GetBucketReplication ReplicationConfigurationNotFoundError The replication configuration was not found
36 GetBucketWebsite NoSuchWebsiteConfiguration The specified bucket does not have a website configuration
32 DescribeNetworkInterfaces Client.RequestLimitExceeded Request limit exceeded.
30 GetBucketCors NoSuchCORSConfiguration The CORS configuration does not exist
30 GetBucketLifecycle NoSuchLifecycleConfiguration The lifecycle configuration does not exist

These errors might indicate an incorrectly configured CloudWatch alarm or S3 bucket policy.

Top IAM users

The following query shows the top IAM users and activities by eventname from the beginning of the year.

select count (*) as TotalEvents, useridentity.username, eventname
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z' 
and useridentity.type = 'IAMUser'
group by useridentity.username, eventname
order by TotalEvents desc;

The results will show the total activities initiated by each IAM user and the eventname for those activities.

Like the Console sign-in activity query in the previous section, this query could be modified to filter the activity to view only events that occurred outside of the known network or after hours.

Root activity

Another useful query is to understand how the root account and credentials are being used and which activities are being performed by root.

The following query will look at the top events initiated by root from the beginning of the year. It will show whether these were direct root activities or whether they were invoked by an AWS service (and, if so, which one) to perform an activity.

select count (*) as TotalEvents, eventname, useridentity.invokedby
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z' 
and useridentity.type = 'Root'
group by useridentity.username, eventname, useridentity.invokedby
order by TotalEvents desc;

Summary

 AWS CloudTrail and Amazon Athena are a powerful combination that can help organizations better understand the operations, governance, and security of assets and resources in their AWS accounts without a significant investment of time and resources.


About the Authors

 

Sai Sriparasa is a consultant with AWS Professional Services. He works with our customers to provide strategic and tactical big data solutions with an emphasis on automation, operations & security on AWS. In his spare time, he follows sports and current affairs.

 

 

 

Bob O’Dell is a Sr. Product Manager for AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of AWS accounts. Bob enjoys working with customers to understand how CloudTrail can meet their needs and continue to be an integral part of their solutions going forward. In his spare time, he enjoys spending time with HRB exploring the new world of yoga and adventuring through the Pacific Northwest.


Related

Analyzing Data in S3 using Amazon Athena

Wednesday’s security advisories

Post Syndicated from ris original https://lwn.net/Articles/715261/rss

CentOS has updated firefox (C7; C6; C5: multiple vulnerabilities).

Debian has updated tomcat7
(regression in previous update) and tomcat8
(regression in previous update).

Gentoo has updated archive-tar-minitar (file overwrites) and ghostscript-gpl (multiple vulnerabilities).

openSUSE has updated profanity
(42.2, 42.1: user impersonation).

SUSE has updated php7 (SLE12: multiple vulnerabilities).

Ubuntu has updated kernel (14.04:
three vulnerabilities), linux, linux-raspi2
(16.10: three vulnerabilities), linux,
linux-snapdragon
(16.04: multiple vulnerabilities), linux, linux-ti-omap4 (12.04: three
vulnerabilities), linux-lts-trusty (12.04:
three vulnerabilities), linux-lts-xenial
(14.04: multiple vulnerabilities), and tcpdump (multiple vulnerabilities).

Extending AWS CodeBuild with Custom Build Environments

Post Syndicated from John Pignata original https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments/

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Java, Ruby, Python, Go, Node.js, Android, and Docker. It can be extended through the use of custom build environments to support many more.

Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon EC2 Container Registry (ECR), and reference it in the project configuration. When building your application, CodeBuild will retrieve the Docker image from the container registry specified in the project configuration and use the environment to compile your source code, run your tests, and package your application.

In this post, we’ll create a build environment for PHP applications and walk through the steps to configure CodeBuild to use this environment.

Requirements

In order to follow this tutorial and build the Docker container image, you need to have the Docker platform, the AWS Command Line Interface, and Git installed.

Create the demo resources

To begin, we’ll clone codebuild-images from GitHub. It contains an AWS CloudFormation template that we’ll use to create resources for our demo: a source code repository in AWS CodeCommit and a Docker image repository in Amazon ECR. The repository also includes PHP sample code and tests that we’ll use to demonstrate our custom build environment.

  1. Clone the Git repository:
    git clone https://github.com/awslabs/codebuild-images.git
    cd codebuild-images

  2. Create the CloudFormation stack using the template.yml file. You can use the CloudFormation console to create the stack or you can use the AWS Command Line Interface:
    aws cloudformation create-stack \
     --stack-name codebuild-php \
     --parameters ParameterKey=EnvName,ParameterValue=php \
     --template-body file://template.yml > /dev/null && \
    aws cloudformation wait stack-create-complete \
     --stack-name codebuild-php && \
    aws cloudformation describe-stacks \
     --stack-name codebuild-php \
     --output table \
     --query Stacks[0].Outputs

After the stack has been created, CloudFormation will return two outputs:

  • BuildImageRepositoryUri: the URI of the Docker repository that will host our build environment image.
  • SourceCodeRepositoryCloneUrl: the clone URL of the Git repository that will host our sample PHP code.

Build and push the Docker image

Docker images are specified using a Dockerfile, which contains the instructions for assembling the image. The Dockerfile included in the PHP build environment contains these instructions:

FROM php:7

ARG composer_checksum=55d6ead61b29c7bdee5cccfb50076874187bd9f21f65d8991d46ec5cc90518f447387fb9f76ebae1fbbacf329e583e30
ARG composer_url=https://raw.githubusercontent.com/composer/getcomposer.org/ba0141a67b9bd1733409b71c28973f7901db201d/web/installer

ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH=$PATH:vendor/bin

RUN apt-get update && apt-get install -y --no-install-recommends \
      curl \
      git \
      python-dev \
      python-pip \
      zlib1g-dev \
 && pip install awscli \
 && docker-php-ext-install zip \
 && curl -o installer "$composer_url" \
 && echo "$composer_checksum *installer" | shasum –c –a 384 \
 && php installer --install-dir=/usr/local/bin --filename=composer \
 && rm -rf /var/lib/apt/lists/*

This Dockerfile inherits all of the instructions from the official PHP Docker image, which installs the PHP runtime. On top of that base image, the build process will install Python, Git, the AWS CLI, and Composer, a dependency management tool for PHP. We’ve installed the AWS CLI and Git as tools we can use during builds. For example, using the AWS CLI, we could trigger a notification from Amazon Simple Notification Service (SNS) when a build is complete or we could use Git to create a new tag to mark a successful build. Finally, the build process cleans up files created by the packaging tools, as recommended in Best practices for writing Dockerfiles.

Next, we’ll build and push the custom build environment.

  1. Provide authentication details for our registry to the local Docker engine by executing the output of the login helper provided by the AWS CLI:
    $(aws ecr get-login)
    

  2. Build and push the Docker image. We’ll use the repository URI returned in the CloudFormation stack output (BuildImageRepositoryUri) as the image tag:
    cd php
    docker build -t [BuildImageRepositoryUri] .
    docker push [BuildImageRepositoryUri]

After running these commands, your Docker image is pushed into Amazon ECR and ready to build your project.

Configure the Git repository

The repository we cloned includes a small PHP sample that we can use to test our PHP build environment. The sample function converts Roman numerals to Arabic numerals. The repository also includes a sample test to exercise this function. The sample also includes a YAML file called a build spec that contains commands and related settings that CodeBuild uses to run a build:

version: 0.1
phases:
  pre_build:
    commands:
      - composer install
  build:
    commands:
      - phpunit tests

This build spec configures CodeBuild to run two commands during the build:

  • composer install, during the pre_build phase, to install the dependencies defined for the project.
  • phpunit tests, during the build phase, to run the sample’s unit tests.

We will push the sample application to the CodeCommit repo created by the CloudFormation stack. You’ll need to grant your IAM user the required level of access to the AWS services required for CodeCommit and you’ll need to configure your Git client with the appropriate credentials. See Setup for HTTPS Users Using Git Credentials in the CodeCommit documentation for detailed steps.

We’re going to initialize a Git repository for our sample, configure our origin, and push the sample to the master branch in CodeCommit.

  1. Initialize a new Git repository in the sample directory:
    cd sample
    git init

  2. Add and commit the sample files to the repository:
    git add .
    git commit -m "Initial commit"

  3. Configure the git remote and push the sample to it. We’ll use the repository clone URL returned in the CloudFormation stack output (SourceCodeRepositoryCloneUrl) as the remote URL:
    git remote add origin [SourceCodeRepositoryCloneUrl]
    git push origin master

Now that our sample application has been pushed into source control and our build environment image has been pushed into our Docker registry, we’re ready to create a CodeBuild project and start our first build.

Configure the CodeBuild project

In this section, we’ll walk through the steps for configuring CodeBuild to use the custom build environment.

Screenshot of the build configuration

  1. In the AWS Management Console, open the AWS CodeBuild console, and then choose Create project.
  2. In Project name, type php-demo.
  3. From Source provider, choose AWS CodeCommit.  From Repository, choose codebuild-sample-php.
  4. In Environment image, select Specify a Docker image. From Custom image type, choose Amazon ECR. From Amazon ECR Repository, choose codebuild/php.  From Amazon ECR image, choose latest.
  5. In Build specification, select Use the buildspec.yml in the source code root directory.
  6. In Artifacts type, choose No artifacts.
  7. Choose Continue and then choose Save and Build.
  8. On the next page, from Branch, choose master and then choose Start build.

CodeBuild will pull the build environment image from Amazon ECR and use it to test our application. CodeBuild will show us the status of each build step and the last 20 lines of log messages generated by the build process, and it will provide a link to Amazon CloudWatch Logs for more debugging output.

Screenshot of the build output
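
Builds can also be started and monitored outside the console once the project exists, which is useful when wiring CodeBuild into scripts or a pipeline. The following boto3 sketch assumes only the php-demo project name configured above.

import time
import boto3

codebuild = boto3.client('codebuild')

# Start a build of the php-demo project configured above
build_id = codebuild.start_build(projectName='php-demo')['build']['id']

# Poll until the build leaves the IN_PROGRESS state, then report the result
while True:
    status = codebuild.batch_get_builds(ids=[build_id])['builds'][0]['buildStatus']
    if status != 'IN_PROGRESS':
        break
    time.sleep(10)

print('Build %s finished with status %s' % (build_id, status))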

Summary

CodeBuild supports a number of platforms and languages out of the box. By using custom build environments, it can be extended to other runtimes. In this post, we built a PHP environment and demonstrated how to use it to test PHP applications.

We’re excited to see how customers extend and use CodeBuild to enable continuous integration and continuous delivery for their applications. Please leave questions or suggestions in the comments or share what you’ve learned extending CodeBuild for your own projects.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/714499/rss

CentOS has updated java-1.7.0-openjdk (C7; C6; C5: multiple vulnerabilities).

Debian has updated tomcat7 (denial of service), tomcat8 (denial of service), and vim (buffer overflow).

Debian-LTS has updated tomcat7 (denial of service).

Fedora has updated bind (F25:
denial of service), kernel (F25; F24: two vulnerabilities), netpbm (F25: three vulnerabilities), tcpdump (F25: multiple vulnerabilities), vim (F25: buffer overflow), and w3m (F25: unspecified).

Gentoo has updated openssl (multiple vulnerabilities) and virtualbox (multiple vulnerabilities).

openSUSE has updated kernel (42.2; 42.1: multiple vulnerabilities).

Oracle has updated java-1.7.0-openjdk (OL7; OL6; OL5: multiple vulnerabilities).

Tuesday’s security advisories

Post Syndicated from ris original https://lwn.net/Articles/713877/rss

Debian-LTS has updated tiff (can’t write files).

Fedora has updated kernel (F25; F24:
denial of service), moodle (F25: multiple vulnerabilities), and phpMyAdmin (F25; F24: multiple vulnerabilities).

Mageia has updated icoutils (multiple vulnerabilities) and irssi-otr (information leak).

openSUSE has updated libgit2
(SPH for SLE12: multiple vulnerabilities) and libressl (42.2, 42.1: local timing attack).

Oracle has updated kernel 4.1.12 (OL7; OL6: multiple vulnerabilities) and ntp (OL7; OL6: multiple vulnerabilities).

SUSE has updated mysql (SOSC5,
SMP2.1, SM2.1, SLE11-SP3,4: multiple vulnerabilities) and kernel (SLERTE12-SP1: multiple vulnerabilities).

Ubuntu has updated nettle
(information leak), squid3 (two
vulnerabilities), firefox (regression in
previous update), and webkit2gtk (16.10,
16.04: multiple vulnerabilities).

Friday’s security updates

Post Syndicated from jake original https://lwn.net/Articles/713554/rss

Arch Linux has updated qt5-webengine (multiple vulnerabilities) and tcpdump (multiple vulnerabilities).

CentOS has updated thunderbird (C7; C6; C5: multiple vulnerabilities).

Debian-LTS has updated ntfs-3g
(privilege escalation) and svgsalamander
(server-side request forgery).

Fedora has updated openldap (F25:
unintended cipher usage from 2015), and wavpack (F25: multiple vulnerabilities).

Mageia has updated openafs
(information leak) and pdns-recursor
(denial of service).

openSUSE has updated java-1_8_0-openjdk (42.2, 42.1: multiple vulnerabilities),
mupdf (42.2; 42.1: three vulnerabilities), phpMyAdmin (42.2, 42.1: multiple vulnerabilities, one from 2015),
and Wireshark (42.2: two denial of service flaws).

Oracle has updated thunderbird (OL7; OL6: multiple vulnerabilities).

Scientific Linux has updated libtiff (SL7&6: multiple vulnerabilities, one from 2015) and thunderbird (multiple vulnerabilities).

Ubuntu has updated kernel (16.10; 14.04;
12.04: multiple vulnerabilities), kernel, linux-raspi2, linux-snapdragon (16.04:
two vulnerabilities), linux-lts-trusty
(12.04: code execution), linux-lts-xenial
(14.04: two vulnerabilities), and tomcat
(14.04, 12.04: regression in previous update).

Friday’s security updates

Post Syndicated from jake original http://lwn.net/Articles/712800/rss

CentOS has updated firefox (C7; C6; C5: multiple vulnerabilities), mysql
(C6: three vulnerabilities), squid (C7:
information leak), and squid34 (C6:
information leak).

Debian has updated libxpm (code execution).

Debian-LTS has updated asterisk
(denial of service from 2014), firefox-esr
(multiple vulnerabilities), lcms2 (denial of service), and libxpm (code execution).

Mageia has updated firefox (multiple vulnerabilities),
gstreamer (code execution), and php-phpmailer (two vulnerabilities).

openSUSE has updated apache2
(42.2: denial of service) and gstreamer-0_10-plugins-good (42.1: multiple vulnerabilities).

Red Hat has updated chromium-browser (RHEL6: multiple vulnerabilities) and puppet-swift (OSP10.0: information disclosure).

Slackware has updated mozilla-thunderbird (multiple vulnerabilities).

Security advisories for Tuesday

Post Syndicated from ris original http://lwn.net/Articles/711855/rss

Arch Linux has updated python-crypto (code execution) and python2-crypto (code execution).

CentOS has updated bind (C7; C6; C5: denial of service) and bind97 (C5: denial of service).

Debian-LTS has updated pdns-recursor (code execution).

Fedora has updated bind (F24:
three denial of service flaws), bind99
(F24: three denial of service flaws), and SimGear (F25: file overwrites).

Gentoo has updated file (multiple vulnerabilities), libxml2 (multiple vulnerabilities), miniupnpc (denial of service), pidgin (multiple vulnerabilities), vlc (code execution), and xdelta (code execution).

openSUSE has updated ark (42.2, 42.1; SPH for SLE12: code execution), encfs (42.2, 42.1, 13.2: code execution from
2014), gstreamer-0_10-plugins-bad (13.2:
code execution), gstreamer-0_10-plugins-base (13.2: code
execution), gstreamer-0_10-plugins-good
(13.2: multiple vulnerabilities), gstreamer-plugins-bad (42.1; 13.2:
three vulnerabilities), gstreamer-plugins-base (42.1; 13.2:
code execution), gstreamer-plugins-good (42.1; 13.2:
multiple vulnerabilities), icinga (14.2,
14.1: two vulnerabilities), icoutils (42.2; 42.1; 13.2: multiple vulnerabilities), openjpeg2 (42.2: multiple vulnerabilities), pcsc-lite (42.2, 42.1, 13.2: privilege
escalation), and python-pycrypto (14.2,
14.1, 13.2: denial of service).

Oracle has updated bind (OL7; OL6; OL5: denial of service), bind97 (OL5: denial of service), and docker-engine docker-engine-selinux (OL7; OL6: two vulnerabilities).

Red Hat has updated kernel
(RHEL6.5: code execution).

Scientific Linux has updated bind (SL7; SL5,6:
denial of service) and bind97 (SL5: denial of service).