Tag Archives: Compliance

AWS GDPR Data Processing Addendum – Now Part of Service Terms

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-gdpr-data-processing-addendum/

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA, which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that are using cloud services to process personal data will need to have a data processing agreement in place with their cloud services provider if they are to comply with GDPR. As early as April 2017, AWS announced that it had a GDPR-ready DPA available for its customers, meaning we started offering our GDPR DPA over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online Service Terms, there is no extra engagement needed by our customers and partners to be compliant with the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey towards GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center at: https://aws.amazon.com/compliance/gdpr-center/

-Chad

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

The Software Freedom Conservancy on Tesla’s GPL compliance

Post Syndicated from corbet original https://lwn.net/Articles/754919/rss

The Software Freedom Conservancy has put out a blog posting on the history and current status of Tesla’s GPL compliance issues. “We’re thus glad that, this week, Tesla has acted publicly regarding its current GPL violations and has announced that they’ve taken their first steps toward compliance. While Tesla acknowledges that they still have more work to do, their recent actions show progress toward compliance and a commitment to getting all the way there.”

Pirate IPTV Service Gave Customer Details to Premier League, But What’s the Risk?

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-service-gave-customer-details-to-premier-league-but-whats-the-risk-180515/

In a report last weekend, we documented what appear to be the final days of pirate IPTV provider Ace Hosting.

From information provided by several sources including official liquidation documents, it became clear that a previously successful and profitable Ace had succumbed to pressure from the Premier League, which accused the service of copyright infringement.

The company had considerable funds in the bank – £255,472.00 to be exact – but it also had debts of £717,278.84, including £260,000 owed to HMRC and £100,000 to the Premier League as part of a settlement agreement.

Information received by TF late Sunday suggested that £100K was the tip of the iceberg as far as the Premier League was concerned and in a statement yesterday, the football outfit confirmed that was the case.

“A renowned pirate of Premier League content to consumers has been forced to liquidate after agreeing to pay £600,000 for breaching the League’s copyright,” the Premier League announced.

“Ace IPTV, run by Craig Driscoll and Ian Isaac, was selling subscriptions to illegal Premier League streams directly to consumers which allowed viewing on a range of devices, including notorious Kodi-type boxes, as well as to smaller resellers in the UK and abroad.”

Sources familiar with the case suggest that while Ace Hosting Limited didn’t have the funds to pay the Premier League the full £600K, Ace’s operators agreed to pay (and have already paid, to some extent at least) what were essentially their own funds to cover amounts above the final £100K, which is due to be paid next year.

But that’s not the only thing that’s been handed over to the Premier League.

“Ace voluntarily disclosed the personal details of their customers, which the League will now review in compliance with data protection legislation. Further investigations will be conducted, and action taken where appropriate,” the Premier League added.

So, the big question now is how exposed Ace’s former subscribers are.

The truth is that only the Premier League knows for sure, but TF has been able to obtain information from several sources indicating that former subscribers probably aren’t the Premier League’s key interest and that, even if they were, the information obtained on them would be of limited use.

According to a source with knowledge of how a system like Ace’s works, there is a separation of data which appears to help, at least to some degree, with subscribers’ privacy.

“The system used to manage accounts and take payment is actually completely separate from the software used to manage streams and the lines themselves. They are never usually even on the same server so are two very different databases,” he told TF.

“So at best the only information that has voluntarily been provided to the [Premier League], is just your email, name and address (assuming you even used real details) and what hosting package or credits you bought.”

While this information is bad enough, the action against Ace is targeted, in that it focuses on the Premier League’s content and how Ace (and therefore its users) infringed on the football outfit’s copyrights. So, proving that subscribers actually watched any Premier League content would be an ideal position but it’s not straightforward, despite the potential for detailed logging.

“The management system contains no history of what you watched, when you watched it, when you signed in and so on. That is all contained in a different database on a different server.

“Because every connection is recorded [on the second server], it can create some two million entries a day and as such most providers either turn off this feature or delete the logs daily, as having so many entries slows down the system used for actual streams,” he explains.

Our source says that this data would likely have been the first to be deleted and is probably “long gone” by now. However, even if the Premier League had obtained it, it’s unlikely they would be able to do much with it due to data protection laws.

“The information was passed to the [Premier League] voluntarily by ACE which means this information has been given from one entity to another without the end users’ consent, not part of the [creditors’ voluntary liquidation] and without a court order to support it. Data Protection right now is taken very seriously in the EU,” he notes.

At this point, it’s probably worth noting that while the word “voluntarily” has been used several times to explain the manner in which Ace handed over its subscribers’ details to the Premier League, the same word can be used to describe the manner in which the £600K settlement amount will be paid.

No one forces someone to pay or hand something over; that’s what the courts are for, and the aim here was to avoid that eventuality.

Other pieces of information culled from various sources suggest that PayPal payment information, limited to amounts only, was also handed over to the Premier League. And, perhaps most importantly (and perhaps predictably) as far as former subscribers are concerned, the football group was more interested in Ace’s upwards supplier chain (the ‘wholesale’ stream suppliers used, for example) than those buying the service.

Finally, while the Premier League is now seeking to send a message to customers that these services are risky to use, it’s difficult to argue with the assertion that it’s unsafe to hand over personal details to an illegal service.

“Ace IPTV’s collapse also highlighted the risk consumers take with their personal data when they sign up to illegal streaming services,” the Premier League notes.

TF spoke with three IPTV providers who all confirmed that they don’t care what names and addresses people use to sign up with and that no checks are carried out to make sure they’re correct. However, one concedes that in order to run as a business, this information has to be requested and once a customer types it in, it’s possible that it could be handed over as part of a settlement.

“I’m not going to tell people to put in dummy details, how can I? It’s up to people to use their common sense. If they’re still worried they should give Sky their money because if our backs are against the wall, what do you think is going to happen?” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Court Orders Pirate IPTV Linker to Shut Down or Face Penalties Up to €1.25m

Post Syndicated from Andy original https://torrentfreak.com/court-orders-pirate-iptv-linker-to-shut-down-or-face-penalties-up-to-e1-25m-180911/

There are few things guaranteed in life. Death, taxes, and lawsuits filed regularly by Dutch anti-piracy outfit BREIN.

One of its most recent targets was Netherlands-based company Leaper Beheer BV, which also traded under the names Flickstore, Dump Die Deal and Live TV Store. BREIN filed a complaint at the Limburg District Court in Maastricht, claiming that Leaper provides access to unlicensed live TV streams and on-demand movies.

The anti-piracy outfit claimed that around 4,000 live channels were on offer, including Fox Sports, movie channels, and commercial and public channels. These could be accessed after the customer made a payment, which granted access to a unique activation code that could be entered into a set-top box.

BREIN told the court that the code returned an .M3U playlist, which was effectively a hyperlink to IPTV channels and more than 1,000 movies being made available without permission from their respective copyright holders. As such, this amounted to a communication to the public in contravention of the EU Copyright Directive, BREIN argued.

In its defense, Leaper said that it effectively provided a convenient link-shortening service for content that could already be found online in other ways. The company argued that it is not a distributor of content itself and did not make available anything that wasn’t already public. The company added that it was completely down to the consumer whether illegal content was viewed or not.

The key question for the Court was whether Leaper did indeed make a new “communication to the public” under the EU Copyright Directive, a standard the Court of Justice of the European Union (CJEU) says should be interpreted in a manner that provides a high level of protection for rightsholders.

The Court took a three-point approach in arriving at its decision.

  • Did Leaper act in a deliberate manner when providing access to copyright content, especially when its intervention provided access to consumers who would not ordinarily have access to that content?
  • Did Leaper communicate the works via a new method to a new audience?
  • Did Leaper have a profit motive when it communicated works to the public?
The Court found that Leaper did communicate works to the public and intervened “with full knowledge of the consequences of its conduct” when it gave its customers access to protected works.

    “Access to [the content] in a different way would be difficult for those customers, if Leaper were not to provide its services in question,” the Court’s decision reads.

    “Leaper reaches an indeterminate number of potential recipients who can take cognizance of the protected works and form a new audience. The purchasers who register with Leaper are to be regarded as recipients who were not taken into account by the rightful claimants when they gave permission for the original communication of their work to the public.”

With that, the Court ordered Leaper to cease and desist from facilitating access to unlicensed streams within 48 hours of the judgment, with non-compliance penalties of 5,000 euros per IPTV subscription sold, link offered, or day exceeded, up to a maximum of one million euros.

    But the Court didn’t stop there.

    “Leaper must submit a statement audited by an accountant, supported by (clear, readable copies of) all relevant documents, within 12 days of notification of this judgment of all the relevant (contact) details of the (person or legal persons) with whom the company has had contact regarding the provision of IPTV subscriptions and/or the provision of hyperlinks to sources where films and (live) broadcasts are evidently offered without the permission of the entitled parties,” the Court ruled.

    Failure to comply with this aspect of the ruling will lead to more penalties of 5,000 euros per day up to a maximum of 250,000 euros. Leaper was also ordered to pay BREIN’s costs of 20,700 euros.

    Describing the people behind Leaper as “crooks” who previously sold media boxes with infringing addons (as previously determined to be illegal in the Filmspeler case), BREIN chief Tim Kuik says that a switch of strategy didn’t help them evade the law.

    “[Leaper] sold a link to consumers that gave access to unauthorized content, i.e. pay-TV channels as well as video-on-demand films and series,” BREIN chief Tim Kuik informs TorrentFreak.

    “They did it for profit and should have checked whether the content was authorized. They did not and in fact were aware the content was unauthorized. Which means they are clearly infringing copyright.

    “This is evident from the CJEU case law in GS Media as well as Filmspeler and The Pirate Bay, aka the Dutch trilogy because the three cases came from the Netherlands, but these rulings are applicable throughout the EU.

    “They just keep at it knowing they’re cheating and we’ll take them to the cleaners,” Kuik concludes.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

    timeShift(GrafanaBuzz, 1w) Issue 44

    Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/05/11/timeshiftgrafanabuzz-1w-issue-44/

Welcome to TimeShift. Grafana v5.1.2 is available and includes an important bug fix for MySQL, plus an update for GDPR compliance. See below for more details and the full release notes.
    Also, KubeCon + CloudNativeCon Europe 2018 videos are now available including talks from members of the Grafana Labs team! Check out these talks below.
    If you would like your article highlighted in our weekly roundup, feel free to send me an email at [email protected]

    Spring 2018 AWS SOC Reports are Now Available with 11 Services Added in Scope

    Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2018-aws-soc-reports-are-now-available-with-11-services-added-in-scope/

Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized based on customer requests. Today, we’re happy to report that 11 services are newly SOC compliant, a 21 percent increase in the last six months.

    With the addition of the following 11 new services, you can now select from a total of 62 SOC-compliant services. To see the full list, go to our Services in Scope by Compliance Program page:

    • Amazon Athena
    • Amazon QuickSight
    • Amazon WorkDocs
    • AWS Batch
    • AWS CodeBuild
    • AWS Config
    • AWS OpsWorks Stacks
    • AWS Snowball
    • AWS Snowball Edge
    • AWS Snowmobile
    • AWS X-Ray

    Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.

    Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.

    Want more AWS Security news? Follow us on Twitter.

    AWS Online Tech Talks – May and Early June 2018

    Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/


Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

    Note – All sessions are free and in Pacific Time.

    Tech talks featured this month:

    Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

    Compute

    May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

    Containers

    May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

    Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

    DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

    Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

    IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

    Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

    Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

    Mobile

    May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

    Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

    Security, Identity, & Compliance

    May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

    Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

    Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.


    How AWS Meets a Physical Separation Requirement with a Logical Separation Approach

    Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/how-aws-meets-a-physical-separation-requirement-with-a-logical-separation-approach/

We have a new resource available to help you meet a requirement for physically separated infrastructure using logical separation in the AWS cloud. Our latest guide, Logical Separation: An Evaluation of the U.S. Department of Defense Cloud Security Requirements for Sensitive Workloads, outlines how AWS meets the U.S. Department of Defense’s (DoD) stringent physical separation requirement by pioneering a three-pronged logical separation approach that leverages virtualization, encryption, and the deployment of compute to dedicated hardware.

This guide helps you understand logical separation in the cloud and demonstrates its advantages over a traditional physical separation model. Embracing this approach can help organizations confidently meet or exceed security requirements found in traditional on-premises environments, while also providing increased security control and flexibility.

    Logical Separation is the second guide in the AWS Government Handbook Series, which examines cybersecurity policy initiatives and identifies best practices.

    If you have questions or want to learn more, contact your account executive or AWS Support.

    [$] Containers and license compliance

    Post Syndicated from jake original https://lwn.net/Articles/752982/rss

Containers are, of course, all the rage these days; in fact, during his 2018 Legal and Licensing Workshop (LLW) talk, Dirk Hohndel said with a grin that he hears “containers may take off”. But, while containers are easy to set up and use, license compliance for containers is “incredibly hard”. He has been spending “way too much time” thinking about container compliance recently and, beyond the standard “let’s go shopping” solution to hard problems, has come up with some ideas. Hohndel is a longtime member of the FOSS community who is now the chief open source officer at VMware, a company that ships some container images.

    CI/CD with Data: Enabling Data Portability in a Software Delivery Pipeline with AWS Developer Tools, Kubernetes, and Portworx

    Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/cicd-with-data-enabling-data-portability-in-a-software-delivery-pipeline-with-aws-developer-tools-kubernetes-and-portworx/

This post is written by Eric Han, Vice President of Product Management at Portworx, and Asif Khan, Solutions Architect.

    Data is the soul of an application. As containers make it easier to package and deploy applications faster, testing plays an even more important role in the reliable delivery of software. Given that all applications have data, development teams want a way to reliably control, move, and test using real application data or, at times, obfuscated data.

For many teams, moving application data through a CI/CD pipeline, while honoring compliance and maintaining separation of concerns, has been a manual task that doesn’t scale. At best, it is limited to a few applications, and is not portable across environments. The goal should be to make running and testing stateful containers (think databases and message buses, where operations are tracked) as easy as running and testing stateless ones (such as web front ends, where they often are not).

    Why is state important in testing scenarios? One reason is that many bugs manifest only when code is tested against real data. For example, we might simply want to test a database schema upgrade but a small synthetic dataset does not exercise the critical, finer corner cases in complex business logic. If we want true end-to-end testing, we need to be able to easily manage our data or state.

    In this blog post, we define a CI/CD pipeline reference architecture that can automate data movement between applications. We also provide the steps to follow to configure the CI/CD pipeline.

     

    Stateful Pipelines: Need for Portable Volumes

As part of continuous integration, testing, and deployment, a team may need to reproduce a bug found in production against a staging setup. Here, the hosting environment consists of a cluster with Kubernetes as the scheduler and Portworx for persistent volumes. The testing workflow is then automated by AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

    Portworx offers Kubernetes storage that can be used to make persistent volumes portable between AWS environments and pipelines. The addition of Portworx to the AWS Developer Tools continuous deployment for Kubernetes reference architecture adds persistent storage and storage orchestration to a Kubernetes cluster. The example uses MongoDB as the demonstration of a stateful application. In practice, the workflow applies to any containerized application such as Cassandra, MySQL, Kafka, and Elasticsearch.
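In practice, the Portworx piece is expressed through ordinary Kubernetes storage objects. The following is a minimal sketch, not taken from the reference architecture itself, of how a MongoDB deployment might consume a Portworx-backed persistent volume; the storage class name, replication factor, and sizes are illustrative:

# Sketch only: names, replication factor, and sizes are illustrative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-mongo-sc
provisioner: kubernetes.io/portworx-volume   # in-tree Portworx provisioner
parameters:
  repl: "2"                                  # replicate each volume across two nodes
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-data
spec:
  storageClassName: portworx-mongo-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:3.6
          volumeMounts:
            - name: data
              mountPath: /data/db            # MongoDB's default data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongo-data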

    Using the reference architecture, a developer calls CodePipeline to trigger a snapshot of the running production MongoDB database. Portworx then creates a block-based, writable snapshot of the MongoDB volume. Meanwhile, the production MongoDB database continues serving end users and is uninterrupted.

    Without the Portworx integrations, a manual process would require an application-level backup of the database instance that is outside of the CI/CD process. For larger databases, this could take hours and impact production. The use of block-based snapshots follows best practices for resilient and non-disruptive backups.

As part of the workflow, CodePipeline deploys a new MongoDB instance for staging onto the Kubernetes cluster and mounts the second Portworx volume that has the data from production. CodePipeline triggers the snapshot of a Portworx volume through an AWS Lambda function, as shown here.
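In pipeline terms, that trigger is a standard Lambda invoke action. A rough sketch of such a stage, written as a fragment of an AWS::CodePipeline::Pipeline definition in CloudFormation, might look like the following; the function name and user parameters are hypothetical, not taken from the reference architecture:

# Fragment of a CloudFormation pipeline definition (Stages list).
- Name: SnapshotProductionData
  Actions:
    - Name: TriggerPortworxSnapshot
      ActionTypeId:
        Category: Invoke
        Owner: AWS
        Provider: Lambda
        Version: "1"
      Configuration:
        FunctionName: portworx-snapshot-handler        # hypothetical Lambda name
        UserParameters: '{"sourceVolume": "mongo-data"}'
      RunOrder: 1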


    AWS Developer Tools with Kubernetes: Integrated Workflow with Portworx

    In the following workflow, a developer is testing changes to a containerized application that calls on MongoDB. The tests are performed against a staging instance of MongoDB. The same workflow applies if changes were on the server side. The original production deployment is scheduled as a Kubernetes deployment object and uses Portworx as the storage for the persistent volume.

    The continuous deployment pipeline runs as follows:

    • Developers integrate bug fix changes into a main development branch that gets merged into a CodeCommit master branch.
    • Amazon CloudWatch triggers the pipeline when code is merged into a master branch of an AWS CodeCommit repository.
    • AWS CodePipeline sends the new revision to AWS CodeBuild, which builds a Docker container image with the build ID.
    • AWS CodeBuild pushes the new Docker container image tagged with the build ID to an Amazon ECR registry.
    • Kubernetes downloads the new container (for the database client) from Amazon ECR and deploys the application (as a pod) and staging MongoDB instance (as a deployment object).
• AWS CodePipeline, through a Lambda function, calls Portworx to snapshot the production MongoDB and deploy a staging instance of MongoDB (see the manifest sketch after this list).
• Portworx provides a snapshot of the production instance as the persistent storage of the staging MongoDB.
  • The MongoDB instance mounts the snapshot.
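One way to express the staging claim described in the last two steps is with the generic Kubernetes volume snapshot API. This is a sketch rather than the exact manifest used in the reference architecture (see the GitHub repository linked below for that); the snapshot name, storage class, and size are illustrative, and Portworx also supports its own snapshot annotations, so the Portworx documentation is the authority on the mechanism actually used:

# Sketch: restore the staging MongoDB volume from a snapshot of production.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-staging-data
spec:
  storageClassName: portworx-mongo-sc
  dataSource:
    name: mongo-prod-snapshot          # snapshot taken by the Lambda step above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi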

    At this point, the staging setup mimics a production environment. Teams can run integration and full end-to-end tests, using partner tooling, without impacting production workloads. The full pipeline is shown here.

     

    Summary

    This reference architecture showcases how development teams can easily move data between production and staging for the purposes of testing. Instead of taking application-specific manual steps, all operations in this CodePipeline architecture are automated and tracked as part of the CI/CD process.

This integrated experience is part of making stateful containers as easy to work with as stateless ones. With AWS CodePipeline driving the CI/CD process, developers can easily deploy stateful containers onto a Kubernetes cluster with Portworx storage and automate data movement within their process.

    The reference architecture and code are available on GitHub:

    ● Reference architecture: https://github.com/portworx/aws-kube-codesuite
    ● Lambda function source code for Portworx additions: https://github.com/portworx/aws-kube-codesuite/blob/master/src/kube-lambda.py

For more information about persistent storage for containers, visit the Portworx website. For more information about CodePipeline, see the AWS CodePipeline User Guide.

    Secure Build with AWS CodeBuild and LayeredInsight

    Post Syndicated from Asif Khan original https://aws.amazon.com/blogs/devops/secure-build-with-aws-codebuild-and-layeredinsight/

This post is written by Asif Awan – Chief Technology Officer of Layered Insight, Subin Mathew – Software Development Manager for AWS CodeBuild, and Asif Khan – Solutions Architect.

    Enterprises adopt containers because they recognize the benefits: speed, agility, portability, and high compute density. They understand how accelerating application delivery and deployment pipelines makes it possible to rapidly slipstream new features to customers. Although the benefits are indisputable, this acceleration raises concerns about security and corporate compliance with software governance. In this blog post, I provide a solution that shows how Layered Insight, the pioneer and global leader in container-native application protection, can be used with seamless application build and delivery pipelines like those available in AWS CodeBuild to address these concerns.

    Layered Insight solutions

    Layered Insight enables organizations to unify DevOps and SecOps by providing complete visibility and control of containerized applications. Using the industry’s first embedded security approach, Layered Insight solves the challenges of container performance and protection by providing accurate insight into container images, adaptive analysis of running containers, and automated enforcement of container behavior.

     

    AWS CodeBuild

    AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.

     

    Problem Definition

    Security and compliance concerns span the lifecycle of application containers. Common concerns include:

    Visibility into the container images. You need to verify the software composition information of the container image to determine whether known vulnerabilities associated with any of the software packages and libraries are included in the container image.

Governance of container images is critical because only certain open source packages/libraries, of specific versions, should be included in the container images. You need mechanisms for blacklisting all container images that include a certain version of a software package/library, or for only allowing open source software that comes with a specific type of license (such as Apache, MIT, GPL, and so on). You need to be able to address challenges such as:

    ·       Defining the process for image compliance policies at the enterprise, department, and group levels.

    ·       Preventing the images that fail the compliance checks from being deployed in critical environments, such as staging, pre-prod, and production.

    Visibility into running container instances is critical, including:

    ·       CPU and memory utilization.

    ·       Security of the build environment.

    ·       All activities (system, network, storage, and application layer) of the application code running in each container instance.

    Protection of running container instances that is:

    ·       Zero-touch to the developers (not an SDK-based approach).

    ·       Zero touch to the DevOps team and doesn’t limit the portability of the containerized application.

·       This protection must retain the option to switch to a different container stack or orchestration layer, or even to a different Container as a Service (CaaS).

    ·       And it must be a fully automated solution to SecOps, so that the SecOps team doesn’t have to manually analyze and define detailed blacklist and whitelist policies.

     

    Solution Details

    In AWS CodeCommit, we have three projects:
●     “Democode” is a simple Java application, with one buildspec to build the app into a Docker container (run by the build-demo-image CodeBuild project), and another to instrument said container (the instrument-image CodeBuild project). The resulting container is stored in the ECR repo javatest as javatest:20180415-layered. This instrumented container is running in the AWS Fargate cluster demo-java-app and can be seen in the Layered Insight runtime console as the javatest application in us-east-1.
●     aws-codebuild-docker-images is a clone of the official aws-codebuild-docker-images repo on GitHub. This CodeCommit project is used by the build-python-builder CodeBuild project to build the Python 3.3.6 CodeBuild image, which is stored in the codebuild-python ECR repo. We then manually instructed the Layered Insight console to instrument the image.
●     scan-java-image contains just a buildspec.yml file. This file is used by the scan-java-image CodeBuild project to instruct Layered Assessment to perform a vulnerability scan of the javatest container image built previously, and then run the scan results through a compliance policy that states there should be no medium vulnerabilities. This build fails, but in this case that is a success: the scan completes successfully, but compliance fails as there are medium-level issues found in the scan.

    This build is performed using the instrumented version of the Python 3.3.6 CodeBuild image, so the activity of the processes running within the build are recorded each time within the LI console.

    Build container image

Create or use a CodeCommit project with your application. To build a container image and store it in Amazon Elastic Container Registry (Amazon ECR), add a buildspec file to the project and create a CodeBuild project.
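If the project does not already have a build-and-push buildspec, it might look roughly like the following; the repository URI is a placeholder, and the ECR login command assumes AWS CLI version 1, which was current at the time of writing:

# Sketch of a build-and-push buildspec; account ID, region, and repo are placeholders.
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR
      - $(aws ecr get-login --no-include-email --region us-east-1)
  build:
    commands:
      - echo Building the container image
      - docker build -t javatest:latest .
      - docker tag javatest:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/javatest:latest
  post_build:
    commands:
      - echo Pushing the container image to Amazon ECR
      - docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/javatest:latest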

    Scan container image

Once the image is built, create a new buildspec, in the same project or a new one, that looks similar to the one below (update the ECR URL as necessary):

    version: 0.2
    phases:
      pre_build:
        commands:
          - echo Pulling down LI Scan API client scripts
          - git clone https://github.com/LayeredInsight/scan-api-example-python.git
          - echo Setting up LI Scan API client
          - cd scan-api-example-python
          - pip install layint_scan_api
          - pip install -r requirements.txt
      build:
        commands:
          - echo Scanning container started on `date`
          - IMAGEID=$(./li_add_image --name <aws-region>.amazonaws.com/javatest:20180415)
          - ./li_wait_for_scan -v --imageid $IMAGEID
          - ./li_run_image_compliance -v --imageid $IMAGEID --policyid PB15260f1acb6b2aa5b597e9d22feffb538256a01fbb4e5a95

Add the buildspec file to the git repo, push it, and then create a CodeBuild project using the instrumented Python 3.3.6 CodeBuild image at <aws-region>.amazonaws.com/codebuild-python:3.3.6-layered. Set the following environment variables in the CodeBuild project:
    ●     LI_APPLICATIONNAME – name of the build to display
    ●     LI_LOCATION – location of the build project to display
    ●     LI_API_KEY – ApiKey:<key-name>:<api-key>
    ●     LI_API_HOST – location of the Layered Insight API service

    Instrument container image

    Next, to instrument the new container image:

1. In the Layered Insight runtime console, ensure that the ECR registry and credentials are defined (click the Setup icon and the ‘+’ sign on the top right of the screen to add a new container registry). Note the name given to the registry in the console, as this needs to be referenced in the li_add_image command in the script below.
    2. Next, add a new buildspec (with a new name) to the CodeCommit project, such as the one shown below. This code will download the Layered Insight runtime client, and use it to instruct the Layered Insight service to instrument the image that was just built:
  version: 0.2
  phases:
    pre_build:
      commands:
        - echo Pulling down LI API Runtime client scripts
        - git clone https://github.com/LayeredInsight/runtime-api-example-python
        - echo Setting up LI API client
        - cd runtime-api-example-python
        - pip install layint-runtime-api
        - pip install -r requirements.txt
    build:
      commands:
        - echo Instrumentation started on `date`
        - ./li_add_image --registry "Javatest ECR" --name IMAGE_NAME:TAG --description "IMAGE DESCRIPTION" --policy "Default Policy" --instrument --wait --verbose
    3. Commit and push the new buildspec file.
    4. Going back to CodeBuild, create a new project, with the same CodeCommit repo, but this time select the new buildspec file. Use a Python 3.3.6 builder – either the AWS or LI Instrumented version.
    5. Click Continue
    6. Click Save
    7. Run the build, again on the master branch.
    8. If everything runs successfully, a new image should appear in the ECR registry with a -layered suffix. This is the instrumented image.

    Run instrumented container image

When the instrumented container is run (in ECS, Fargate, or elsewhere), it will log data back to the Layered Insight runtime console. Its appearance in the console can be modified by setting the LI_APPLICATIONNAME and LI_LOCATION environment variables when running the container.
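For example, if the instrumented image were run on a Kubernetes cluster, those variables could be set in the pod spec as shown below (the image URI and names are placeholders); ECS and Fargate task definitions accept the same variables in the environment section of the container definition:

# Sketch only: image URI, names, and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: javatest
spec:
  containers:
    - name: javatest
      image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/javatest:20180415-layered
      env:
        - name: LI_APPLICATIONNAME
          value: javatest        # application name shown in the Layered Insight console
        - name: LI_LOCATION
          value: us-east-1       # location label shown in the Layered Insight console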

    Conclusion

In this post, we provided the steps needed to embed governance and runtime security into your build pipelines running on AWS CodeBuild using Layered Insight.


    Tips for Success: GDPR Lessons Learned

    Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/tips-for-success-gdpr-lessons-learned/

Security is our top priority at AWS, and from the beginning we have built security into the fabric of our services. With the introduction of GDPR (which becomes enforceable on May 25, 2018), privacy and data protection have become even more ingrained into our security-centered culture. Three weeks ago, well ahead of the deadline, we announced that all AWS services are compliant with GDPR, meaning you can use AWS as a data processor to help solve your GDPR challenges (be sure to visit our GDPR Center for additional information).

    When it comes to GDPR compliance, many customers are progressing nicely and much of the initial trepidation is gone. In my interactions with customers on this topic, a few themes have emerged as universal:

    • GDPR is important. You need to have a plan in place if you process personal data of EU data subjects, not only because it’s good governance, but because GDPR does carry significant penalties for non-compliance.
    • Solving this can be complex, potentially involving a lot of personnel and multiple tools. Your GDPR process will also likely span across disciplines – impacting people, processes, and technology.
    • Each customer is unique, and there are many methodologies around assessing your compliance with GDPR. It’s important to be aware of your own individual business attributes.

    I thought it might be helpful to share some of our own lessons learned. In our experience in solving the GDPR challenge, the following were keys to our success:

    1. Get your senior leadership involved. We have a regular cadence of detailed status conversations about GDPR with our CEO, Andy Jassy. GDPR is high stakes, and the AWS leadership team knows it. If GDPR doesn’t have the attention it needs with the visibility of top management today, it’s time to escalate.
    2. Centralize the GDPR efforts. Driving all work streams centrally is key. This may sound obvious, but managing this in a distributed manner may result in duplicative effort and/or team members moving in a different direction.
    3. The most important single partner in solving GDPR is your legal team. Having non-legal people make assumptions about how to interpret GDPR for your unique environment is both risky and a potential waste of time and resources. You want to avoid analysis paralysis by getting proper legal advice, collaborating on a direction, and then moving forward with the proper urgency.
    4. Collaborate closely with tech leadership. The “process” people in your organization, the ones who already know how to approach governance problems, are typically comfortable jumping right in to GDPR. But technical teams, including data owners, have set up their software for business application. They may not even know what kind of data they are storing, processing, or transferring to other parts of the business. In the GDPR exercise they need to be aware of (or at least help facilitate) the tracking of data and data elements between systems. This isn’t a typical ask for technical teams, so be prepared to educate and to fully understand data flow.
    5. Don’t live by the established checklists. There are multiple methodologies to solving the compliance challenges of GDPR. At AWS, we ended up establishing core requirements, mapped out by data controller and data processor functions and then, in partnership with legal, decided upon a group of projects based on our known current state. Be careful about using a set methodology, tool or questionnaire to govern your efforts. These generic assessments can help educate, but letting them drive or limit your work could lead to missing something that is key to your own compliance. In this sense, a generic, “one size fits all” solution might not be helpful.
    6. Don’t be afraid to challenge prior orthodoxy. Many times we changed course based on new information. You shouldn’t be afraid to scrap an effort if you determine it’s not working. You should also not be afraid to escalate issues to senior leadership when needed. This is an executive issue.
    7. Look for ways to leverage your work beyond this compliance activity. GDPR requires serious effort, but are the results limited to GDPR compliance? Certainly not. You can use GDPR workflows as a way to ensure better governance moving forward. Privacy and security will require work for the foreseeable future, so make your governance program scalable and usable for other purposes.

    One last tip that has made all the difference: think about protecting data subjects and work backwards from there. Customer focus drives us to ask, “what would customers and data subjects want and expect us to do?” Taking GDPR from a pure legal or compliance standpoint may be technically sufficient, but we believe the objectives of security and personal data protection require a more comprehensive view, and you can most effectively shape that view by starting with the individuals GDPR was meant to protect.

    If you would like to find out more about our experiences, as well as how we can help you in your efforts, please reach out to us today.

    -Chad Woolf

    Vice President, AWS Security Assurance

    Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

    [$] A successful defense against a copyright troll

    Post Syndicated from jake original https://lwn.net/Articles/752485/rss

At the 2018 Legal and Licensing Workshop (LLW), which is a yearly gathering of lawyers and technical folks organized by the Free Software Foundation Europe (FSFE), attendees got more details on a recent hearing in a German GPL enforcement case. Marcus von Welser is a lawyer who represented the defendant, Geniatech, in a case that was brought by Patrick McHardy. In the presentation, von Welser was joined by Armijn Hemel, who helped Geniatech in its compliance efforts. The hearing was of interest for a number of reasons, not least because McHardy withdrew his request for an injunction once it became clear that the judge was leaning in favor of the defendants, effectively stopping this case dead in its tracks.

    Announcing the new AWS Certified Security – Specialty exam

    Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-the-new-aws-certified-security-specialty-exam/

    Good news for cloud security experts: following our most popular beta exam ever, the AWS Certified Security – Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.

    About the exam
    The security exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

    The exam validates:

    • An understanding of specialized data classifications and AWS data protection mechanisms.
    • An understanding of data encryption methods and AWS mechanisms to implement them.
    • An understanding of secure Internet protocols and AWS mechanisms to implement them.
    • A working knowledge of AWS security services and features of services to provide a secure production environment.
    • Competency gained from two or more years of production deployment experience using AWS security services and features.
    • Ability to make trade-off decisions with regard to cost, security, and deployment complexity given a set of application requirements.
    • An understanding of security operations and risk.

    Learn more and register >>

    How to prepare
    We have training and other resources to help you prepare for the exam:

    AWS Training (aws.amazon.com/training)

    Additional Resources

    Learn more and register >>

    Please contact us if you have questions about exam registration.

    Good luck!

    Announcing the new AWS Certified Security – Specialty exam

    Post Syndicated from Ozlem Yilmaz original https://aws.amazon.com/blogs/security/announcing-the-new-aws-certified-security-specialty-exam/

    Good news for cloud security experts: the AWS Certified Security — Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.

    About the exam

    The security exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

    The exam validates your understanding of:

    • Specialized data classifications and AWS data protection mechanisms
    • Data encryption methods and AWS mechanisms to implement them
    • Secure Internet protocols and AWS mechanisms to implement them
    • AWS security services and features of services to provide a secure production environment
    • Making tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements
    • Security operations and risk

    How to prepare

    We have training and other resources to help you prepare for the exam.

    AWS Training that includes:

    Additional Resources

    Learn more and register here, and please contact us if you have questions about exam registration.

    Want more AWS Security news? Follow us on Twitter.

    Russia’s Encryption War: 1.8m Google & Amazon IPs Blocked to Silence Telegram

    Post Syndicated from Andy original https://torrentfreak.com/russias-encryption-war-1-8m-google-amazon-ips-blocked-to-silence-telegram-180417/

    The rules in Russia are clear. Entities operating an encrypted messaging service need to register with the authorities. They also need to hand over their encryption keys so that if law enforcement sees fit, users can be spied on.

    Free cross-platform messaging app Telegram isn’t playing ball. An impressive 200,000,000 people used the software in March (including a growing number for piracy purposes) and founder Pavel Durov says he will not compromise their security, despite losing a lawsuit against the Federal Security Service which compels him to do so.

    “Telegram doesn’t have shareholders or advertisers to report to. We don’t do deals with marketers, data miners or government agencies. Since the day we launched in August 2013 we haven’t disclosed a single byte of our users’ private data to third parties,” Durov said.

    “Above all, we at Telegram believe in people. We believe that humans are inherently intelligent and benevolent beings that deserve to be trusted; trusted with freedom to share their thoughts, freedom to communicate privately, freedom to create tools. This philosophy defines everything we do.”

    But by not handing over its keys, Telegram is in trouble with Russia. The FSB says it needs access to Telegram messages to combat terrorism so, in response to its non-compliance, telecoms watchdog Rozcomnadzor filed a lawsuit to degrade Telegram via web-blocking. Last Friday, that process ended in the state’s favor.

    After an 18-minute hearing, a Moscow court gave the go-ahead for Telegram to be banned in Russia. The hearing was scheduled just the day before, giving Telegram little time to prepare. In protest, its lawyers didn’t even turn up to argue the company’s position.

    Instead, Durov took to his VKontakte account to announce that Telegram would take counter-measures.

    “Telegram will use built-in methods to bypass blocks, which do not require actions from users, but 100% availability of the service without a VPN is not guaranteed,” Durov wrote.

    Telegram can appeal the blocking decision but Russian authorities aren’t waiting around for a response. They are clearly prepared to match Durov’s efforts, no matter what the cost.

In instructions sent out yesterday nationwide, Rozcomnadzor ordered ISPs to block Telegram. The response was immediate and massive. Telegram was using both Amazon and Google to provide service to its users so, within hours, huge numbers of IP addresses belonging to both companies were targeted.

    Initially, 655,352 Amazon IP addresses were placed on Russia’s nationwide blacklist. It was later reported that a further 131,000 IP addresses were added to that total. But the Russians were just getting started.

    Servers.ru reports that a further 1,048,574 IP addresses belonging to Google were also targeted Monday. Rozcomnadzor said the court ruling against Telegram compelled it to take whatever action is needed to take Telegram down but with at least 1,834,996 addresses now confirmed blocked, it remains unclear what effect it’s had on the service.

    Friday’s court ruling states that restrictions against Telegram can be lifted provided that the service hands over its encryption keys to the FSB. However, Durov responded by insisting that “confidentiality is not for sale, and human rights should not be compromised because of fear or greed.”

    But of course, money is still part of the Telegram equation. While its business model in terms of privacy stands in stark contrast to that of Facebook, Telegram is also involved in the world’s biggest initial coin offering (ICO). According to media reports, it has raised $1.7 billion in pre-sales thus far.

    This week’s action against Telegram is the latest in Russia’s war on ‘unauthorized’ encryption.

    At the end of March, authorities suggested that around 15 million IP addresses (13.5 million belonging to Amazon) could be blocked to target chat software Zello. While those measures were averted, a further 500 domains belonging to Google were caught in the dragnet.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.