Tag Archives: ccc

Cybersecurity in brief

Post Syndicated from Bozho original https://blog.bozho.net/blog/3063

Over the weekend, an event was held as part of the "Bulgarian Manifesto for Europe" initiative, on the topic "A European Union for Defence and Security and Its Black Sea Dimensions".

Since I could not attend, I recorded a short video explaining what cybersecurity is and what it is not. A five-minute video obviously cannot cover such a complex topic, but the goal was to give a basic idea.

My main thesis is that cybersecurity is not just an active defensive activity. It is a set of many measures, most of them passive: good practices, security policies, and qualified personnel, in both the public and the private sector.

That is because cyberattacks are not only attacks on government systems (e.g. election systems, public registers, institutional websites, etc.) but also attacks on key private companies such as banks and mobile operators. A few years ago, for example, BORICA had a technical problem that brought ATMs and POS terminals to a complete halt across the country. And while ATMs are not all that critical as infrastructure, the power grid certainly is. If software-controlled parts of it are "hit", that can mean power outages (as the Washington Post, among others, has warned). Not to mention equipment used in nuclear energy, which can be damaged by a virus (like the well-known Stuxnet, which significantly delayed the Iranian nuclear program).

Even when there is no real damage, attacks can have a serious reputational effect. In the attacks on institutional websites (including the Central Election Commission) a few years ago, no actual damage was done: the sites were simply unavailable. But the very fact that institutions were attacked on the day of the referendum on electronic voting planted (or watered) the seed of distrust in technology in the electoral process.

And defending against all these attacks is anything but trivial. Security "holes" in all kinds of systems appear constantly (and sometimes we only learn about them when someone exploits them, the so-called 0-day exploits). Watch a few talks from DefCon or CCC (hacker conferences) and you feel like throwing out all your devices, moving to the mountains, digging yourself a hole and living there in peace, far away from all the world's "broken" technology. Things are not quite that bad (mostly because exploiting many of the technical vulnerabilities brings no practical gain), but cyberdefence is still a set of many, many measures: technological, organizational, legal.

If all of this has to be summed up: we need to invest considerably more in information security, both as a state and as businesses. And there is no quick and easy solution to cybersecurity.

I hope the video is interesting (the peculiar lighting makes me look like a "hacker in a basement", which was not the intended effect).

2017-12-27 34c3 day 1

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3373

I have been managing to watch a few talks from 34c3 (schedule, streaming).

The opening keynote by Charlie Stross (one of my favourite authors) was quite interesting, with the observation that corporations can be seen as an early form of artificial intelligence, and with all sorts of interesting consequences following from that; it is worth setting aside some time to watch (I don't know whether he will post it on his blog).

The talk on China's gamified social credit system didn't tell me anything new and wasn't particularly well presented, but it's worth reading up on the situation.

Harald Welte talked about the internet and the BBSes of the old days (in Germany, though), generally all things we used to play with back then. Ivo asked me whether we could put together a similar talk or find a history of what happened in Bulgaria. I thought something like that already existed, but I can't find it; does anyone remember a good history of those times?

The talk on Iran had a little useful information in it, but was mostly not worth it. The talk on Saudi Arabia didn't have much content either.

The "Low Cost Non-Invasive Biomedical Imaging" talk is my favourite so far, and we should get one of those for the lab. It sounds like a technology worth playing with, one that could greatly improve the work of all kinds of doctors.

"Defeating (Not)Petya's Cryptography" had some useful moments.

Once I manage to watch a few more, I'll write about them as well. Anyone who wants to can go watch directly at initLab; that way there will be someone to discuss them with 🙂

Update: "The Ultimate Apollo Guidance Computer Talk" turned out to be great, especially the architecture of the thing, which looks like it was held together with wire and duct tape.

All Systems Go! 2017 Videos Online!

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-videos-online.html

For those living under a rock, the videos from everybody’s favourite Userspace Linux Conference All Systems Go! 2017 are now available online.

All videos

The videos for my own two talks are available here:

  • Synchronizing Images with casync (Slides)

  • Containers without a Container Manager, with systemd (Slides)

Of course, this is the stellar work of the CCC VOC folks, who are hard to beat when it comes to videotaping community conferences.

FSFE: Public Money? Public Code!

Post Syndicated from ris original https://lwn.net/Articles/733604/rss

The Free Software Foundation Europe has joined several organizations in publishing an open letter urging lawmakers to advance legislation requiring that publicly financed software developed for the public sector be made available under a Free and Open Source Software license. “The initial signatories include CCC, EDRi, Free Software Foundation Europe, KDE, Open Knowledge Foundation Germany, openSUSE, Open Source Business Alliance, Open Source Initiative, The Document Foundation, Wikimedia Deutschland, as well as several others; they ask individuals and other organisations to sign the open letter. The open letter will be sent to candidates for the German Parliament election and, during the coming months, until the 2019 EU parliament elections, to other representatives of the EU and EU member states.”

More notes on US-CERTs IOCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/more-notes-on-us-certs-iocs.html

Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT.

IOCs are “indicators of compromise”, things you can look for in order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic.

A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was “white” in color. It’s worth documenting, so that if the police ever show up at a suspected cabin in the woods, they can note that there’s a “white” car parked in front.

But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly.

This is what happens with US-CERT’s IOCs. They list some potentially useful things, but they also list a lot of junk that wastes people’s time, with little to help readers distinguish the useful from the useless.

An example: a few months ago US-CERT published the GRIZZLY STEPPE report. Among other things, it listed IP addresses used by hackers, but gave no indication of which IP addresses would be useful to watch for and which would be useless.

Some of these IP addresses were useful, pointing to servers the group had been using for a long time as command-and-control servers. Other IP addresses were more dubious, such as Tor exit nodes. You aren’t concerned about any specific Tor exit IP address, because it changes randomly and has no lasting relationship to the attackers. If you cared about those Tor IP addresses, what you should be looking for is a dynamically updated list of Tor exit nodes, refreshed daily.
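
As a rough sketch of that approach (the Tor Project's bulk exit list URL is an assumption on my part, and the suspect address is a placeholder; adapt both to your own environment), checking a logged IP against a freshly downloaded exit list might look like this:

# Sketch: check whether an IP seen in your logs is a current Tor exit node.
EXIT_LIST=/tmp/tor-exit-list.txt
curl -s https://check.torproject.org/torbulkexitlist -o "$EXIT_LIST"
SUSPECT_IP="203.0.113.42"   # placeholder address taken from your own logs
if grep -qx "$SUSPECT_IP" "$EXIT_LIST"; then
    echo "$SUSPECT_IP is on today's Tor exit list (probably not attacker-specific)"
else
    echo "$SUSPECT_IP is not a known Tor exit node"
fi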

And finally, they listed IP addresses of Yahoo, because attackers passed data through Yahoo servers. No, it wasn’t because those Yahoo servers had been compromised; it’s just that everyone passes things through them, like email.

A Vermont power-plant blindly dumped all those IP addresses into their sensors. As a consequence, the next morning when an employee checked their Yahoo email, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid.

Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos’s own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but for the same reason you’d note a white getaway car: because it happened, not because you should be on the lookout for it. In context, there’s nothing wrong with noting the file name.

But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”.

Yes, it’s true that viruses use the same filenames as essential Windows files like “svchost.exe”. That is, generally, something you should be aware of. But the fact that CRASHOVERRIDE did this is, by itself, meaningless.

What Dragos Security was actually reporting was that a “svchost.exe” with the file hash 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus; it’s the hash that’s the important IOC. Pulling the filename out of context is just silly.
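
As a hedged sketch of putting that hash to use (the scan root is just an example path; the hash is the one quoted above), you could look for matching files regardless of what they are named:

# Sketch: flag any file whose SHA-1 matches the reported IOC, ignoring the filename.
IOC_SHA1="79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a"
find /mnt/suspect-image -type f -print0 2>/dev/null \
  | xargs -0 sha1sum 2>/dev/null \
  | awk -v ioc="$IOC_SHA1" '$1 == ioc { print "MATCH: " $2 }'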

Luckily, the DHS also provides some of the raw information provided by Dragos. But even then, there are problems: they provide it in formatted form, as HTML, PDF, or Excel documents. This corrupts the original data so that it’s no longer machine readable. For example, their webpage contains the following:

import “pe”
import “hash”

Among the problems is the fact that the quote marks have been altered, probably by Word’s “smart quotes” feature. In other cases, I’ve seen PDF documents confuse the digit 0 and the letter O, as if the raw data had been scanned in from a printed document and OCRed.
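
A hedged sketch of undoing at least the quote mangling before feeding such rules to a scanner (this assumes the rules were copied out into a UTF-8 file named rules.txt; the 0-versus-O confusion cannot be repaired mechanically):

# Sketch: normalize curly quotes and long dashes back to plain ASCII.
sed -e 's/“/"/g' -e 's/”/"/g' \
    -e "s/‘/'/g" -e "s/’/'/g" \
    -e 's/–/-/g' -e 's/—/-/g' \
    rules.txt > rules-clean.txt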

If this were a “threat intel” company, we’d call this snake oil. The US-CERT is using Dragos Security’s reports to promote itself, but it ultimately provides negative value by mutilating the content.

This, ultimately, causes a lot of harm. The press trusted their content. So does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations; I meet them at hacker cons and am fascinated by their stories. But institutionally, they are dumbed down to the same level as these US-CERT reports, with the smart people marginalized.

There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. An example of what not to do is giving power to people like Obama’s cyberczar, Michael Daniel, who once claimed his lack of technical knowledge was a bonus, because it allowed him to see the strategic picture instead of getting distracted by details.

AFL experiments, or please eat your brötli

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/04/afl-experiments-or-please-eat-your.html

When messing around with AFL, you sometimes stumble upon something unexpected or amusing. Say,
having the fuzzer spontaneously synthesize JPEG files,
come up with non-trivial XML syntax,
or discover SQL semantics.

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs when given identical input data, or whenever a library produces different outputs when asked to encode or decode the same data several times in a row.
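
For flavour, here is a minimal shell sketch of the same idea applied after the fact: replaying a fuzzer-generated corpus through two independent gzip implementations and flagging any input on which their outputs diverge. The tool names and corpus path are assumptions, and a real AFL equivalence harness would be a small C program calling abort(); this only illustrates the comparison.

# Sketch: differential check over a corpus directory, assuming gzip and pigz are installed.
for f in findings/queue/*; do
    a=$(gzip -dc "$f" 2>/dev/null | sha1sum | cut -d' ' -f1)
    b=$(pigz -dc "$f" 2>/dev/null | sha1sum | cut -d' ' -f1)
    if [ "$a" != "$b" ]; then
        echo "Divergence on $f: gzip=$a pigz=$b"
    fi
done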

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a
bunch of fairly rudimentary flaws in common bignum libraries,
with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in
IJG jpeg and other widely-used image processing libraries, leaking
data across web origins.

In one of my recent experiments, I decided to fuzz
brotli, an innovative compression library used in Chrome. But since it’s been
already fuzzed for many CPU-years, I wanted to do it with a twist:
stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful
target for security research, because decompression code normally expects well-formed inputs and can be tripped up by malformed ones, whereas compression code is meant to
accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth
poking with a stick every now and then.
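
As a rough sketch of such a setup (the seed and output directory names and the compress_target binary are placeholders for an instrumented compression driver; the afl-fuzz options shown are the standard ones):

# Sketch: fuzz the compression side of a codec with AFL, starting from a one-byte seed.
mkdir -p seeds findings
printf '0' > seeds/min
afl-fuzz -i seeds -o findings -- ./compress_target @@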

In this case, the library held up admirably – save for a handful of computationally intensive plaintext inputs
(that are now easy to spot due to the recent improvements to AFL).
But the output corpus synthesized by AFL, after being seeded with just a single file containing “0”, featured quite a few peculiar finds:

  • Strings that looked like viable bits of HTML or XML:
    <META HTTP-AAA IDEAAAA,
    DATA="IIA DATA="IIA DATA="IIADATA="IIA,
    </TD>.

  • Non-trivial numerical constants:
    1000,1000,0000000e+000000,
    0,000 0,000 0,0000 0x600,
    0000,$000: 0000,$000:00000000000000.

  • Nonsensical but undeniably English sentences:
    them with them m with them with themselves,
    in the fix the in the pin th in the tin,
    amassize the the in the in the [email protected] in,
    he the themes where there the where there,
    size at size at the tie.

  • Bogus but semi-legible URLs:
    CcCdc.com/.com/m/ /00.com/.com/m/ /00(0(000000CcCdc.com/.com/.com

  • Snippets of Lisp code:
    )))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))).

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords – and then put them together in various semi-interesting ways. Not bad.
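
To get an intuition for that dictionary effect, one hedged experiment (assuming the brotli command-line tool is installed; exact sizes will vary by version) is to compare how well it compresses a phrase full of common web words against the same number of random bytes:

# Sketch: web-ish text should compress noticeably better than random bytes of equal length.
printf 'The information about the Creative Commons license' > common.txt
head -c 50 /dev/urandom > random.bin
brotli -f -k -o common.txt.br common.txt
brotli -f -k -o random.bin.br random.bin
ls -l common.txt.br random.bin.br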

Extending AWS CodeBuild with Custom Build Environments

Post Syndicated from John Pignata original https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments/

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Java, Ruby, Python, Go, Node.js, Android, and Docker. It can be extended through the use of custom build environments to support many more.

Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon EC2 Container Registry (ECR), and reference it in the project configuration. When building your application, CodeBuild will retrieve the Docker image from the container registry specified in the project configuration and use the environment to compile your source code, run your tests, and package your application.

In this post, we’ll create a build environment for PHP applications and walk through the steps to configure CodeBuild to use this environment.

Requirements

In order to follow this tutorial and build the Docker container image, you need to have the Docker platform, the AWS Command Line Interface, and Git installed.

Create the demo resources

To begin, we’ll clone codebuild-images from GitHub. It contains an AWS CloudFormation template that we’ll use to create resources for our demo: a source code repository in AWS CodeCommit and a Docker image repository in Amazon ECR. The repository also includes PHP sample code and tests that we’ll use to demonstrate our custom build environment.

  1. Clone the Git repository:
    git clone https://github.com/awslabs/codebuild-images.git
    cd codebuild-images

  2. Create the CloudFormation stack using the template.yml file. You can use the CloudFormation console to create the stack or you can use the AWS Command Line Interface:
    aws cloudformation create-stack \
     --stack-name codebuild-php \
     --parameters ParameterKey=EnvName,ParameterValue=php \
     --template-body file://template.yml > /dev/null && \
    aws cloudformation wait stack-create-complete \
     --stack-name codebuild-php && \
    aws cloudformation describe-stacks \
     --stack-name codebuild-php \
     --output table \
     --query Stacks[0].Outputs

After the stack has been created, CloudFormation will return two outputs:

  • BuildImageRepositoryUri: the URI of the Docker repository that will host our build environment image.
  • SourceCodeRepositoryCloneUrl: the clone URL of the Git repository that will host our sample PHP code.

Build and push the Docker image

Docker images are specified using a Dockerfile, which contains the instructions for assembling the image. The Dockerfile included in the PHP build environment contains these instructions:

FROM php:7

ARG composer_checksum=55d6ead61b29c7bdee5cccfb50076874187bd9f21f65d8991d46ec5cc90518f447387fb9f76ebae1fbbacf329e583e30
ARG composer_url=https://raw.githubusercontent.com/composer/getcomposer.org/ba0141a67b9bd1733409b71c28973f7901db201d/web/installer

ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH=$PATH:vendor/bin

RUN apt-get update && apt-get install -y --no-install-recommends \
      curl \
      git \
      python-dev \
      python-pip \
      zlib1g-dev \
 && pip install awscli \
 && docker-php-ext-install zip \
 && curl -o installer "$composer_url" \
 && echo "$composer_checksum *installer" | shasum -c -a 384 \
 && php installer --install-dir=/usr/local/bin --filename=composer \
 && rm -rf /var/lib/apt/lists/*

This Dockerfile inherits all of the instructions from the official PHP Docker image, which installs the PHP runtime. On top of that base image, the build process will install Python, Git, the AWS CLI, and Composer, a dependency management tool for PHP. We’ve installed the AWS CLI and Git as tools we can use during builds. For example, using the AWS CLI, we could trigger a notification from Amazon Simple Notification Service (SNS) when a build is complete or we could use Git to create a new tag to mark a successful build. Finally, the build process cleans up files created by the packaging tools, as recommended in Best practices for writing Dockerfiles.
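
As a small illustration of that kind of in-build scripting (the topic ARN is a placeholder; you would create the topic yourself and grant the build role sns:Publish on it), a post-build notification could be as simple as:

# Sketch: notify an SNS topic from within a build using the preinstalled AWS CLI.
aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:<account-id>:build-notifications \
    --message "php-demo build finished"

The same idea applies to tagging a successful build with Git: any tool baked into the image is available to every build phase.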

Next, we’ll build and push the custom build environment.

  1. Provide authentication details for our registry to the local Docker engine by executing the output of the login helper provided by the AWS CLI:
    $(aws ecr get-login)
    

  2. Build and push the Docker image. We’ll use the repository URI returned in the CloudFormation stack output (BuildImageRepositoryUri) as the image tag:
    cd php
    docker build -t [BuildImageRepositoryUri] .
    docker push [BuildImageRepositoryUri]

After running these commands, your Docker image is pushed into Amazon ECR and ready to build your project.

Configure the Git repository

The repository we cloned includes a small PHP sample that we can use to test our PHP build environment. The sample function converts Roman numerals to Arabic numerals. The repository also includes a sample test to exercise this function. The sample also includes a YAML file called a build spec that contains commands and related settings that CodeBuild uses to run a build:

version: 0.1
phases:
  pre_build:
    commands:
      - composer install
  build:
    commands:
      - phpunit tests

This build spec configures CodeBuild to run two commands during the build: composer install in the pre_build phase, to install the sample's dependencies, and phpunit tests in the build phase, to run the unit tests.

We will push the sample application to the CodeCommit repo created by the CloudFormation stack. You’ll need to grant your IAM user the required level of access to the AWS services required for CodeCommit and you’ll need to configure your Git client with the appropriate credentials. See Setup for HTTPS Users Using Git Credentials in the CodeCommit documentation for detailed steps.

We’re going to initialize a Git repository for our sample, configure our origin, and push the sample to the master branch in CodeCommit.

  1. Initialize a new Git repository in the sample directory:
    cd sample
    git init

  2. Add and commit the sample files to the repository:
    git add .
    git commit -m "Initial commit"

  3. Configure the git remote and push the sample to it. We’ll use the repository clone URL returned in the CloudFormation stack output (SourceCodeRepositoryCloneUrl) as the remote URL:
    git remote add origin [SourceCodeRepositoryCloneUrl]
    git push origin master

Now that our sample application has been pushed into source control and our build environment image has been pushed into our Docker registry, we’re ready to create a CodeBuild project and start our first build.

Configure the CodeBuild project

In this section, we’ll walk through the steps for configuring CodeBuild to use the custom build environment.

Screenshot of the build configuration

  1. In the AWS Management Console, open the AWS CodeBuild console, and then choose Create project.
  2. In Project name, type php-demo.
  3. From Source provider, choose AWS CodeCommit.  From Repository, choose codebuild-sample-php.
  4. In Environment image, select Specify a Docker image. From Custom image type, choose Amazon ECR. From Amazon ECR Repository, choose codebuild/php.  From Amazon ECR image, choose latest.
  5. In Build specification, select Use the buildspec.yml in the source code root directory.
  6. In Artifacts type, choose No artifacts.
  7. Choose Continue and then choose Save and Build.
  8. On the next page, from Branch, choose master and then choose Start build.

CodeBuild will pull the build environment image from Amazon ECR and use it to test our application. CodeBuild will show us the status of each build step, the last 20 lines of log messages generated by the build process, and provide a link to Amazon CloudWatch Logs for more debugging output.

Screenshot of the build output

Summary

CodeBuild supports a number of platforms and languages out of the box. By using custom build environments, it can be extended to other runtimes. In this post, we built a PHP environment and demonstrated how to use it to test PHP applications.

We’re excited to see how customers extend and use CodeBuild to enable continuous integration and continuous delivery for their applications. Please leave questions or suggestions in the comments or share what you’ve learned extending CodeBuild for your own projects.

Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/

Thanks to my colleague Stas Vonholsky  for a great blog on managing secrets with Amazon ECS applications.

—–

As containerized applications and microservice-oriented architectures become more popular, managing secrets, such as a password to access an application database, becomes more challenging and critical.

Some examples of the challenges include:

  • Support for various access patterns across container environments such as dev, test, and prod
  • Isolated access to secrets on a container/application level rather than at the host level
  • Multiple decoupled services with their own needs for access, both as services and as clients of other services

This post focuses on newly released features that support further improvements to secret management for containerized applications running on Amazon ECS. My colleague, Matthew McClean, also published an excellent post on the AWS Security Blog, How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker, which discusses some of the limitations of passing and storing secrets with container parameter variables.

Most secret management tools provide the following functionality:

  • Highly secured storage system
  • Central management capabilities
  • Secure authorization and authentication mechanisms
  • Integration with key management and encryption providers
  • Secure introduction mechanisms for access
  • Auditing
  • Secret rotation and revocation

Amazon EC2 Systems Manager Parameter Store

Parameter Store is a feature of Amazon EC2 Systems Manager. It provides a centralized, encrypted store for sensitive information and has many advantages when combined with other capabilities of Systems Manager, such as Run Command and State Manager. The service is fully managed, highly available, and highly secured.

Because Parameter Store is accessible using the Systems Manager API, AWS CLI, and AWS SDKs, you can also use it as a generic secret management store. Secrets can be easily rotated and revoked. Parameter Store is integrated with AWS KMS so that specific parameters can be encrypted at rest with the default or custom KMS key. Importing KMS keys enables you to use your own keys to encrypt sensitive data.
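
For instance, rotating a secret can be as simple as overwriting the parameter with a new value (a sketch using the parameter name and key placeholder from the walkthrough later in this post):

# Sketch: rotate a secret by overwriting the existing SecureString parameter.
aws ssm put-parameter --name prod.app1.db-pass --value "<new-password>" \
    --type SecureString --key-id "<key-id-for-prod-app1-key>" --overwrite --region us-east-1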

Access to Parameter Store is enabled by IAM policies and supports resource level permissions for access. An IAM policy that grants permissions to specific parameters or a namespace can be used to limit access to these parameters. CloudTrail logs, if enabled for the service, record any attempt to access a parameter.

While Amazon S3 has many of the above features and can also be used to implement a central secret store, Parameter Store has the following added advantages:

  • Easy creation of namespaces to support different stages of the application lifecycle.
  • KMS integration that abstracts parameter encryption from the application while requiring the instance or container to have access to the KMS key and for the decryption to take place locally in memory.
  • Stored history about parameter changes.
  • A service that can be controlled separately from S3, which is likely used for many other applications.
  • A configuration data store, reducing overhead from implementing multiple systems.
  • No usage costs.

Note: At the time of publication, Systems Manager doesn’t support VPC private endpoint functionality. To enforce stricter access to a Parameter Store endpoint from a private VPC, use a NAT gateway with a set Elastic IP address together with IAM policy conditions that restrict parameter access to a limited set of IP addresses.

IAM roles for tasks

With IAM roles for Amazon ECS tasks, you can specify an IAM role to be used by the containers in a task. Applications interacting with AWS services must sign their API requests with AWS credentials. This feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.

Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance role, you can associate an IAM role with an ECS task definition or the RunTask API operation. For more information, see IAM Roles for Tasks.

You can use IAM roles for tasks to securely introduce and authenticate the application or container with the centralized Parameter Store. Access to the secret manager should include features such as:

  • Limited TTL for credentials used
  • Granular authorization policies
  • An ID to track the requests in the logs of the central secret manager
  • Integration support with the scheduler that could map between the container or task deployed and the relevant access privileges

IAM roles for tasks support this use case well, as the role credentials can be accessed only from within the container for which the role is defined. The role exposes temporary credentials and these are rotated automatically. Granular IAM policies are supported with optional conditions about source instances, source IP addresses, time of day, and other options.

The source IAM role can be identified in the CloudTrail logs based on a unique Amazon Resource Name and the access permissions can be revoked immediately at any time with the IAM API or console. As Parameter Store supports resource level permissions, a policy can be created to restrict access to specific keys and namespaces.

Dynamic environment association

In many cases, the container image does not change when moving between environments, which supports immutable deployments and ensures that the results are reproducible. What does change is the configuration: in this context, specifically the secrets. For example, a database and its password might be different in the staging and production environments. There’s still the question of how you point the application at the correct secret. Should it retrieve prod.app1.secret, test.app1.secret, or something else?

One option can be to pass the environment type as an environment variable to the container. The application then concatenates the environment type (prod, test, etc.) with the relative key path and retrieves the relevant secret. In most cases, this leads to a number of separate ECS task definitions.
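
A minimal sketch of that pattern (assuming the container is started with an ENV_TYPE environment variable and uses the parameter naming scheme from this post):

# Sketch: resolve the environment-specific secret at container start-up.
# ENV_TYPE would be "prod", "test", etc., passed in via the task definition.
DB_PASS=$(aws ssm get-parameters --names "${ENV_TYPE}.app1.db-pass" \
    --with-decryption --region us-east-1 \
    --query 'Parameters[0].Value' --output text)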

When you describe the task definition in a CloudFormation template, you could base the entry in the IAM role that provides access to Parameter Store, KMS key, and environment property on a single CloudFormation parameter, such as “environment type.” This approach could support a single task definition type that is based on a generic CloudFormation template.

Walkthrough: Securely access Parameter Store resources with IAM roles for tasks

This walkthrough is configured for the North Virginia region (us-east-1). I recommend using the same region.

Step 1: Create the keys and parameters

First, create the following KMS keys with the default security policy to be used to encrypt various parameters:

  • prod-app1 – used to encrypt any secrets for app1.
  • license-code – used to encrypt license-related secrets.
aws kms create-key --description prod-app1 --region us-east-1
aws kms create-key --description license-code --region us-east-1

Note the KeyId property in the output of both commands. You use it throughout the walkthrough to identify the KMS keys.

The following commands create three parameters in Parameter Store:

  • prod.app1.db-pass (encrypted with the prod-app1 KMS key)
  • general.license-code (encrypted with the license-code KMS key)
  • prod.app2.user-name (stored as a standard string without encryption)
aws ssm put-parameter --name prod.app1.db-pass --value "AAAAAAAAAAA" --type SecureString --key-id "<key-id-for-prod-app1-key>" --region us-east-1
aws ssm put-parameter --name general.license-code --value "CCCCCCCCCCC" --type SecureString --key-id "<key-id-for-license-code-key>" --region us-east-1
aws ssm put-parameter --name prod.app2.user-name --value "BBBBBBBBBBB" --type String --region us-east-1

Step 2: Create the IAM role and policies

Now, create a role and an IAM policy to be associated with the ECS task that you create later on.
The trust policy for the IAM role needs to allow the ecs-tasks entity to assume the role.

{
   "Version": "2012-10-17",
   "Statement": [
     {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
         "Service": "ecs-tasks.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
     }
   ]
 }

Save the above policy as a file in the local directory with the name ecs-tasks-trust-policy.json.

aws iam create-role --role-name prod-app1 --assume-role-policy-document file://ecs-tasks-trust-policy.json

The following policy is attached to the role and later associated with the app1 container. Access is granted to the prod.app1.* namespace parameters, the encryption key required to decrypt the prod.app1.db-pass parameter and the license code parameter. The namespace resource permission structure is useful for building various hierarchies (based on environments, applications, etc.).

Make sure to replace <key-id-for-prod-app1-key> with the key ID for the relevant KMS key and <account-id> with your account ID in the following policy.

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "ssm:DescribeParameters"
             ],
             "Resource": "*"
         },
         {
             "Sid": "Stmt1482841904000",
             "Effect": "Allow",
             "Action": [
                 "ssm:GetParameters"
             ],
             "Resource": [
                 "arn:aws:ssm:us-east-1:<account-id>:parameter/prod.app1.*",
                 "arn:aws:ssm:us-east-1:<account-id>:parameter/general.license-code"
             ]
         },
         {
             "Sid": "Stmt1482841948000",
             "Effect": "Allow",
             "Action": [
                 "kms:Decrypt"
             ],
             "Resource": [
                 "arn:aws:kms:us-east-1:<account-id>:key/<key-id-for-prod-app1-key>"
             ]
         }
     ]
 }

Save the above policy as a file in the local directory with the name app1-secret-access.json:

aws iam create-policy --policy-name prod-app1 --policy-document file://app1-secret-access.json

Replace <account-id> with your account ID in the following command:

aws iam attach-role-policy --role-name prod-app1 --policy-arn "arn:aws:iam::<account-id>:policy/prod-app1"

Step 3: Add the testing script to an S3 bucket

Create a file with the script below, name it access-test.sh and add it to an S3 bucket in your account. Make sure the object is publicly accessible and note down the object link, for example https://s3-eu-west-1.amazonaws.com/my-new-blog-bucket/access-test.sh

#!/bin/bash
# This is a simple bash script that is used to test access to the EC2 Parameter Store.
# Install the AWS CLI
apt-get -y install python2.7 curl
curl -O https://bootstrap.pypa.io/get-pip.py
python2.7 get-pip.py
pip install awscli
# Getting region
EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
# Trying to retrieve parameters from the EC2 Parameter Store
APP1_WITH_ENCRYPTION=`aws ssm get-parameters --names prod.app1.db-pass --with-decryption --region $EC2_REGION --output text 2>&1`
APP1_WITHOUT_ENCRYPTION=`aws ssm get-parameters --names prod.app1.db-pass --no-with-decryption --region $EC2_REGION --output text 2>&1`
LICENSE_WITH_ENCRYPTION=`aws ssm get-parameters --names general.license-code --with-decryption --region $EC2_REGION --output text 2>&1`
LICENSE_WITHOUT_ENCRYPTION=`aws ssm get-parameters --names general.license-code --no-with-decryption --region $EC2_REGION --output text 2>&1`
APP2_WITHOUT_ENCRYPTION=`aws ssm get-parameters --names prod.app2.user-name --no-with-decryption --region $EC2_REGION --output text 2>&1`
# The nginx server is started after the script is invoked, preparing folder for HTML.
if [ ! -d /usr/share/nginx/html/ ]; then
mkdir -p /usr/share/nginx/html/;
fi
chmod 755 /usr/share/nginx/html/

# Creating an HTML file to be accessed at http://<public-instance-DNS-name>/ecs.html
cat > /usr/share/nginx/html/ecs.html <<EOF
<!DOCTYPE html>
<html>
<head>
<title>App1</title>
<style>
body {padding: 20px;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
code {white-space: pre-wrap;}
result {background: hsl(220, 80%, 90%);}
</style>
</head>
<body>
<h1>Hi there!</h1>
<p style="padding-bottom: 0.8cm;">Following are the results of different access attempts as experienced by "App1".</p>

<p><b>Access to prod.app1.db-pass:</b><br/>
<pre><code>aws ssm get-parameters --names prod.app1.db-pass --with-decryption</code><br/>
<code><result>$APP1_WITH_ENCRYPTION</result></code><br/>
<code>aws ssm get-parameters --names prod.app1.db-pass --no-with-decryption</code><br/>
<code><result>$APP1_WITHOUT_ENCRYPTION</result></code></pre><br/>
</p>

<p><b>Access to general.license-code:</b><br/>
<pre><code>aws ssm get-parameters --names general.license-code --with-decryption</code><br/>
<code><result>$LICENSE_WITH_ENCRYPTION</result></code><br/>
<code>aws ssm get-parameters --names general.license-code --no-with-decryption</code><br/>
<code><result>$LICENSE_WITHOUT_ENCRYPTION</result></code></pre><br/>
</p>

<p><b>Access to prod.app2.user-name:</b><br/>
<pre><code>aws ssm get-parameters --names prod.app2.user-name --no-with-decryption</code><br/>
<code><result>$APP2_WITHOUT_ENCRYPTION</result></code><br/>
</p>

<p><em>Thanks for visiting</em></p>
</body>
</html>
EOF

Step 4: Create a test cluster

I recommend creating a new ECS test cluster with the latest ECS AMI and ECS agent on the instance. Use the following field values:

  • Cluster name: access-test
  • EC2 instance type: t2.micro
  • Number of instances: 1
  • Key pair: No EC2 key pair is required, unless you’d like to SSH to the instance and explore the running container.
  • VPC: Choose the default VPC. If unsure, you can find the VPC ID with the IP range 172.31.0.0/16 in the Amazon VPC console.
  • Subnets: Pick a subnet in the default VPC.
  • Security group: Create a new security group with CIDR block 0.0.0.0/0 and port 80 for inbound access.

Leave other fields with the default settings.

Create a simple task definition that relies on the public NGINX container and the role that you created for app1. Specify the properties such as the available container resources and port mappings. Note the command option is used to download and invoke a test script that installs the AWS CLI on the container, runs a number of get-parameter commands, and creates an HTML file with the results.

Replace <account-id> with your account ID, <your-S3-URI> with a link to the S3 object created in step 3 in the following commands:

aws ecs register-task-definition --family access-test --task-role-arn "arn:aws:iam::<account-id>:role/prod-app1" --container-definitions name="access-test",image="nginx",portMappings="[{containerPort=80,hostPort=80,protocol=tcp}]",readonlyRootFilesystem=false,cpu=512,memory=490,essential=true,entryPoint="sh,-c",command="\"/bin/sh -c \\\"apt-get update ; apt-get -y install curl ; curl -O <your-S3-URI> ; chmod +x access-test.sh ; ./access-test.sh ; nginx -g 'daemon off;'\\\"\"" --region us-east-1

aws ecs run-task --cluster access-test --task-definition access-test --count 1 --region us-east-1

Verifying access

After the task is in a running state, check the public DNS name of the instance and navigate to the following page:

http://<ec2-instance-public-DNS-name>/ecs.html

You should see the results of running different access tests from the container after a short duration.

If the test results don’t appear immediately, wait a few seconds and refresh the page.
Make sure that inbound traffic for port 80 is allowed on the security group attached to the instance.

The results you see in the static results HTML page should be the same as running the following commands from the container.

prod.app1.db-pass

aws ssm get-parameters --names prod.app1.db-pass --with-decryption --region us-east-1
aws ssm get-parameters --names prod.app1.db-pass --no-with-decryption --region us-east-1

Both commands should work, as the policy provides access to both the parameter and the required KMS key.

general.license-code

aws ssm get-parameters --names general.license-code --no-with-decryption --region us-east-1
aws ssm get-parameters --names general.license-code --with-decryption --region us-east-1

Only the first command with the “no-with-decryption” parameter should work. The policy allows access to the parameter in Parameter Store but there’s no access to the KMS key. The second command should fail with an access denied error.

prod.app2.user-name

aws ssm get-parameters --names prod.app2.user-name --no-with-decryption --region us-east-1

The command should fail with an access denied error, as there are no permissions associated with the namespace for prod.app2.

Finishing up

Remember to delete all resources (such as the KMS keys and EC2 instance), so that you don’t incur charges.

Conclusion

Central secret management is an important aspect of securing containerized environments. By using Parameter Store and task IAM roles, customers can create a central secret management store and a well-integrated access layer that allows applications to access only the keys they need, to restrict access on a container basis, and to further encrypt secrets with custom keys with KMS.

Whether the secret management layer is implemented with Parameter Store, Amazon S3, Amazon DynamoDB, or a solution such as Vault or KeyWhiz, it’s a vital part of the process of managing and accessing secrets.

2016-12-30 33c3 – the rest of the talks

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3333

The Moon and European Space Exploration was essentially a promotional (and quite entertaining) talk by the European Space Agency, in which its director described, at high speed, what they have done and what they plan to do, mainly a base on the Moon as a staging point for exploring other things in space.

Interplanetary Colonization was essentially an overview of existing and likely future technologies for long-range space propulsion.

Dissecting HDMI (which I watched as a recording) was quite enlightening: besides the standard and its problems, it also covered the device they are building, which in combination with voctomix may turn out to be the perfect solution for streaming conferences. We might be able to cut our setup down to far fewer devices, converters, black-magic scalers and so on. They also hand out hardware to anyone who wants to hack on it; I really wish I had the time for that…

An Elevator to the Moon (and back) (recording) was another overview of space elevator technologies and what we could use one for (there was a huge Q&A afterwards). In short, we currently have the technology to build one on the Moon or Mars, but for Earth we still haven't come up with the right material.

Community by Mitch Altman was about hackerspaces, communities and hackers in general; a nice, motivating talk.

The 33C3 Infrastructure Review was not as interesting as in previous years: there weren't many details about the network, the video team couldn't get their laptop connected (much to the amusement of the entire audience), and overall the most interesting part was the pneumatic tube mail network.

At the closing ceremony they poured some liquid nitrogen into a plastic bottle, dropped it into a large tub and covered it with plastic balls. The bang threw them almost up to the ceiling.

(and there is still no idea where 34C3 will be)

(I plan to watch more, especially the one the doctor mentioned in the comments on the last post)

2016-12-29 33c3 – a few talks

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3332

This year I'm watching 33c3 from home, and it seems I'm watching fewer talks. Here's what I've watched so far:
(the processed talks are on media.ccc.de; if they're not there yet, they can be searched for on relive)

What could possibly go wrong with <insert x86 instruction here>? was a description, with demo code, of how the cache (flushing, filling, and so on) can be used to communicate between unrelated processes or even virtual machines that are scheduled on the same processor (i.e. also between cores that share a cache).

How Do I Crack Satellite and Cable Pay TV? was about reverse-engineering the chips that decrypt satellite TV; fairly interesting.

Building a high throughput low-latency PCIe based SDR is by people developing a software-defined radio on a mini PCIe card, which can be used for all kinds of serious purposes (including clock synchronization from GPS). They are about halfway there for now (they haven't yet reached their speed target for the device), and if they manage a decent price, this could turn out to be the ideal device for software GSM/LTE work.

Shut Up and Take My Money! was about how they broke somebody's mobile application that handles customer transactions. It sounded like the people who designed and wrote it ought to be locked up.

What's It Doing Now? was quite an interesting talk about aircraft autopilots, how this kind of automation needs to be watched carefully, and how, in general, things are not as automagical as we'd like. Talks about aviation problems are always interesting 🙂

Nintendo Hacking 2016 described what they broke on the various Nintendo devices, boot loader exploits and all that sort of thing. This was one of the talks that mentioned the glitching technique: briefly feeding a lower voltage so that a given instruction misfires and there's a chance to JMP somewhere into our own code at some point, so that we can dump something.

Where in the World Is Carmen Sandiego? was a talk about breaking the booking systems for airline tickets and travel in general. Deeply sad; they said right at the start that there isn't much hacking in the talk, because things are too broken and too easy to attack. Quick tip: don't post a photo of your boarding pass.

You can -j REJECT but you can not hide: Global scanning of the IPv6 Internet is a DNS-based technique for relatively easy enumeration of all existing IPv6 DNS PTR records, so that most existing servers can be found. Quite clever, and I'll probably test it somewhere 🙂

Tapping into the core described an interesting device for debugging/snooping on Intel processors over USB and probably also over the network, i.e. a debugger that can touch the processor directly. Sounds very promising (the talk mostly looked at this as a way of building rootkits).

Recount 2016: An Uninvited Security Audit of the U.S. Presidential Election was a lot of empty talk about the US elections, about a recount in which they found no problem, and about their broken system in general.

The Untold Story of Edward Snowden's Escape from Hong Kong told of Edward Snowden's two weeks in Hong Kong and the refugees who hid him before he flew out, together with a call for a fundraiser for them.

State of Internet Censorship 2016 was the usual overview, together with the projects used to study it. The gist of the conclusion was that censorship is now legalized almost everywhere it happens, and it is becoming increasingly common for a country to shut off its internet for a while around certain events.

Million Dollar Dissidents and the Rest of Us described the targeted methods various governments use (and who sells them the tools and trains them) for hacking and extracting data from all kinds of devices belonging to people they find inconvenient. Things are fairly advanced, and it is increasingly clear that most of our software is not fit for secure activities.

On Smart Cities, Smart Energy, And Dumb Security was about how broken "smart" electricity meters are and how nobody pays serious attention to the security of these things.

Dissecting modern (3G/4G) cellular modems was an interesting description of existing hardware that can be used as a 3G/4G modem (it's even used in one of the iPhones), and which turned out to run Android/Linux inside (i.e. you could safely say that the iPhone 6, if I'm not mistaken, has a Linux/Android inside it).

In the coming days I'll catch up on a few more from the recordings, plus whatever remains tomorrow, and I'll write again. I really want to watch most of the talks on random number generators and the HDMI talk.

systemd.conf 2015 Summary

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/systemdconf-2015-summary.html

systemd.conf 2015 is Over Now!

Last week our first systemd.conf conference
took place at betahaus, in Berlin, Germany. With almost 100 attendees,
a dense schedule of 23 high-quality talks stuffed into a single track
on just two days, a productive hackfest and numerous consumed
Club-Mates I believe it was quite a success!

If you couldn’t attend the conference, you may watch all talks on our YouTube Channel. The slides are available online, too.

Many photos from the conference are available on the Google Events Page. Enjoy!

I’d specifically like to thank Daniel Mack, Chris Kühl and Nils Magnus
for running the conference, and making sure that it worked out as
smoothly as it did! Thank you very much, you did a fantastic job!

I’d also specifically like to thank the CCC Video Operation
Center
folks for the excellent video coverage of
the conference. Not only did they implement a live-stream for the
entire talks part of the conference, but also cut and uploaded videos
of all talks to our YouTube Channel within the same day (in fact, within a few hours after the talks
finished). That’s quite an impressive feat!

The folks from LinuxTag e.V. put a lot of time and energy in the
organization. It was great to see how well this all worked out!
Excellent work!

(BTW, LinuxTag e.V. and the CCC Video Operation Center folks are
willing to help with the organization of Free Software community
events in Germany (and Europe?). Hence, if you need an entity that can
do the financial work and other stuff for your Free Software project’s
conference, consider pinging LinuxTag, they might be willing to
help. Similar, if you are organizing such an event and are thinking
about providing video coverage, consider pinging the CCC VOC
folks! Both of them get our best recommendations!)

I’d also like to thank our conference sponsors!
Specifically, we’d like to thank our Gold Sponsors Red Hat and
CoreOS for their support. We’d also like to thank our Silver
Sponsor Codethink, and our Bronze Sponsors Pengutronix,
Pantheon, Collabora, Endocode, the Linux Foundation,
Samsung and Travelping, as well as our Cooperation Partners
LinuxTag and kinvolk.io, and our Media Partner Golem.de.

Last but not least I’d really like to thank our speakers and attendees
for presenting and participating in the conference. Of course, the
conference we put together specifically for you, and we really hope
you had as much fun at it as we did!

Thank you all for attending, supporting, and organizing systemd.conf 2015! We are looking forward to seeing you and working with you again at systemd.conf 2016!

Thanks!

systemd for Administrators, Part XVII

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/journalctl.html

It’s that time again, here’s now the seventeenth installment of my ongoing series on systemd for Administrators:

Using the Journal

A while back I already
posted a blog story introducing some
functionality of the journal, and how it is exposed in
systemctl. In this episode I want to explain a few more uses
of the journal, and how you can make it work for you.

If you are wondering what the journal is, here’s an explanation in
a few words to get you up to speed: the journal is a component of systemd,
that captures Syslog messages, Kernel log messages, initial RAM disk
and early boot messages as well as messages written to STDOUT/STDERR
of all services, indexes them and makes this available to the user. It
can be used in parallel, or in place of a traditional syslog daemon,
such as rsyslog or syslog-ng. For more information, see the initial announcement.

The journal has been part of Fedora since F17. With Fedora 18 it
now has grown into a reliable, powerful tool to handle your logs. Note
however, that on F17 and F18 the journal is configured by default to
store logs only in a small ring-buffer in /run/log/journal,
i.e. not persistent. This of course limits its usefulness quite
drastically but is sufficient to show a bit of recent log history in
systemctl status. For Fedora 19, we plan to change this, and
enable persistent logging by default. Then, journal files will be
stored in /var/log/journal and can grow much larger, thus
making the journal a lot more useful.

Enabling Persistency

In the meantime, on F17 or F18, you can enable journald’s persistent storage manually:

# mkdir -p /var/log/journal

After that, it’s a good idea to reboot, to get some useful
structured data into your journal to play with. Oh, and since you have
the journal now, you don’t need syslog anymore (unless having
/var/log/messages as a text file is a necessity for you), so
you can choose to deinstall rsyslog:

# yum remove rsyslog

Basics

Now we are ready to go. The following text shows a lot of features
of systemd 195 as it will be included in Fedora 18[1], so
if your F17 can’t do the tricks you see, please wait for F18. First,
let’s start with some basics. To access the logs of the journal use
the journalctl(1)
tool. To have a first look at the logs, just type in:

# journalctl

If you run this as root you will see all logs generated on the
system, from system components the same way as for logged in
users. The output you will get looks like a pixel-perfect copy of the
traditional /var/log/messages format, but actually has a
couple of improvements over it:

  • Lines of error priority (and higher) will be highlighted red.
  • Lines of notice/warning priority will be highlighted bold.
  • The timestamps are converted into your local time-zone.
  • The output is auto-paged with your pager of choice (defaults to less).
  • This will show all available data, including rotated logs.
  • Between the output of each boot we’ll add a line clarifying that a new boot begins now.

Note that in this blog story I will not actually show you any of
the output this generates, I cut that out for brevity — and to give
you a reason to try it out yourself with a current image for F18’s
development version with systemd 195. But I do hope you get the idea
anyway.

Access Control

Browsing logs this way is already pretty nice. But requiring to be
root sucks of course, even administrators tend to do most of their
work as unprivileged users these days. By default, Journal users can
only watch their own logs, unless they are root or in the adm
group. To make watching system logs more fun, let’s add ourselves to
adm:

# usermod -a -G adm lennart

After logging out and back in as lennart I now have access
to the full journal of the system and all users:

$ journalctl

Live View

If invoked without parameters journalctl will show me the current
log database. Sometimes one needs to watch logs as they grow, where
one previously used tail -f /var/log/messages:

$ journalctl -f

Yes, this does exactly what you expect it to do: it will show you
the last ten logs lines and then wait for changes and show them as
they take place.

Basic Filtering

When invoking journalctl without parameters you’ll see the
whole set of logs, beginning with the oldest message stored. That of
course, can be a lot of data. Much more useful is just viewing the
logs of the current boot:

$ journalctl -b

This will show you only the logs of the current boot, with all the
aforementioned gimmicks applied. But sometimes even this is way too
much data to process. So what about just listing all the real issues
to care about: all messages of priority levels ERROR and worse, from
the current boot:

$ journalctl -b -p err

If you reboot only seldom the -b makes little sense,
filtering based on time is much more useful:

$ journalctl --since=yesterday

And there you go, all log messages from the day before at 00:00 in
the morning until right now. Awesome! Of course, we can combine this with
-p err or a similar match. But humm, we are looking for
something that happened on the 15th of October, or was it the 16th?

$ journalctl --since=2012-10-15 --until="2012-10-16 23:59:59"

Yupp, there we go, we found what we were looking for. But humm, I
noticed that some CGI script in Apache was acting up earlier today,
let’s see what Apache logged at that time:

$ journalctl -u httpd --since=00:00 --until=9:30

Oh, yeah, there we found it. But hey, wasn’t there an issue with
that disk /dev/sdc? Let’s figure out what was going on there:

$ journalctl /dev/sdc

OMG, a disk error![2] Hmm, let’s quickly replace the
disk before we lose data. Done! Next! — Hmm, didn’t I see that the vpnc binary made a booboo? Let’s
check for that:

$ journalctl /usr/sbin/vpnc

Hmm, I don’t get this, this seems to be some weird interaction with
dhclient, let’s see both outputs, interleaved:

$ journalctl /usr/sbin/vpnc /usr/sbin/dhclient

That did it! Found it!
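
By the way, these path-based matches should combine with the time
filters from above just fine; for example, to narrow the vpnc hunt
down to today only (a sketch):

$ journalctl /usr/sbin/vpnc --since=today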

Advanced Filtering

Whew! That was awesome already, but let’s turn this up a
notch. Internally systemd stores each log entry with a set of
implicit meta data. This meta data looks a lot like an
environment block, but is actually a bit more powerful: fields can
carry large, binary values (though this is the exception, and usually
they just contain UTF-8), and a field can have multiple values
assigned (an exception too, usually there is only one value). This
implicit meta data is collected for each and every log message,
without user intervention. The data will be there, waiting to be used
by you. Let's see how this looks:

$ journalctl -o verbose -n
[...]
Tue, 2012-10-23 23:51:38 CEST [s=ac9e9c423355411d87bf0ba1a9b424e8;i=4301;b=5335e9cf5d954633bb99aefc0ec38c25;m=882ee28d2;t=4ccc0f98326e6;x=f21e8b1b0994d7ee]
        PRIORITY=6
        SYSLOG_FACILITY=3
        _MACHINE_ID=a91663387a90b89f185d4e860000001a
        _HOSTNAME=epsilon
        _TRANSPORT=syslog
        SYSLOG_IDENTIFIER=avahi-daemon
        _COMM=avahi-daemon
        _EXE=/usr/sbin/avahi-daemon
        _SYSTEMD_CGROUP=/system/avahi-daemon.service
        _SYSTEMD_UNIT=avahi-daemon.service
        _SELINUX_CONTEXT=system_u:system_r:avahi_t:s0
        _UID=70
        _GID=70
        _CMDLINE=avahi-daemon: registering [epsilon.local]
        MESSAGE=Joining mDNS multicast group on interface wlan0.IPv4 with address 172.31.0.53.
        _BOOT_ID=5335e9cf5d954633bb99aefc0ec38c25
        _PID=27937
        SYSLOG_PID=27937
        _SOURCE_REALTIME_TIMESTAMP=1351029098747042

(I cut out a lot of noise here, I don’t want to make this story
overly long. -n without parameter shows you the last 10 log
entries, but I cut out all but the last.)

With the -o verbose switch we enabled verbose
output. Instead of showing a pixel-perfect copy of classic
/var/log/messages that only includes a minimal subset of
what is available, we now see all the gory details the journal has
about each entry. And it's highly interesting: there is user
credential information, SELinux bits, machine information and
more. For a full list of common, well-known fields, see the man
page.

Now, as it turns out the journal database is indexed by all
of these fields, out-of-the-box! Let’s try this out:

$ journalctl _UID=70

And there you go, this will show all log messages logged from Linux
user ID 70. As it turns out one can easily combine these matches:

$ journalctl _UID=70 _UID=71

Specifying two matches for the same field will result in a logical
OR combination of the matches. All entries matching either will be
shown, i.e. all messages from either UID 70 or 71.

$ journalctl _HOSTNAME=epsilon _COMM=avahi-daemon

You guessed it, if you specify two matches for different field
names, they will be combined with a logical AND. All entries matching
both will be shown, i.e. all messages from processes named
avahi-daemon on host epsilon.
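
These field matches also combine with the option-style filters from
earlier; nothing stops you from adding -b to restrict a match to the
current boot. A small sketch:

$ journalctl -b _UID=70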

But of course, that’s
not fancy enough for us. We are computer nerds after all, we live off
logical expressions. We must go deeper!

$ journalctl _HOSTNAME=theta _UID=70 + _HOSTNAME=epsilon _COMM=avahi-daemon

The + is an explicit OR you can use in addition to the implied OR when
you match the same field twice. The line above hence means: show me
everything from host theta with UID 70, or from host
epsilon with a process name of avahi-daemon.

And now, it becomes magic!

That was already pretty cool, right? Right! But heck, who can
remember all those values a field can take in the journal, I mean,
seriously, who has thaaaat kind of photographic memory? Well, the
journal has:

$ journalctl -F _SYSTEMD_UNIT

This will show us all values the field _SYSTEMD_UNIT takes in the
database, or in other words: the names of all systemd services which
ever logged into the journal. This makes it super-easy to build nice
matches. But wait, it turns out this is all actually hooked up with
shell completion in bash! This gets even more awesome: as you type your
match expression you will get a list of well-known field names, and of
the values they can take! Let’s figure out how to filter for SELinux
labels again. We remember the field name was something with SELINUX in
it, let’s try that:

$ journalctl _SE<TAB>

And yupp, it’s immediately completed:

$ journalctl _SELINUX_CONTEXT=

Cool, but what’s the label again we wanted to match for?

$ journalctl _SELINUX_CONTEXT=<TAB><TAB>
kernel                                                       system_u:system_r:local_login_t:s0-s0:c0.c1023               system_u:system_r:udev_t:s0-s0:c0.c1023
system_u:system_r:accountsd_t:s0                             system_u:system_r:lvm_t:s0                                   system_u:system_r:virtd_t:s0-s0:c0.c1023
system_u:system_r:avahi_t:s0                                 system_u:system_r:modemmanager_t:s0-s0:c0.c1023              system_u:system_r:vpnc_t:s0
system_u:system_r:bluetooth_t:s0                             system_u:system_r:NetworkManager_t:s0                        system_u:system_r:xdm_t:s0-s0:c0.c1023
system_u:system_r:chkpwd_t:s0-s0:c0.c1023                    system_u:system_r:policykit_t:s0                             unconfined_u:system_r:rpm_t:s0-s0:c0.c1023
system_u:system_r:chronyd_t:s0                               system_u:system_r:rtkit_daemon_t:s0                          unconfined_u:system_r:unconfined_t:s0-s0:c0.c1023
system_u:system_r:crond_t:s0-s0:c0.c1023                     system_u:system_r:syslogd_t:s0                               unconfined_u:system_r:useradd_t:s0-s0:c0.c1023
system_u:system_r:devicekit_disk_t:s0                        system_u:system_r:system_cronjob_t:s0-s0:c0.c1023            unconfined_u:unconfined_r:unconfined_dbusd_t:s0-s0:c0.c1023
system_u:system_r:dhcpc_t:s0                                 system_u:system_r:system_dbusd_t:s0-s0:c0.c1023              unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
system_u:system_r:dnsmasq_t:s0-s0:c0.c1023                   system_u:system_r:systemd_logind_t:s0
system_u:system_r:init_t:s0                                  system_u:system_r:systemd_tmpfiles_t:s0

Ah! Right! We wanted to see everything logged under PolicyKit’s security label:

$ journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0

Wow! That was easy! I didn’t know anything related to SELinux could
be thaaat easy! 😉 Of course this kind of completion works with any
field, not just SELinux labels.

So much for now. There’s a lot more cool stuff in journalctl(1)
than this. For example, it generates JSON output for you! You can match
against kernel fields! You can get simple
/var/log/messages-like output but with relative timestamps!
And so much more!
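
To give you a taste of the JSON bit: something like the following
should dump the last few entries as one JSON object per line, ready
to be fed into whatever tooling you like (a sketch, assuming the json
output mode is available in your journalctl build):

$ journalctl -o json -n 3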

Anyway, in the next weeks I hope to post more stories about all the
cool things the journal can do for you. This is just the beginning,
stay tuned.

Footnotes

[1] systemd 195 is currently still in Bodhi
but hopefully will get into F18 proper soon, and definitely before the
release of Fedora 18.

[2] OK, I cheated here, indexing by block device is not in
the kernel yet, but on its way due to Hannes' fantastic work, and I
hope it will make an appearance in F18.

27C3 Fudfest

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/ccc-nervt.html

I really wonder why on earth the 27C3 accepted a nonsensical
paper like this
into their programme. So... stupid. You read half the
proposal and it's already kinda obvious that the presenter has no
idea what he is talking about. Fundamental errors, obvious
misinterpretations, outdated issues: this is just FUD.

And apparently this talk is even anonymous? Such a coward! FUDing
around anonymously is acceptable at the CCC?