Charge your Tesla automatically with Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/charge-your-tesla-automatically-with-raspberry-pi/

It’s the worst feeling in the world: waking up and realising you forgot to put your electric car on charge overnight. What do you do now? Dig a bike out of the shed? Wait four hours until there’s enough juice in the battery to get you where you need to be? Neither option works if you’re running late. If only there were a way to automate the process, so that when you park up, the charger finds its way to the charging port on its own. That would make life so much easier.

This is quite the build

Of course, this is all conjecture, because I drive a car made in the same year I started university. Not even the windows go up and down automatically. But I can dream, and I still love this automatic Tesla charger built with Raspberry Pi.

Wait, don’t Tesla make those already?

Back in 2015, Tesla released a video of their own prototype, which could automatically charge their cars. But things have gone quiet, and nothing seems to be coming to market any time soon – nothing directly from Tesla, anyway. And while we like the slightly odd snake-charmer vibes the Tesla prototype gives off, we really like Pat’s commitment to spending hours tinkering in order to automate a 20-second manual job. It’s how we do things around here.

This video makes me feel weird

Electric vehicle enthusiast Andrew Erickson has been keeping up with the prototype’s whereabouts, and discussed it on YouTube in 2020.

How did Pat build his home-made charger?

Tired of waiting on Tesla, Pat took matters into his own hands and developed a home-made solution with Raspberry Pi 4. Our tiny computer is the “brains of everything”, and is mounted to a carriage on Pat’s garage wall.

The entire rig mounted to Pat’s garage wall

There’s a big servo at the end of the carriage, which rotates the charging arm out when it’s needed. And an ultrasonic distance sensor ensures none of the home-made apparatus hits the car.

The big white thing on the left is the charging arm; Pat is pointing to the little green Raspberry Pi camera module up top; and the yellow box at the bottom is the distance sensor

How does the charger find the charging port?

A Raspberry Pi Camera Module takes photos and sends them to a machine learning model (Pat used TensorFlow Lite) running on his Raspberry Pi 4. This is how the charging arm finds its way to the port. You can watch the model in action from this point in the build video.

“Marco!” “Polo!” “Marco!” “Polo!”

Top stuff, Pat. Now I just need to acquire a Tesla from somewhere so I can build one for my own garage. Wait, I don’t have a garage either…

The post Charge your Tesla automatically with Raspberry Pi appeared first on Raspberry Pi.

Security updates for Saturday

Post Syndicated from original https://lwn.net/Articles/862487/rss

Security updates have been issued by Arch Linux (gitlab, nodejs, openexr, php, php7, rabbitmq, ruby-addressable, and spice), Fedora (suricata), Gentoo (binutils, docker, runc, and tor), Mageia (avahi, botan2, connman, gstreamer1.0-plugins, htmldoc, jhead, libcroco, libebml, libosinfo, openexr, php, php-smarty, pjproject, and python), openSUSE (apache2, bind, bouncycastle, ceph, containerd, docker, runc, cryptctl, curl, dovecot23, firefox, graphviz, gstreamer-plugins-bad, java-1_8_0-openj9, java-1_8_0-openjdk, libass, libjpeg-turbo, libopenmpt, libqt5-qtwebengine, libu2f-host, libwebp, libX11, lua53, lz4, nginx, ovmf, postgresql10, postgresql12, python-urllib3, qemu, roundcubemail, solo, thunderbird, ucode-intel, wireshark, and xterm), and SUSE (permissions).

Commentary: Let us not allow new dispatchers of the shadow establishment

Post Syndicated from Биволъ original https://bivol.bg/%D0%B4%D0%B0-%D0%BD%D0%B5-%D0%B4%D0%BE%D0%BF%D1%83%D1%81%D0%BD%D0%B5%D0%BC-%D0%BD%D0%BE%D0%B2%D0%B8-%D0%B4%D0%B8%D1%81%D0%BF%D0%B5%D1%87%D0%B5%D1%80%D0%B8-%D0%BD%D0%B0-%D0%B7%D0%B0%D0%B4%D0%BA%D1%83.html

Saturday, 10 July 2021


by Ivo Aleksiev. The elections on 11 July promise to be a turning point. By democratic means, the Sovereign seems determined to take back a state captured by backstage operators. The dream of generations of Bulgarians may…

July 2021 elections: the polling stations abroad

Post Syndicated from original https://yurukov.net/blog/2021/%D0%B8%D0%B7%D0%B1%D0%BE%D1%80%D0%B8-%D1%8E%D0%BB%D0%B8-2021-%D1%81%D0%B5%D0%BA%D1%86%D0%B8%D0%B8%D1%82%D0%B5-%D0%B2-%D1%87%D1%83%D0%B6%D0%B1%D0%B8%D0%BD%D0%B0/

Tomorrow, Bulgarians abroad will be able to vote at a record number of polling stations. Quite a few of them will offer machine voting. Here is the most important information you need to know about the elections:

  • Election day starts at 7:00 in the morning local time and ends at 8:00 in the evening
  • Every Bulgarian citizen may vote, regardless of whether or not they submitted an application
  • Regardless of the polling station marked in your application, you may vote anywhere abroad
  • Any Bulgarian citizen may vote abroad regardless of their usual place of residence. This means that if you are travelling abroad for work or on holiday, you can cast your vote at a polling station near you. Again, it does not matter whether you submitted an application.
  • If you are in Bulgaria during the elections, however, you may vote only at your permanent address. In that case you fill in Declaration 17, stating that you have not voted abroad, and the commission is obliged to mark you down. You appear on the voter list with “МВнР” (the Foreign Ministry) noted after your name
  • If you vote abroad, you fill in Declaration 22, stating that you have not voted and will not vote elsewhere. The polling station should have blank forms to fill in on the spot, but you have the right to fill one in at home and bring it with you, which will save you time. If the station’s commission refuses to accept a declaration filled in in advance, you have the right to object; they are obliged to contact the CEC (ЦИК) and resolve the matter immediately, without delay. If they refuse or continue to insist, you can file a complaint with the CEC yourself, by phone at the numbers that will be published on its website or by email
  • You vote with a valid identity card or passport, or one that expired after 13 March 2020. If it expired before that date, you may vote by showing a certificate from the consulate confirming that you have applied for a new document. This was confirmed by a decision of the CEC.
  • Abroad, you can vote only for a party/coalition. There is no option for preference votes, as there is inside the country, so your vote only helps that particular coalition enter parliament. Familiarise yourself with the sample ballot as well.
  • Some of the polling stations abroad will offer machine voting, which effectively rules out invalid votes. You can find an example of the electronic ballot here, and a demo of working with the machines here. Although the screen shows a “Preferences” heading, that field will remain empty abroad for the reasons in the previous point.

As in previous elections, you can click on a station’s address to open it in Google. Since the way the addresses are written isn’t always found in Google, I’ve also added the option to open them by geographic coordinates.

Also, when you vote, I urge you to give feedback on how long you waited. You can do this via the buttons for the polling station in question. You can read more here about how this works and how it will help others find a nearby station.

You will find the polling stations abroad on the map, as well as on the CEC website:

Interactive map: https://glasuvam.org/parl2107

The post July 2021 elections: the polling stations abroad first appeared on Блогът на Юруков.

Securing the Supply Chain: Lessons Learned from the Codecov Compromise

Post Syndicated from Justin Pagano original https://blog.rapid7.com/2021/07/09/securing-the-supply-chain-lessons-learned-from-the-codecov-compromise/

Supply chain attacks are all the rage these days. While they’re not a new part of the threat landscape, they are growing in popularity among more sophisticated threat actors, and they can create significant system-wide disruption, expense, and loss of confidence across multiple organizations, sectors, or regions. The compromise of Codecov’s Bash Uploader script is one of the latest such attacks. While much is still unknown about the full impact of this incident on organizations around the world, it has been another wake-up call that cybersecurity problems are getting more complex by the day.

This blog post is meant to provide the security community with defensive knowledge and techniques to protect against supply chain attacks involving continuous integration (CI) systems, such as Jenkins, Bamboo, etc., and version control systems, such as GitHub, GitLab, etc. It covers prevention techniques — for software suppliers and consumers — as well as detection and response techniques in the form of a playbook.

It has been co-developed by our Information Security, Security Research, and Managed Detection & Response teams. We believe one of the best ways for organizations to close their security achievement gap and outpace attackers is by openly sharing knowledge about ever-evolving security best practices.

Defending CI systems and source code repositories from similar supply chain attacks

Below are some of the security best practices defenders can use to prevent, detect, and respond to incidents like the Codecov compromise.

Figure 1: High-level overview of known Codecov supply chain compromise stages

Prevention techniques

Provide and perform integrity checks for executable code

If you’re a software consumer

Use collision-resistant checksum hashes, such as SHA-256 or SHA-512, provided by your vendor to validate all executable files or code they provide. Likewise, verify digital signatures for all executable files or code they provide.

If either of these integrity checks fails, notify your vendor ASAP, as this could be a sign of compromised code.
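As a minimal sketch of what that check can look like (the vendor URLs and file names below are hypothetical placeholders, not a real vendor’s endpoints):

# Hypothetical example: fetch a script and its vendor-published SHA-256 checksum,
# then verify before ever executing it. sha256sum is GNU coreutils; on macOS use shasum -a 256.
curl -fsSL -o uploader.sh "https://downloads.example-vendor.com/uploader.sh"
curl -fsSL -o uploader.sh.sha256 "https://checksums.example-vendor.com/uploader.sh.sha256"
sha256sum -c uploader.sh.sha256 || { echo "Checksum mismatch - do not run; notify vendor" >&2; exit 1; }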

If you’re a software supplier

Provide collision-resistant hashes, such as SHA-256 and SHA-512, and store checksums out-of-band from their corresponding files (i.e. make it so that an attacker has to successfully carry out two attacks to compromise your code: one against the system hosting your checksum data and another against your content delivery systems). Provide users with easy-to-use instructions, including sample code, for performing checksum validation.
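As a hedged sketch of the supplier side (file paths here are placeholders), checksum generation at release time can be as simple as:

# Generate checksums at release time. Publish these files from a system separate
# from the one serving the downloads, so a single compromise can't swap both.
sha256sum dist/uploader.sh > uploader.sh.sha256
sha512sum dist/uploader.sh > uploader.sh.sha512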

Additionally, digitally sign all executable code using tamper-resistant code signing frameworks such as in-toto and secure software update frameworks such as The Update Framework (TUF) (see DataDog’s blog post about using these tools for reference). Simply signing code with a private key is insufficient since attackers have demonstrated ways to compromise static signing keys stored on servers to forge authentic digital signatures.

Relevant for the following Codecov compromise attack stages:

  • Customers’ CI jobs dynamically load Bash Uploader

Version control third-party software components

Store and load local copies of third-party components in a version control system to track changes over time. Only update them after comparing code differences between versions, performing checksum validation, and authenticating digital signatures.
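A minimal sketch of this workflow for the Bash Uploader discussed in this post (the vendor/ path is an illustrative convention of ours, not anything Codecov prescribes):

# Fetch a candidate update of the third-party script into a tracked local copy.
curl -fsSL https://codecov.io/bash -o vendor/codecov-uploader.sh
# Review every change against the previously committed version before accepting it.
git diff -- vendor/codecov-uploader.sh
# Validate against the vendor-published checksum before committing the update.
sha256sum vendor/codecov-uploader.sh
git add vendor/codecov-uploader.sh && git commit -m "Update vendored Codecov uploader"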

Relevant for the following Codecov compromise attack stages:

  • Bash Uploader script modified and replaced in GCS

Implement egress filtering

Identify trusted internet-accessible systems and apply host-based or network-based firewall rules to only allow egress network traffic to those trusted systems. Use specific IP addresses and fully qualified domain names whenever possible, and fall back to using IP ranges, subdomains, or domains only when necessary.
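For example, a hedged host-based sketch using iptables (203.0.113.10 is a documentation-range placeholder standing in for a trusted destination such as your artifact server):

# Allow loopback and already-established traffic, allow HTTPS to one trusted host,
# then default-deny everything else leaving the host.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
iptables -P OUTPUT DROP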

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated

Implement IP address safelisting

While zero-trust networking (ZTN) proponents have cast doubt on the effectiveness of network perimeter security controls such as IP address safelisting, these controls are still one of the easiest and most effective ways to mitigate attacks targeting internet-routable systems. IP address safelisting is especially useful for protecting service account access to systems when ZTN controls like hardware-backed device authentication certificates aren’t feasible to implement.

Popular source code repository services, such as GitHub, provide this functionality, although it may require you to host your own server or, if using their cloud hosted option, have multiple organizations in place to host your private repositories separately from your public repositories.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
  • Bash Uploader script modified and replaced in GCS

Apply least privilege permissions for CI jobs using job-specific credentials

For any credentials a CI job uses, provide a credential for that specific job (i.e. do not reuse a single credential across multiple CI jobs). Only provision permissions to each credential that are needed for the CI job to execute successfully: no more, no less. This will shrink the blast radius of a credential compromise.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Use encrypted secrets management for safe credential storage

If you absolutely cannot avoid storing credentials in source code, use cryptographic tooling such as AWS KMS and the AWS Encryption SDK to encrypt credentials before storing them in source code. Otherwise, store them in a secrets management solution, such as Vault, AWS Secrets Manager, or GitHub Actions Encrypted Secrets (if you’re using GitHub Actions as your CI service, that is).
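As a brief sketch (the secret name ci/deploy-token is a placeholder), a CI job using the AWS CLI can pull a credential at runtime instead of reading it from source:

# Fetch the secret value at job runtime; nothing sensitive lives in the repository.
DEPLOY_TOKEN="$(aws secretsmanager get-secret-value \
  --secret-id ci/deploy-token \
  --query SecretString \
  --output text)"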

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Block plaintext secrets from code commits

Implement pre-commit hooks with tools like git-secrets to detect and block plaintext credentials before they’re committed to your repositories.
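Setting this up in a repository is a two-step process with git-secrets:

# Install the pre-commit, commit-msg, and prepare-commit-msg hooks in the current repo,
# then register the built-in AWS credential patterns to block.
git secrets --install
git secrets --register-aws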

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Use automated frequent service account credential rotation

Rotate credentials that are used programmatically (e.g. service account passwords, keys, tokens, etc.) to ensure that they’re made unusable at some point in the future if they’re exposed or obtained by an attacker.

If you’re able to automate credential rotation, rotate them as frequently as hourly. Also, create two “credential rotator” credentials that can both rotate all service account credentials and rotate each other. This ensures that the credential that is used to rotate other credentials is also short lived.
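As a hedged sketch of the scheduling piece only (rotate-creds.sh is a hypothetical script wrapping your secrets manager’s rotation API), an hourly cron entry might look like:

# Runs hourly; the hypothetical script rotates all service account credentials and
# then uses the second rotator credential to rotate the first rotator's own credential.
0 * * * * /usr/local/bin/rotate-creds.sh >> /var/log/rotate-creds.log 2>&1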

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Detection techniques

While we strongly advocate for adopting multiple layers of prevention controls to make it harder for attackers to compromise software supply chains, we also recognize that prevention controls are imperfect by themselves. Having multiple layers of detection controls is essential for catching suspicious or malicious activity that you can’t (or in some cases shouldn’t) have prevention controls for.

Identify Dependencies

You’ll need these in place to create detection rules and investigate suspicious activity:

  1. Process execution logs, including full command line data, or CI job output logs
  2. Network logs (firewall, network flow, etc.), including source and destination IP address
  3. Authentication logs (on-premise and cloud-based applications), including source IP and identity/account name
  4. Activity audit logs (on-premise and cloud-based applications), including source IP and identity/account name
  5. Indicators of compromise (IOCs), including IPs, commands, file hashes, etc.

Ingress from atypical IP addresses or regions

Whether or not you’re able to implement IP address safelisting for accessing certain systems/environments, use an IP address safelist to detect when atypical IP addresses are accessing critical systems that should only be accessed by trusted IPs.
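As an illustrative sketch only (a SIEM rule is the more robust production approach), a quick offline triage pass over exported authentication events could be:

# trusted_ips.txt holds one known-good source IP per line; anything not matching is
# written out for review. Exact-string matching only - this is a triage aid, not a detector.
grep -v -F -f trusted_ips.txt auth_events.log > atypical_ingress.log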

Relevant for the following Codecov compromise attack stages:

  • Bash Uploader script modified and replaced in GCS
  • Creds used to access source code repos

Egress to atypical IP addresses or regions

Whether or not you’re able to implement egress filtering for certain systems/environments, use an IP address safelist to detect when atypical IP addresses are being connected to.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated

Environment variables being passed to network connectivity processes

It’s unusual for a system’s local environment variables to be exported and passed into processes used to communicate over a network (curl, wget, nc, etc.), regardless of the IP address or domain being connected to.
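A rough illustrative pattern for spotting this in CI job output or process execution logs (the log path is a placeholder, and the pattern will need tuning for your environment):

# Flag network tools invoked together with env dumps or expanded environment variables.
grep -E '(curl|wget|nc)\b.*(\$\(env\)|\$\(printenv\)|\$[A-Z_]{3,})' /var/log/ci/job-output-*.log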

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated

Response techniques

The response techniques outlined below are, in some cases, described in the context of the IOCs that were published by Codecov. Dependencies identified in “Detection techniques” above are also dependencies for response steps outlined below.

Data exfiltration response steps: CI servers

Identify and contain affected systems and data

  1. Search CI systems’ process logs, job output, and job configuration files to identify usage of compromised third-party components (in regex form). This will identify potentially affected CI systems that have been using the third-party component that is in scope. This is useful for getting a full inventory of potentially affected systems and examining any local logs that might not be in your SIEM.

curl (-s )?https://codecov.io/bash

2. Search for known IOC IP addresses in regex form (based on RegExr community pattern)

(.*79\.135\.72\.34|178\.62\.86\.114|104\.248\.94\.23|185\.211\.156\.78|91\.194\.227\.*|5\.189\.73\.*|218\.92\.0\.247|122\.228\.19\.79|106\.107\.253\.89|185\.71\.67\.56|45\.146\.164\.164|118\.24\.150\.193|37\.203\.243\.207|185\.27\.192\.99\.*)
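One illustrative way to use this pattern (assuming logs have been exported to local files) is to store it in a file and feed it to grep:

# Scan exported logs for the IOC IP pattern above; file names are placeholders.
grep -E -f codecov_ioc_ips.regex exported-logs/*.log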

3. Search for known IOC command line pattern(s)

curl -sm 0.5 -d "$(git remote -v)

4. Create forensic image of affected system(s) identified in steps 1 – 3

5. Network quarantine and/or power off affected system(s)

6. Replace affected system(s) with last known good backup, image snapshot, or clean rebuild

7. Analyze forensic image and historical CI system process, job output, and/or network traffic data to identify potentially exposed sensitive data, such as credentials

Search for malicious usage of potentially exposed credentials

  1. Search authentication and activity audit logs for IP address IOCs
  2. Search authentication and activity audit logs for potentially compromised account events originating from IP addresses outside of your organization’s known IP addresses. This could potentially uncover new IP address IOCs.

Unauthorized access response steps: source code repositories

Clone full historical source code repository content

Note: This content is based on git-based version control systems.

Version control systems such as git coincidentally provide forensics-grade information by virtue of tracking all changes over time. To be able to fully search all data from a given repository, certain git commands must be run in sequence.

  1. Set git config to get full commit history for all references (branches, tags), including pull requests, and clone repositories that need to be analyzed (*nix shell script)

git config --global remote.origin.fetch '+refs/pull/*:refs/remotes/origin/pull/*'
# Space-delimited list of repos to clone
declare -a repos=("repo1" "repo2" "repo3")
git_url="https://myGitServer.biz/myGitOrg"
# Loop through each repo and clone it locally
for r in "${repos[@]}"; do
  echo "Cloning $git_url/$r"
  git clone "$git_url/$r"
done

2. In the same directory where the repositories were cloned in the step above, export the full git commit history in text format for each repository. List git committers at the top of each file in case they need to be contacted to gather context (*nix shell script)

git fetch --all
# Note: "*(/)" is a zsh glob matching directories only; in bash, use: for dir in */ ; do dir="${dir%/}"
for dir in *(/) ; do
  (rm -f $dir.commits.txt
  cd $dir
  git fetch --all
  echo "******COMMITTERS FOR THIS REPO********" >> ../$dir.commits.txt
  git shortlog -s -n >> ../$dir.commits.txt
  echo "**************************************" >> ../$dir.commits.txt
  git log --all --decorate --oneline --graph -p >> ../$dir.commits.txt
  cd ..)
done

a. Note: the steps below can be done with tools such as Atom, Visual Studio Code, or Sublime Text, along with extensions/plugins installed in them.

If performing manual reviews of these commit history text files, create copies of those files and use the regex below to find and replace git’s log graph formatting that prepends each line of text

^(\|\s*)*(\+|-|\\|/\||\*\|*)*\s*

b. Then, sort the text in ascending or descending order and de-duplicate/unique-ify it. This will make it easier to manually parse.
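As a hedged example of steps (a) and (b) from the command line instead of an editor (GNU sed assumed, since the pattern relies on \s):

# Strip the git log graph prefix matched by the regex above, then sort and
# de-duplicate one repository's commit history file for easier manual review.
sed -E 's#^(\|\s*)*(\+|-|\\|/\||\*\|*)*\s*##' repo1.commits.txt | sort -u > repo1.commits.dedup.txt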

Search for binary files and content in repositories

Exporting commit history to text files does not export data from any binary files (e.g. ZIP files, XLSX files, etc.). In order to thoroughly analyze source code repository content, binary files need to be identified and reviewed.

  1. Find binary files in folder containing all cloned git repositories based on file extension (*nix shell script)

# Note: -E (extended regex) is BSD/macOS find syntax; with GNU find, use:
#   find . -regextype posix-extended -regex '<same pattern>'
find -E . -regex '.*\.(jpg|png|pdf|doc|docx|xls|xlsx|zip|7z|swf|atom|mp4|mkv|exe|ppt|pptx|vsd|rar|tiff|tar|rmd|md)'

2. Find binary files in folder containing all cloned git repositories based on MIME type (*nix shell script)

find . -type f -print0 | xargs -0 file --mime-type | grep -e image/jpg -e image/png -e application/pdf -e application/msword -e application/vnd.openxmlformats-officedocument.wordprocessingml.document -e application/vnd.ms-excel -e application/vnd.openxmlformats-officedocument.spreadsheetml.sheet -e application/zip -e application/x-7z-compressed -e application/x-shockwave-flash -e video/mp4 -e application/vnd.ms-powerpoint -e application/vnd.openxmlformats-officedocument.presentationml.presentation -e application/vnd.visio -e application/vnd.rar -e image/tiff -e application/x-tar

3. Find encoded binary content in commit history text files and other text-based files

grep -ir "url(data:" . | cut -d\) -f1
grep -ir "base64" . | cut -d\" -f1

Search for plaintext credentials: passwords, API keys, tokens, certificate private keys, etc.

  1. Search commit history text files for known credential patterns using tools such as TruffleHog and GitLeaks
  2. Search the binary file contents identified in “Search for binary files and content in repositories” for credentials
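For example, a hedged gitleaks invocation over one of the cloned repositories (v8 syntax; verify against your installed version):

# Scan the repository, including its git history, and write findings to a JSON report.
gitleaks detect --source ./repo1 --report-path repo1-leaks.json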

Search logs for malicious access to discovered credentials

  1. Follow the steps from “Data exfiltration response steps: CI servers” (Search for malicious usage of potentially exposed credentials) using logs from systems associated with credentials discovered in “Search for plaintext credentials”

New findings about attacker behavior from Project Sonar

We are fortunate to have a tremendous amount of data at our fingertips thanks to Project Sonar, which conducts internet-wide surveys across more than 70 different services and protocols to gain insights into global exposure to common vulnerabilities. We analyzed data from Project Sonar to see if we could gain any additional context about the IP address IOCs associated with Codecov’s Bash Uploader script compromise. What we found was interesting, to say the least:

  • The threat actor set up the first exfiltration server (178.62.86[.]114) on or about February 1, 2021
  • Historical DNS records from remotly[.]ru and seasonver[.]ru have pointed, and continue to point, to this server
  • The threat actor configured a simple HTTP redirect on the exfiltration server to about.codecov.io to avoid detection
    { "http_code": 301, "http_body": "", "server": "nginx", "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google" }
  • The redirect was removed from the exfiltration server on or before February 22, 2021, presumably by the server owner having detected these changes
  • The threat actor set up new infrastructure (104.248.94[.]23) that more closely mirrored Codecov’s GCP setup as their new exfiltration server on or about March 7, 2021
    { "http_code": 301, "http_body": "", "server": "envoy", "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google" }
  • The new exfiltration server was last seen on April 1, 2021

We hope the content in this blog will help defenders prevent, detect, and respond to these types of supply chain attacks going forward.

Configure SAML single sign-on for Kibana with AD FS on Amazon Elasticsearch Service

Post Syndicated from Sajeev Attiyil Bhaskaran original https://aws.amazon.com/blogs/security/configure-saml-single-sign-on-for-kibana-with-ad-fs-on-amazon-elasticsearch-service/

It’s a common use case for customers to integrate identity providers (IdPs) with Amazon Elasticsearch Service (Amazon ES) to achieve single sign-on (SSO) with Kibana. This integration makes it possible for users to leverage their existing identity credentials and offers administrators a single source of truth for user and permissions management. In this blog post, we’ll discuss how you can configure Security Assertion Markup Language (SAML) authentication for Kibana by using Amazon ES and Microsoft Active Directory Federation Services (AD FS).

Amazon ES now natively supports SSO authentication that uses the SAML protocol. With SAML authentication for Kibana, users can integrate directly with their existing third-party IdPs, such as Okta, Ping Identity, OneLogin, Auth0, AD FS, AWS Single Sign-on, and Azure Active Directory. SAML authentication for Kibana is powered by Open Distro for Elasticsearch, an Apache 2.0-licensed distribution of Elasticsearch, and is available to all Amazon ES customers who have enabled fine-grained access controls.

When you set up SAML authentication with Kibana, you can configure authentication that uses either service provider (SP)-initiated SSO or IdP-initiated SSO. The SP-initiated SSO flow occurs when a user directly accesses any SAML-configured Kibana endpoint, at which time Amazon ES redirects the user to their IdP for authentication, followed by a redirect back to Amazon ES after successful authentication. An IdP-initiated SSO flow typically occurs when a user chooses a link that first initiates the sign-in flow at the IdP, skipping the redirect between Amazon ES and the IdP. This blog post will focus on the SAML SP-initiated SSO flow.

Prerequisites

To complete this walkthrough, you must have the following:

  • An Amazon ES domain with fine-grained access controls enabled (required for SAML authentication for Kibana)
  • A Microsoft AD FS server, deployed in this walkthrough in the same VPC as the Amazon ES domain
  • An Active Directory group for your Kibana administrators (this walkthrough uses a group named admins)

Solution overview

For the solution presented in this post, you use your existing AD FS as an IdP for the user’s authentication. The SAML federation uses a claim-based authentication model in which user attributes (in this case stored in Active Directory) are passed from the IdP (AD FS) to the SP (Kibana).

Let’s walk through how a user would use the SAML protocol to access Amazon ES Kibana (the SP) while using AD FS as the IdP. In Figure 1, the user authentication request comes from an on-premises network, which is connected to Amazon VPC through a VPN connection—in this case, this could also be over AWS Direct Connect. The Amazon ES domain and AD FS are created in the same VPC.

Figure 1: A high-level view of a SAML transaction between Amazon ES and AD FS

The initial sign-in flow is as follows:

  1. Open a browser on the on-premises computer and navigate to the Kibana endpoint for your Amazon ES domain in the VPC.
  2. Amazon ES generates a SAML authentication request for the user and redirects it back to the browser.
  3. The browser redirects the SAML authentication request to AD FS.
  4. AD FS parses the SAML request and prompts the user to enter credentials.
    1. User enters credentials and AD FS authenticates the user with Active Directory.
    2. After successful authentication, AD FS generates a SAML response and returns the encoded SAML response to the browser. The SAML response contains the destination (the Assertion Consumer Service (ACS) URL), the authentication response issuer (the AD FS entity ID URL), the digital signature, and the claim (which user is authenticated with AD FS, the user’s NameID, the group, the attribute used in SAML assertions, and so on).
  5. The browser sends the SAML response to the Kibana ACS URL, and then Kibana redirects to Amazon ES.
  6. Amazon ES validates the SAML response. If all the validations pass, you are redirected to the Kibana front page. Authorization is performed by Kibana based on the role mapped to the user. The role mapping is performed based on attributes of the SAML assertion being consumed by Kibana and Amazon ES.

Deploy the solution

Now let’s walk through the steps to set up SAML authentication for Kibana single sign-on by using Amazon ES and Microsoft AD FS.

Enable SAML for Amazon Elasticsearch Service

The first step in the configuration setup process is to enable SAML authentication in the Amazon ES domain.

To enable SAML for Amazon ES

  1. Sign in to the Amazon ES console and choose any existing Amazon ES domain that meets the criteria described in the Prerequisites section of this post.
  2. Under Actions, select Modify Authentication.
  3. Select the Enable SAML authentication check box.
    Figure 2: Enable SAML authentication

    When you enable SAML, it automatically creates and displays the different URLs that are required to configure SAML support in your IdP.

    Figure 3: URLs for configuring the IdP

  4. Look under Configure your Identity Provider (IdP), and note down the URL values for Service provider entity ID and SP-initiated SSO URL.

Set up and configure AD FS

During the SAML authentication process, the browser receives the SAML assertion token from AD FS and forwards it to the SP. In order to pass the claims to the Amazon ES domain, AD FS (the claims provider) and the Amazon ES domain (the relying party) have to establish a trust between them. Then you define the rules for what type of claims AD FS needs to send to the Amazon ES domain. The Amazon ES domain authorizes the user with internal security roles or backend roles, according to the claims in the token.

To configure Amazon ES as a relying party in AD FS

  1. Sign in to the AD FS server. In Server Manager, choose Tools, and then choose AD FS Management.
  2. In the AD FS management console, open the context (right-click) menu for Relying Party Trust, and then choose Add Relying Party Trust.

    Figure 4: Set up a relying party trust

  3. In the Add Relying Party Trust Wizard, select Claims aware, and then choose Start.

    Figure 5: Create a claims aware application

  4. On the Select Data Source page, choose Enter data about the relying party manually, and then choose Next.

    Figure 6: Enter data about the relying party manually

  5. On the Specify Display Name page, type in the display name of your choice for the relying party, and then choose Next. Choose Next again to move past the Configure Certificate screen. (Configuring a token encryption certificate is optional and at the time of writing, Amazon ES doesn’t support SAML token encryption.)

    Figure 7: Provide a display name for the relying party

  6. On the Configure URL page, do the following steps.
    1. Choose the Enable support for the SAML 2.0 WebSSO protocol check box.
    2. In the URL field, add the SP-initiated SSO URL that you noted when you enabled SAML authentication in Amazon ES earlier.
    3. Choose Next.

      Figure 8: Enable SAML support and provide the SP-initiated SSO URL

  7. On the Configure Identifiers page, do the following:
      1. For Relying party trust identifier, provide the service provider entity ID that you noted when you enabled SAML authentication in Amazon ES.
      2. Choose Add, and then choose Next.

    Figure 9: Provide the service provider entity ID

  8. On the Choose Access Control Policy page, choose the appropriate access for your domain. Depending on your requirements, choose one of these options:
    • Choose Permit Specific Group to restrict access to one or more groups in your Active Directory domain based on the Active Directory group.
    • Choose Permit Everyone to allow all Active Directory domain users to access Kibana.

    Note: This step only provides access for the users to authenticate into Kibana. You have not yet set up Open Distro security roles and permissions.

    Figure 10: Choose an access control policy

  9. On the Ready to Add Trust page, choose Next, and then choose Close.

Now you’ve finished adding Amazon ES as a relying party trust.

Next, you configure claim issuance rules for the relying party. During the authentication process, AD FS sends user attributes (claims) to the relying party. With claim rules, you define what claims AD FS can send to the Amazon ES domain. In the following procedure, you create two claim rules: one to send the incoming Windows account name as the Name ID, and the other to send Active Directory groups as roles.

To configure claim issuance rules

  1. On the Relying Party Trusts page, right-click the relying party trust (in this case, AWS_ES_Kibana) and choose Edit Claim Issuance Policy.

    Figure 11: Edit the claim issuance policy

  2. Configure the claim rule to send the Windows account name as the Name ID, using these steps.
    1. In the Edit Claim Issuance Policy dialog box, choose Add Rule. The Add Transform Claim Rule Wizard opens.
    2. For Rule Type, choose Transform an Incoming Claim, and then choose Next.
    3. On the Configure Rule page, enter the following information:
      • Claim rule name: NameId
      • Incoming claim type: Windows account name
      • Outgoing claim type: Name ID
      • Outgoing name ID format: Unspecified
      • Pass through all claim values: Select this option
    4. Choose Finish.

    Figure 12: Set the claim rule for Name ID

  3. Configure Active Directory groups to send as roles, using the following steps.
    1. In the Edit Claim Issuance Policy dialog box, choose Add Rule. The Add Transform Claim Rule Wizard opens.
    2. For Rule Type, choose Send LDAP Attributes as Claims, and then choose Next.
    3. On the Configure Rule page, enter or choose the following settings:
      • Claim rule name: Send-Groups-as-Roles
      • Attribute store: Active Directory
      • LDAP attribute: Token-Groups – Unqualified Names (to select the group name)
      • Outgoing claim type: Roles (the value for Roles should match the Roles Key that you will set in the Configure SAML in the Amazon ES domain step later in this process)
    4. Choose Finish.

      Figure 13: Set claim rule for Active Directory groups as Roles

The configuration of AD FS is now complete and you can download the SAML metadata file from AD FS. The SAML metadata is in XML format and is needed to configure SAML in the Amazon ES domain. The AD FS metadata file (the IdP metadata) can be accessed from the following link (replace <AD FS FQDN> with the domain name of your AD FS server). Copy the XML and note down the value of entityID from the XML, as shown in Figure 14. You will need this information in the next steps.

https://<AD FS FQDN>/FederationMetadata/2007-06/FederationMetadata.xml
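A small sketch for fetching the metadata and picking out the entityID from the command line (replace <AD FS FQDN> as above; add -k only if your lab AD FS uses a self-signed certificate):

# Download the IdP metadata and print the entityID attribute to note down.
curl -s "https://<AD FS FQDN>/FederationMetadata/2007-06/FederationMetadata.xml" \
  | grep -o 'entityID="[^"]*"'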

Figure 14: The value of entityID in the XML file

Configure SAML in the Amazon ES domain

Next, you configure SAML settings in the Amazon Elasticsearch Service console. You need to import the IdP metadata, configure the IdP entity ID, configure the backend role, and set up the Roles key.

To configure SAML settings in the Amazon ES domain

    1. Sign in to the Amazon Elasticsearch Service console. On the Actions menu, choose Modify authentication.
    2. Import the IdP metadata, using the following steps.
      1. Choose Import IdP metadata, and then choose Metadata from IdP.
      2. Paste the contents of the FederationMetadata XML file (the IdP metadata) that you copied earlier into the Add or edit metadata field. You can also choose the Import from XML file button if you have the metadata file on local disk.

        Figure 15: The imported identity provider metadata

    3. Copy and paste the value of entityID from the XML file to the IdP entity ID field, if that field isn’t autofilled.
    4. For SAML manager backend role (the console may refer to this as master backend role), enter the name of the group you created in AD FS as part of the prerequisites for this post. In this walkthrough, we set the name of the group as admins, and therefore the backend role is admins.

Optionally, you can also provide the user name instead of the backend role.

  5. Set up the Roles key, using the following steps.
    1. Under Optional SAML settings, for Roles key, enter Roles. This value must match the value for Outgoing claim type, which you set when you configured claims rules earlier.

      Figure 16: Set the Roles key

    2. Leave the Subject key field empty to use the NameID element of the SAML assertion for the user name. Keep the defaults for everything else, and then choose Submit.

It can take a few minutes for the SAML settings to update and for the domain to return to the active state.

Congratulations! You’ve completed all the SP and IdP configurations.

Sign in to Kibana

When the domain comes back to the active state, choose the Kibana URL in the Amazon ES console. You will be redirected to the AD FS sign-in page for authentication. Provide the user name and password for any of the users in the admins group. The example in Figure 17 uses the credentials for the user [email protected], who is a member of the admins group.

Figure 17: The AD FS sign-in screen with user credentials

AD FS authenticates the user and redirects the page to Kibana. If the user has at least one role mapped, they are taken to the Kibana home page, as shown in Figure 18. In this walkthrough, you mapped the AD FS group admins as a backend role to the manager user. Internally, the Open Distro security plugin maps the backend role admins to the security roles all_access and security_manager. Therefore, the Active Directory user in the admins group is authorized with the privileges of the manager user in the domain. For more granular access, you can create different AD FS groups and map the group names (backend roles) to internal security roles by using Role Mappings in Kibana.

Figure 18: The AD FS user user1@example.com is successfully logged in to Kibana

Note: At the time of writing, if you specify the <SingleLogoutService /> details in the AD FS metadata XML, then when you sign out from Kibana, Kibana will call AD FS directly and try to sign the user out. This doesn’t currently work, because AD FS expects the sign-out request to be signed with a certificate that Amazon ES doesn’t currently support. If you remove <SingleLogoutService /> from the metadata XML file, Amazon ES will use its own internal sign-out mechanism and sign the user out on the Amazon ES side. No calls will be made to AD FS for signing out.

Conclusion

In this post, we covered setting up SAML authentication for Kibana single sign-on by using Amazon ES and Microsoft AD FS. The integration of IdPs with your Amazon ES domain provides a powerful way to control fine-grained access to your Kibana endpoint and integrate with existing identity lifecycle processes for create/update/delete operations, which reduces the operational overhead required to manage users.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Sajeev Attiyil Bhaskaran

Sajeev works closely with AWS customers to provide them architectural and engineering assistance and guidance. He dives deep into big data technologies and streaming solutions and leads onsite and online sessions for customers to design best solutions for their use cases. Outside of work, he enjoys spending time with his wife and daughter.


Jagadeesh Pusapadi

Jagadeesh is a Senior Solutions Architect with AWS working with customers on their strategic initiatives. He helps customers build innovative solutions on the AWS Cloud by providing architectural guidance to achieve desired business outcomes.
