Security updates have been issued by Arch Linux (tomcat7), Debian (kernel and perl), Fedora (libwmf and mpg123), Mageia (bluez, ffmpeg, gstreamer0.10-plugins-good, gstreamer1.0-plugins-good, libwmf, tomcat, and tor), openSUSE (emacs, fossil, freexl, php5, and xen), Red Hat (augeas, rh-mysql56-mysql, samba, and samba4), Scientific Linux (augeas, samba, and samba4), Slackware (samba), SUSE (emacs and kernel), and Ubuntu (qemu).
Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/15/timeshiftgrafanabuzz-1w-issue-13/
It’s been a busy week here at Grafana Labs – Grafana 4.5 is now available! We’ve made a lot of enhancements and added new features in this release, so be sure to check out the release blog post to see the full changelog. The GrafanaCon EU CFP is officially open, so please don’t forget to submit your topic. We’re looking for technical and non-technical talks of all sizes.
From the Blogosphere
Percona Live Europe Featured Talks: Visualize Your Data with Grafana Featuring Daniel Lee: The folks from Percona sat down with Grafana Labs Software Developer Daniel Lee to discuss his upcoming talk at PerconaLive Europe 2017, Dublin, and how data can drive better decision making for your business. Get your tickets now, and use code: SeeMeSpeakPLE17 for 10% off!
Performance monitoring with ELK / Grafana: This article walks you through setting up the ELK stack to monitor webpage load time, but switches out Kibana for Grafana so you can visualize data from other sources right next to this performance data.
ESXi Lab Series: Aaron created a video mini-series about implementing both offensive and defensive security in an ESXi Lab environment. Parts four and five focus on monitoring with Grafana, but you’ll probably want to start with one.
Raspberry Pi Monitoring with Grafana: We’ve been excited to see more and more articles about Grafana from Raspberry Pi users. This article helps you install and configure Grafana, and also touches on what monitoring is and why it’s important.
This week we were busy putting the finishing touches on the new release, but we do have an update to the Gnocchi data source plugin to announce, and a new annotation plugin that works with any data source. Install or update plugins on an on-prem instance using the Grafana-cli, or with one click on Hosted Grafana.
Gnocchi Data Source – The latest release adds a reaggregation feature. Gnocchi can pre-compute aggregations of time series (for example, the mean over every 10 minutes for 1 year). The plugin now lets you (re)aggregate those time series, since stored time series have already been aggregated. A big shout out to sileht for adding new features to the Gnocchi plugin.
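Conceptually, reaggregation means applying a second aggregation on top of points that have already been aggregated once. A rough sketch of the idea in plain Python (this is purely illustrative and has nothing to do with the Gnocchi API; all names are made up):

```python
# Sketch of reaggregation: Gnocchi stores pre-aggregated points
# (e.g. the mean over each 10-minute window); a reaggregation then
# applies a second aggregation across those stored points.
from statistics import mean

# Pre-computed 10-minute means, as (timestamp, value) pairs.
ten_min_means = [(0, 1.0), (600, 3.0), (1200, 2.0), (1800, 6.0)]

def reaggregate(points, granularity, agg):
    """Group stored points into coarser windows and re-apply an aggregate."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts - ts % granularity, []).append(value)
    return {window: agg(values) for window, values in sorted(buckets.items())}

# Re-aggregate the stored 10-minute means into 30-minute maxima.
print(reaggregate(ten_min_means, 1800, max))  # {0: 3.0, 1800: 6.0}
```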
GrafanaCon EU Call for Papers is Open
Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes.
Tweet of the Week
We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove
— GBrian (@CBGBrian) September 14, 2017
Awesome – really looking forward to seeing updates as you get to 1.0!
We Need Your Help
We’re conducting an experiment and need your help. Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about the experiment.
What do you think?
We’re always interested in how we can improve our weekly roundups. Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!
Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/13/grafana-4.5-released/
Grafana v4.5 is now available for download. This release has some really significant improvements to Prometheus, Elasticsearch, MySQL and to the Table panel.
Prometheus Query Editor
The new query editor has full syntax highlighting, as well as autocomplete for metrics, functions, and range vectors. Function docs are also integrated right into the query editor!
Elasticsearch: Add ad-hoc filters from the table panel
Table cell links!
Create column styles that turn cells into links that use the value in the cell (or other row values) to generate a URL to another dashboard or system. This is useful for turning the table panel into a drilldown to a more detailed dashboard, or into a link to a ticketing system, for example.
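For instance, a column style’s link URL might interpolate the cell value into a ticket-system URL. The variable syntax below is illustrative only; check the Table panel docs for the exact variables available:

```
https://tickets.example.com/browse?id=${__cell}
```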
Query Inspector is a new feature that shows query requests and responses. This can be helpful if a graph is not shown or shows something very different than what you expected.
More information here.
- Table panel: Render cell values as links that can have an url template that uses variables from current table row. #3754
- Elasticsearch: Add ad hoc filters directly by clicking values in table panel #8052.
- MySQL: New rich query editor with syntax highlighting
- Prometheus: New rich query editor with syntax highlighting, metric & range auto complete and integrated function docs. #5117
- GitHub OAuth: Support for GitHub organizations with 100+ teams. #8846, thx @skwashd
- Graphite: Calls to Graphite api /metrics/find now include panel or dashboard time range (from & until) in most cases, #8055
- Graphite: Added new graphite 1.0 functions, available if you set version to 1.0.x in data source settings. New Functions: mapSeries, reduceSeries, isNonNull, groupByNodes, offsetToZero, grep, weightedAverage, removeEmptySeries, aggregateLine, averageOutsidePercentile, delay, exponentialMovingAverage, fallbackSeries, integralByInterval, interpolate, invert, linearRegression, movingMin, movingMax, movingSum, multiplySeriesWithWildcards, pow, powSeries, removeBetweenPercentile, squareRoot, timeSlice, closes #8261
- Elasticsearch: Ad-hoc filters now use query phrase match filters instead of term filters, works on non keyword/raw fields #9095.
- InfluxDB/Elasticsearch: The panel & data source option named “Group by time interval” is now named “Min time interval” and now always defines a lower limit for the auto group by time, without having to use the > prefix (that prefix still works). This should in theory have close to zero impact on existing dashboards. It does mean that if you used this setting to define a hard group by time interval of, say, “1d”, then zoomed out to a wide enough time range, the interval could now increase above “1d”, since the setting is always treated as a lower limit.
This option has been renamed (and moved to an Options sub-section above your queries).
Data source selection, options & help are now above your metric queries.
- InfluxDB: Change time range filter for absolute time ranges to be inclusive instead of exclusive #8319, thx @Oxydros
- InfluxDB: Added parentheses around tag filters in queries #9131
- Modals: Maintain scroll position after opening/leaving modal #8800
- Templating: You cannot select data source variables as data source for other template variables #7510
- Security: Security fix for api vulnerability (in multiple org setups).
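The “Min time interval” behavior described in the list above amounts to taking a lower bound: the automatically computed group-by interval is used unless it falls below the configured minimum. A minimal sketch of that logic (illustrative only, not Grafana’s actual implementation):

```python
# The auto group-by interval is roughly time range / max data points;
# "Min time interval" now acts purely as a lower bound on it.
def effective_interval(range_seconds, max_data_points, min_interval_seconds):
    auto = range_seconds / max_data_points
    return max(auto, min_interval_seconds)

one_day = 86400

# Zoomed in: a 6-hour range with a "1d" minimum still groups by 1 day.
print(effective_interval(6 * 3600, 1000, one_day))      # 86400

# Zoomed way out: over a year, the auto interval exceeds 1 day,
# so the group-by time grows beyond the configured "1d".
print(effective_interval(365 * one_day, 100, one_day))  # 315360.0
```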
Head to the v4.5 download page for download links & instructions.
A big thanks to all the Grafana users who contribute by submitting PRs, bug reports, helping out on our community site and providing feedback!
Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-september-2017/
As a school supply aficionado, the month of September has always held a special place in my heart. Nothing sets the tone for success like getting a killer deal on pens and a crisp college ruled notebook. Even if back to school shopping trips have secured a seat in your distant memory, this is still a perfect time of year to stock up on office supplies and set aside some time for flexing those learning muscles. A great way to get started: scan through our September Tech Talks and check out the ones that pique your interest. This month we are covering re:Invent, AI, and much more.
September 2017 – Schedule
Noted below are the upcoming scheduled live, online technical sessions being held during the month of September. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.
Webinars featured this month are:
Monday, September 11
9:00 – 9:40 AM PDT: What’s New with Amazon DynamoDB
10:30 – 11:10 AM PDT: Local Testing and Deployment Best Practices for Serverless Applications
12:00 – 12:40 PM PDT: Managing Secrets for Containers with Amazon ECS
Tuesday, September 12
9:00 – 9:40 AM PDT: Get Ready for re:Invent 2017 Content Overview
10:30 – 11:10 AM PDT: Deep Dive on User Sign-up and Sign-in with Amazon Cognito
12:00 – 12:40 PM PDT: Using CloudTrail to Enhance Compliance and Governance of S3
Wednesday, September 13
9:00 – 9:40 AM PDT: Best Practices for Processing Managed Hadoop Workloads
10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL
12:00 – 12:40 PM PDT: Configuration Management in the Cloud
Thursday, September 14
9:00 – 9:40 AM PDT: Tackle Your Dark Data Challenge with AWS Glue
10:30 – 11:10 AM PDT: Deep Dive on MySQL Databases on AWS
12:00 – 12:40 PM PDT: Using AWS Batch and AWS Step Functions to Design and Run High-Throughput Workflows
Tuesday, September 26
9:00 – 9:40 AM PDT: An Overview of AI on the AWS Platform
10:30 – 11:10 AM PDT: Introduction to Generative Adversarial Networks (GAN) with Apache MXNet
12:00 – 12:40 PM PDT: Revolutionizing Backup & Recovery Using Amazon S3
2:00 – 2:40 PM PDT: Securing Your Desktops with Amazon WorkSpaces
Wednesday, September 27
Security & Identity
9:00 – 9:40 AM PDT: Advanced DNS Traffic Management using Amazon Route 53
10:30 – 11:10 AM PDT: Deep Dive on Amazon EFS (with Encryption)
Hands on Lab
12:30 – 2:00 PM PDT: Hands on Lab: Windows Workloads
Thursday, September 28
Security & Identity
9:00 – 9:40 AM PDT: How to use AWS WAF to Mitigate OWASP Top 10 attacks
10:30 – 11:10 AM PDT: AWS Greengrass Technical Deep Dive with Demo
Hands on Lab
1:00 – 1:40 PM PDT: Design, Deploy, and Optimize SQL Server on AWS
The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.
Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/01/timeshiftgrafanabuzz-1w-issue-11/
September is here and summer is officially drawing to a close, but the Grafana team has stayed busy. We’re prepping for an upcoming Grafana 4.5 release, had some new and updated plugins, and would like to thank two contributors for fixing a non-obvious bug. Also – The CFP for GrafanaCon EU is open, and we’d like you to speak!
GrafanaCon EU CFP is Open
Have a big idea to share? Have a shorter talk or a demo you’d like to show off?
We’re looking for 40-minute detailed talks, 20-minute general talks and 10-minute lightning talks. We have a perfect slot for any type of content.
From the Blogosphere
Zabbix, Grafana and Python, a Match Made in Heaven: David’s article, published earlier this year, hits on some great points about open source software and how you don’t have to spend much (or any) money to get valuable monitoring for your infrastructure.
The Business of Democratizing Metrics: Our friends over at Packet stopped by the office recently to sit down and chat with the Grafana Labs co-founders. They discussed how Grafana started, how monitoring has evolved, and democratizing metrics.
Visualizing CloudWatch with Grafana: Yuzo put together an article outlining his first experience adding a CloudWatch data source in Grafana, importing his first dashboard, then comparing the graphs between Grafana and CloudWatch.
Monitoring Linux performance with Grafana: Jim wanted to monitor his CentOS home router to get network traffic and disk usage stats, but wanted to try something different than his previous cacti monitoring. This walkthrough shows how he set things up to collect, store and visualize the data.
Visualizing Jenkins Pipeline Results in Grafana: Piotr provides a walkthrough of his setup and configuration to view Jenkins build results for his continuous delivery environment in Grafana.
This week we’ve added a plugin for the new time series database Sidewinder, and updates to the Carpet Plot graph panel. If you haven’t installed a plugin, it’s easy. For on-premises installations, the Grafana-cli will do the work for you. If you’re using Hosted Grafana, you can install any plugin with one click.
This week’s MVC (Most Valuable Contributor)
This week we want to thank two contributors who worked together to fix a non-obvious bug in the new MySQL data source (a bug with sorting values in the legend).
Thank you both for taking the time to both troubleshoot and fix the issue. Much appreciated!
Tweet of the Week
We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove
Nice! Combining different panel types on a dashboard can add more context to your data – Looks like a very functional dashboard.
— Alex Hafner (@alexhafner) August 30, 2017
What do you think?
Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!
Security updates have been issued by Debian (connman, faad2, gnupg, imagemagick, libdbd-mysql-perl, mercurial, and php5), openSUSE (postgresql93 and samba and resource-agents), Oracle (poppler), Scientific Linux (poppler), SUSE (firefox and php7), and Ubuntu (pyjwt).
Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
Today, I want to quickly show off a feature of Amazon Aurora that I find incredibly useful: Fast Database Cloning. By taking advantage of Aurora’s underlying distributed storage engine you’re able to quickly and cheaply create a copy-on-write clone of your database.
In my career I’ve frequently spent time waiting on some representative sample of data to use in development, experiments, or analytics. If I had a 2TB database it could take hours just waiting for a copy of the data to be ready before I could perform my tasks. Even within RDS MySQL, I would still have to wait several hours for a snapshot copy to complete before I was able to test a schema migration or perform some analytics. Aurora solves this problem in a very interesting way.
The distributed storage engine for Aurora allows us to do things which are normally not feasible or cost-effective with a traditional database engine. By creating pointers to individual pages of data the storage engine enables fast database cloning. Then, when you make changes to the data in the source or the clone, a copy-on-write protocol creates a new copy of that page and updates the pointers. This means my 2TB snapshot restore job that used to take an hour is now ready in about 5 minutes – and most of that time is spent provisioning a new RDS instance.
The time it takes to create the clone is independent of the size of the database since we’re pointing at the same storage. It also makes cloning a very cost-effective operation since I only pay storage costs for the changed pages instead of an entire copy. The database clone is still a regular Aurora Database Cluster with all the same durability guarantees.
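The copy-on-write behavior can be illustrated with a toy model in plain Python (purely conceptual; Aurora’s storage engine is far more involved): the clone initially shares every page with the source, and a page is only duplicated when one side writes to it.

```python
# Toy model of copy-on-write cloning: pages are shared until written.
class Volume:
    def __init__(self, pages):
        self.pages = pages          # page id -> page object (a list here)

    def clone(self):
        # A clone copies only the *pointers*, not the data, so it is
        # fast and cheap regardless of database size.
        return Volume(dict(self.pages))

    def write(self, page_id, data):
        # Copy the page on first write so the other volume is unaffected.
        self.pages[page_id] = list(self.pages[page_id])
        self.pages[page_id].append(data)

source = Volume({1: ["row-a"], 2: ["row-b"]})
clone = source.clone()

# Before any write, both volumes point at the same pages.
assert source.pages[1] is clone.pages[1]

clone.write(1, "row-c")                      # triggers a copy of page 1 only
assert source.pages[1] == ["row-a"]          # source unchanged
assert clone.pages[1] == ["row-a", "row-c"]  # clone sees its own write
assert source.pages[2] is clone.pages[2]     # untouched page still shared
```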
Let’s clone a database. First, I’ll select an Aurora (MySQL) instance and select “create-clone” from the Instance Actions.
Next, I’ll name our clone dolly-the-sheep and provision it.
It took about 5 minutes and 30 seconds for my clone to become available and I started making some large schema changes and saw no performance impact. The schema changes themselves completed faster than they would have on traditional MySQL due to improvements the Aurora team made to enable faster DDL operations. I could subsequently create a clone-of-a-clone or even a clone-of-a-clone-of-a-clone (and so on) if I wanted to have another team member perform some tests on my schema changes while I continued to make changes of my own. It’s important to note here that clones are first class databases from the perspective of RDS. I still have all of the features that every other Aurora database supports: snapshots, backups, monitoring and more.
I hope this feature will allow you and your teams to save a lot of time and money on experimenting and developing applications based on Amazon Aurora. You can read more about this feature in the Amazon Aurora User Guide and I strongly suggest following the AWS Database Blog. Anurag Gupta’s posts on quorums and Amazon Aurora storage are particularly interesting.
Have follow-up questions or feedback? Ping us at [email protected], or leave a comment here. We’d love to get your thoughts and suggestions.
A couple of months ago on the blog, I announced the AWS Chatbot Challenge in conjunction with Slack. The AWS Chatbot Challenge was an opportunity to build a unique chatbot that helped to solve a problem or that would add value for its prospective users. The mission was to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend.
I know that you all have been as anxious as I was to hear who the winners of the AWS Chatbot Challenge are. Well, wait no longer: the winners have been decided.
May I have the Envelope Please? (The Trumpets sound)
The winners of the AWS Chatbot Challenge are:
- First Place: BuildFax Counts by Joe Emison
- Second Place: Hubsy by Andrew Riess, Andrew Puch, and John Wetzel
- Third Place: PFMBot by Benny Leong and his team from MoneyLion.
- Large Organization Winner: ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang
Diving into the Winning Chatbot Projects
Let’s walk through the details of each winning project to see what made these chatbots distinctive, and to learn more about the technologies used to implement each solution.
BuildFax Counts by Joe Emison
The BuildFax Counts bot was created as a real solution for the BuildFax company to decrease the amount of time it takes the sales and marketing teams to get answers about permits, or about properties whose permits meet certain criteria.
BuildFax, a company co-founded by bot developer Joe Emison, has the only national database of building permits, which updates data from approximately half of the United States on a monthly basis. To accommodate the many requests that come in from the sales and marketing team regarding permit information, BuildFax has a technical sales support team that fulfills requests sent to a ticketing system by manually writing SQL queries that run across the shards of the BuildFax databases. Given the large number of requests the internal sales support team receives, and the manual nature of setting up the queries, it can take several days for the sales and marketing teams to receive an answer.
The BuildFax Counts chatbot solves this problem by taking the permit inquiry that the sales and marketing team would normally submit as a ticket, and accepting it instead as input to the chatbot via Slack. Once the inquiry is submitted in Slack, a query executes and the results are returned immediately.
Today, the BuildFax sales and marketing teams use the BuildFax Counts bot to get immediate answers to inquiries that previously took up to a week to return results.
Not only is the BuildFax Counts bot our 1st place winner and a wonderful solution, but its creator, Joe Emison, is a great guy. Joe has opted to donate his prize (the $5,000 cash, the $2,500 in AWS Credits, and one re:Invent ticket) to the Black Girls Code organization. I must say, you rock, Joe, for helping these kids get access and exposure to technology.
Hubsy by Andrew Riess, Andrew Puch, and John Wetzel
Hubsy bot was created to redefine and personalize the way users traditionally manage their HubSpot account. HubSpot is a SaaS system providing marketing, sales, and CRM software. Hubsy allows users of HubSpot to create and log engagements with customers, provide sales teams with deal status, and quickly retrieve client contact information. Hubsy uses Amazon Lex’s conversational interface to execute commands against the HubSpot API so that users can gain insights, store and retrieve data, and manage tasks directly from Facebook, Slack, or Alexa.
To implement the Hubsy chatbot, Andrew and his team members used AWS Lambda to create a Lambda function in Node.js that parses the user’s request and calls the HubSpot API, which either fulfills the initial request or returns to the user asking for more information. Terraform was used to automatically set up and update Lambda, CloudWatch Logs, and IAM profiles. Amazon Lex was used to build the conversational piece of the bot, covering the utterances that a person on a sales team would likely say when seeking information from HubSpot. To integrate with Alexa, the Amazon Alexa skill builder was used to create an Alexa skill, which was tested on an Echo Dot. CloudWatch Logs are used to capture the Lambda function’s output in order to debug the different Lex intents. Finally, ESLint was used to lint the code and enforce development standards before each Terraform deployment.
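The general pattern of a Lex fulfillment Lambda is to read the intent and slots from the event, call a backend API when the slots are complete, and otherwise ask the user for the missing piece. A rough sketch of that pattern follows; this is not Hubsy’s actual code (which was written in Node.js) — the intent name, slot name, and backend helper are all hypothetical, shown here in Python for brevity:

```python
# Hypothetical sketch of a Lex fulfillment Lambda (Lex "V1" event shape):
# parse the intent/slots, call a backend, and either return a fulfilled
# answer or elicit the missing slot from the user.

def get_deal_status(deal_name):
    # Placeholder for the real backend call (e.g. the HubSpot API).
    return f"Deal '{deal_name}' is in stage: negotiation"

def close(message):
    return {"dialogAction": {"type": "Close",
                             "fulfillmentState": "Fulfilled",
                             "message": {"contentType": "PlainText",
                                         "content": message}}}

def elicit_slot(intent, slot, message):
    return {"dialogAction": {"type": "ElicitSlot",
                             "intentName": intent,
                             "slotToElicit": slot,
                             "message": {"contentType": "PlainText",
                                         "content": message}}}

def handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]
    if intent == "GetDealStatus":
        if not slots.get("DealName"):
            return elicit_slot(intent, "DealName", "Which deal?")
        return close(get_deal_status(slots["DealName"]))
    return close("Sorry, I don't know that request yet.")
```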
PFMBot by Benny Leong and his team from MoneyLion
PFMBot, the Personal Finance Management Bot, is a bot built for the MoneyLion finance group, which offers customers online financial products (loans, credit monitoring, and a free credit score service) to improve their customers’ financial health. Once a user signs up for an account on the MoneyLion app or website, they have the option to link their bank accounts via the MoneyLion APIs. Once a bank account is linked, the user can log in to their MoneyLion account and start a conversation with PFMBot based on their bank account information.
ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang
ADP PI (Payroll Innovation) bot is designed to help employees of ADP customers easily review their own payroll details and compare different payroll data by just asking the bot for results. The ADP PI Bot additionally offers issue reporting functionality for employees to report payroll issues and aids HR managers in quickly receiving and organizing any reported payroll issues.
The ADP Payroll Innovation bot is an ecosystem for the ADP payroll consisting of two chatbots, which includes ADP PI Bot for external clients (employees and HR managers), and ADP PI DevOps Bot for internal ADP DevOps team.
The architecture of the ADP PI DevOps bot differs from that of the ADP PI bot shown above, as it is deployed internally to ADP. The ADP PI DevOps bot accepts input from both Slack and Alexa. When input comes in via Slack, Slack sends the request to Lex to process the utterance. Lex then calls the Lambda backend, which obtains ADP data from the ADP VPC running on AWS. When input comes in from Alexa, a Lambda function is called that obtains data from the same ADP VPC.
The architecture of the ADP PI bot starts with users entering requests and/or reporting issues via Slack. The Slack APIs communicate through Amazon API Gateway to AWS Lambda. The Lambda function either writes data into one of the Amazon DynamoDB tables that record issues, or sends the request on to Lex. For escalated issues, DynamoDB integrates with Trello to keep HR managers abreast of them. Once request data is sent from Lambda to Lex, Lex processes the utterance and calls another Lambda function that integrates with the ADP API and pulls ADP data from within the ADP VPC, which runs on Amazon Virtual Private Cloud (VPC).
The ADP PI bot ecosystem has the following functional groupings:
- Summarize Payrolls
- Compare Payrolls
- Escalate Issues
- Evolve PI Bot
HR Manager Functionality
- Bot Management
- Audit and Feedback
- Reduce call volume in service centers (ADP PI Bot).
- Track issues and generate reports (ADP PI Bot).
- Monitor jobs for various environments (ADP PI DevOps Bot)
- View job dashboards (ADP PI DevOps Bot)
- Query job details (ADP PI DevOps Bot)
Let’s all wish the winners of the AWS Chatbot Challenge hearty congratulations on their excellent projects.
You can review more details on the winning projects, as well as all of the submissions to the AWS Chatbot Challenge, at: https://awschatbot2017.devpost.com/submissions. If you are curious about the details of the Chatbot Challenge contest, including resources, rules, prizes, and judges, you can review the original challenge website here: https://awschatbot2017.devpost.com/.
Hopefully, you are just as inspired as I am to build your own chatbot using Lex and Lambda. For more information, take a look at the Amazon Lex developer guide or the AWS AI blog post on Building Better Bots Using Amazon Lex (Part 1).
Chat with you soon!
Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/
Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).
Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).
Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:
Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.
Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.
Open & Compatible – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.
More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.
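The M-of-N quorum rule described above is simple to state: an action proceeds only when at least M of the N registered identities have approved it. A minimal sketch of the check (illustrative only; this has nothing to do with CloudHSM’s actual authentication protocol):

```python
# M-of-N quorum check: a sensitive action is authorized only when at
# least M distinct, registered identities have approved it.
def quorum_met(registered, approvals, m):
    # Ignore unknown identities and duplicate approvals.
    distinct = set(approvals) & set(registered)
    return len(distinct) >= m

officers = {"alice", "bob", "carol", "dave", "erin"}        # N = 5 identities

assert not quorum_met(officers, ["alice", "alice"], 2)      # duplicates don't count
assert not quorum_met(officers, ["alice", "mallory"], 2)    # unknown identity ignored
assert quorum_met(officers, ["alice", "bob", "carol"], 3)   # 3-of-5 satisfied
```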
AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.
You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).
All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.
At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.
Setting Up a Cluster
Let’s set up a cluster using the CloudHSM Console:
I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):
Then I review my settings and click on Create:
After a few minutes, my cluster exists, but is uninitialized:
Initialization simply means retrieving a certificate signing request (the Cluster CSR):
And then creating a private key and using it to sign the request. The exact commands are in the Initialize Cluster docs (I have omitted the output; note that the cluster is identified by its ID).
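The gist of that step looks roughly like the following openssl session. This is illustrative only: the file names are made up, the authoritative commands are in the Initialize Cluster docs, and in the real flow the CSR is downloaded from your cluster rather than generated locally.

```shell
# Create an issuing key and a self-signed CA certificate.
openssl genrsa -out customerCA.key 2048
openssl req -new -x509 -days 3650 -key customerCA.key \
    -out customerCA.crt -subj "/CN=example-ca"

# Stand-in for the real ClusterCsr.csr you would download from the cluster.
openssl genrsa -out hsm.key 2048
openssl req -new -key hsm.key -out ClusterCsr.csr -subj "/CN=example-cluster"

# Sign the cluster CSR with the CA key to produce the cluster certificate.
openssl x509 -req -days 3650 -in ClusterCsr.csr \
    -CA customerCA.crt -CAkey customerCA.key -CAcreateserial \
    -out CustomerHsmCertificate.crt
```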
The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).
Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.
The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.
Security updates have been issued by Debian (firefox-esr), Fedora (cacti, community-mysql, and pspp), Mageia (varnish), openSUSE (mariadb, nasm, pspp, and rubygem-rubyzip), Oracle (evince, freeradius, golang, java-1.7.0-openjdk, log4j, NetworkManager and libnl3, pki-core, qemu-kvm, and X.org), Red Hat (flash-plugin), and Slackware (curl and mozilla).
Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/08/04/timeshiftgrafanabuzz-1w-issue-7/
Hard to believe it’s already August! This week there were a ton of articles to highlight. It’s really exciting to see how many data aficionados there are out there, coming up with new ways to connect Grafana to their data, wherever it may live. In this issue we cover cryptocurrency visualization and home automation setups, and break down installation in a number of environments. Enjoy!
Grafana 4.4.2 Released
From the Blogosphere
Monitoring CouchDB with Prometheus, Grafana and Docker: Geoff walks us through all of the steps to get Prometheus, Alertmanager and Grafana installed in Docker to monitor and alert on a CouchDB cluster. These six steps will have you up and running in no time.
Try InfluxDB and Grafana by Docker: Continuing with our Docker theme, Xiao breaks down all of the pieces, explores the configuration options, and explains the Docker commands to set up a simple monitoring stack using collectd, InfluxDB, and Grafana.
Installation of Collectd, Graphite and Grafana – Part 2: Last week we covered the first article in a series focused on setting up a complete Graphite stack. This week we tackle installing Graphite, its components, and Grafana on the server.
Grafana and Home Automation: More and more pieces of our homes are becoming “smart”, so why not monitor them? This article walks you through collecting data from home automation software Jeedom, sending metrics to InfluxDB, and visualizing and alerting in Grafana – so you can know how your smart-toaster is performing.
Making an Awesome Dashboard for your Crypto Currencies in 3 Steps: Christian lays out three steps that will help you keep an eye on your Bitcoin and Ethereum investments. His PHP script fetches things like current price, current balances, earnings, and sends the data to InfluxDB via UDP. He’s also created a dashboard that’s ready to import so you can get back to mining.
FHEM #6 – Grafana and InfluxDB: We’re seeing more and more articles about using Grafana to monitor home automation. This is the sixth article in a series on getting data from FHEM into Grafana using InfluxDB. It also touches on connecting Grafana to MariaDB, taking advantage of Grafana’s alpha native MySQL support.
Installation Overview of Node Exporter, Prometheus and Grafana: Looking to get started with Prometheus? Frits walks us through installing Node Exporter, Prometheus, and Grafana and importing our first dashboard.
Collect Metrics from Liberty Apps and Display in Grafana: This in-depth article covers adding custom metrics to your Liberty application and how to monitor these metrics using collectd, Graphite and Grafana.
Gatling, Graphite, Grafana: Your Application Under High Surveillance!: David explores Gatling, a load-testing tool that can write its data to Graphite, and then passes the data over to Grafana for visualization and alerting.
Plugins and Dashboards
Last week’s timeShift was packed full of plugin updates, as well as a couple of new ones. This week was a little quieter on the plugin front, but we still have a new data source plugin to announce. It’s easy to install this new plugin via the grafana-cli for an on-prem Grafana instance, or a 1-click install on Hosted Grafana.
PRTG Data Source – This data source visualizes data from the Paessler PRTG monitoring system. The easy-to-use query editor included with this plugin gives access to an array of PRTG metadata properties, including Status, Message, Active, Tags, Priority, and more. It also includes annotation support to show sensor status messages on graphs.
This week’s MVC (Most Valuable Contributor)
This week we highlight a contributor who is going to make everyone waiting for Elasticsearch alerting in Grafana jump for joy!
Tweet of the Week
We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove
Having fun with pflogsumm and mailq, I'm addicted! I turn boring numbers into beautiful dashboards. How to monitor Zimbra with Grafana, soon pic.twitter.com/WFNtg7vHNk
— Jorge de la Cruz (@jorgedlcruz) August 2, 2017
We love when people talk about Grafana at meetups and conferences.
Wednesday, August 16, 2017 – 7:30pm | Apprenda HQ
433 River Street, 4th Floor, Troy, NY
A Kubernetes-focused event! A demo from Apprenda, plus a look at how Kubernetes is used at GitHub:
Steve Wade is a Principal Kubernetes Consultant from London and will provide some fundamental information about the Kubernetes ecosystem as well as an overview of its core components. He’ll also talk about some monitoring and alerting best practices learned from working with Kubernetes customers, and demo how Prometheus, Grafana, and Slack can be used to monitor, visualize, and alert on both the Kubernetes platform and application workloads.
Aaron Brown, a Site Reliability Engineer at GitHub, will dive into the ways in which Kubernetes is used within GitHub to make software development and deployment more efficient.
What do you think?
Please tell us how we’re doing. We want to make sure this continues to be a valuable resource for the Grafana community. Submit a comment on this article below, or post something at our community forum. Help us make this better!
Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/nginx-reverse-proxy-sidecar-container-on-amazon-ecs/
Reverse proxies are a powerful software architecture primitive for fetching resources from a server on behalf of a client. They serve a number of purposes, from protecting servers from unwanted traffic to offloading some of the heavy lifting of HTTP traffic processing.
This post explains the benefits of a reverse proxy and shows how to use NGINX and Amazon EC2 Container Service (Amazon ECS) to easily implement and deploy a reverse proxy for your containerized application.
NGINX is a high performance HTTP server that has achieved significant adoption because of its asynchronous event driven architecture. It can serve thousands of concurrent requests with a low memory footprint. This efficiency also makes it ideal as a reverse proxy.
Amazon ECS is a highly scalable, high performance container management service that supports Docker containers. It allows you to run applications easily on a managed cluster of Amazon EC2 instances. Amazon ECS helps you get your application components running on instances according to a specified configuration. It also helps scale out these components across an entire fleet of instances.
Sidecar containers are a common software pattern that has been embraced by engineering organizations. It’s a way to keep server side architecture easier to understand by building with smaller, modular containers that each serve a simple purpose. Just like an application can be powered by multiple microservices, each microservice can also be powered by multiple containers that work together. A sidecar container is simply a way to move part of the core responsibility of a service out into a containerized module that is deployed alongside a core application container.
The following diagram shows how an NGINX reverse proxy sidecar container operates alongside an application server container:
In this architecture, Amazon ECS has deployed two copies of an application stack, each made up of an NGINX reverse proxy sidecar container and an application container. Web traffic from the public goes to an Application Load Balancer, which then distributes the traffic to one of the NGINX reverse proxy sidecars. The NGINX reverse proxy then forwards the request to the application server and returns its response to the client via the load balancer.
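A minimal sketch of what the sidecar’s NGINX configuration might look like; the loopback address and application port (3000) are assumptions for illustration, not values from the post:

```nginx
server {
    listen 80;

    location / {
        # Forward to the application container; the sidecar and the app
        # share a host/network, so the app is reachable on loopback
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```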
Reverse proxy for security
Security is one reason for using a reverse proxy in front of an application container. Any web server that serves resources to the public can expect to receive lots of unwanted traffic every day. Some of this traffic is relatively benign scans by researchers and tools, such as Shodan or nmap:
Security updates have been issued by Debian (apache2, enigmail, graphicsmagick, ipsec-tools, libquicktime, lucene-solr, mysql-5.5, nasm, and supervisor), Fedora (mingw-librsvg2, php-PHPMailer, and webkitgtk4), Mageia (freeradius, gdk-pixbuf2.0, graphicsmagick, java-1.8.0-openjdk, kernel, libmtp, libgphoto, libraw, nginx, openvpn, postgresql9.4, valgrind, webkit2, and wireshark), openSUSE (apache2, chromium, libical, mysql-community-server, and nginx), Oracle (kernel), Red Hat (chromium-browser and eap7-jboss-ec2-eap), Slackware (squashfs), and Ubuntu (linux-hwe and nss).
Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-database-systems-administrator/
Are you a Database Systems Administrator looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our home-grown cloud storage farm, carefully controlling costs by utilizing open source software and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision-making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.
- Own the administration of Cassandra and MySQL
- Lead projects across a range of IT disciplines
- Understand environment thoroughly enough to administer/debug the system
- Participate in the 24×7 on-call rotation and respond to alerts as needed
- Expert knowledge of Cassandra & MySQL
- Expert knowledge of Linux administration (Debian preferred)
- Scripting skills
- Experience in automation/configuration management
- Position is based in the San Mateo, California corporate office
Required for all Backblaze Employees
- Good attitude and willingness to do whatever it takes to get the job done.
- Desire to learn and adapt to rapidly changing technologies and work environment.
- Relentless attention to detail.
- Excellent communication and problem solving skills.
- Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.
We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.
We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.
Some Backblaze Perks:
- Competitive healthcare plans
- Competitive compensation and 401k
- All employees receive Option grants
- Unlimited vacation days
- Strong coffee
- Fully stocked Micro kitchen
- Catered breakfast and lunches
- Awesome people who work on awesome projects
- Childcare bonus
- Normal work hours
- Get to bring your pets into the office
- San Mateo Office – located near Caltrain and Highways 101 & 280.
If this sounds like you — follow these steps:
- Send an email to [email protected] with the position in the subject line.
- Include your resume.
- Tell us a bit about your experience and why you’re excited to work with Backblaze.
AWS CloudFormation helps AWS customers implement an Infrastructure as Code model. Instead of setting up their environments and applications by hand, they build a template and use it to create all of the necessary resources, collectively known as a CloudFormation stack. This model removes opportunities for manual error, increases efficiency, and ensures consistent configurations over time.
Today I would like to tell you about a new feature that makes CloudFormation even more useful. This feature is designed to help you to address the challenges that you face when you use Infrastructure as Code in situations that include multiple AWS accounts and/or AWS Regions. As a quick review:
Accounts – As I have told you in the past, many organizations use a multitude of AWS accounts, often using AWS Organizations to arrange the accounts into a hierarchy and to group them into Organizational Units, or OUs (read AWS Organizations – Policy-Based Management for Multiple AWS Accounts to learn more). Our customers use multiple accounts for business units, applications, and developers. They often create separate accounts for development, testing, staging, and production on a per-application basis.
Regions – Customers also make great use of the large (and ever-growing) set of AWS Regions. They build global applications that span two or more regions, implement sophisticated multi-region disaster recovery models, replicate S3, Aurora, PostgreSQL, and MySQL data in real time, and choose locations for storage and processing of sensitive data in accord with national and regional regulations.
This expansion into multiple accounts and regions comes with some new challenges with respect to governance and consistency. Our customers tell us that they want to make sure that each new account is set up in accord with their internal standards. Among other things, they want to set up IAM users and roles, VPCs and VPC subnets, security groups, Config Rules, logging, and AWS Lambda functions in a consistent and reliable way.
In order to address these important customer needs, we are launching CloudFormation StackSets today. You can now define an AWS resource configuration in a CloudFormation template and then roll it out across multiple AWS accounts and/or Regions with a couple of clicks. You can use this to set up a baseline level of AWS functionality that addresses the cross-account and cross-region scenarios that I listed above. Once you have set this up, you can easily expand coverage to additional accounts and regions.
This feature always works on a cross-account basis. The master account owns one or more StackSets and controls deployment to one or more target accounts. The master account must include an assumable IAM role and the target accounts must delegate trust to this role. To learn how to do this, read Prerequisites in the StackSet Documentation.
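As a sketch of that trust relationship, the role in each target account carries a trust policy along these lines; the account ID 111111111111 is a placeholder for the master account, and the exact role names and permissions are spelled out in the StackSet documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```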
Each StackSet references a CloudFormation template and contains lists of accounts and regions. All operations apply to the cross-product of the accounts and regions in the StackSet. If the StackSet references three accounts (A1, A2, and A3) and four regions (R1, R2, R3, and R4), there are twelve targets:
- Region R1: Accounts A1, A2, and A3.
- Region R2: Accounts A1, A2, and A3.
- Region R3: Accounts A1, A2, and A3.
- Region R4: Accounts A1, A2, and A3.
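The target list above is simply the Cartesian product of the region and account lists. A quick illustrative sketch in Python, using the placeholder identifiers from the example rather than real accounts or regions:

```python
from itertools import product

def stackset_targets(accounts, regions):
    """Every (region, account) pair that a StackSet operation will touch."""
    return [(region, account) for region, account in product(regions, accounts)]

# Placeholder identifiers from the example above, not real accounts/regions
targets = stackset_targets(["A1", "A2", "A3"], ["R1", "R2", "R3", "R4"])
print(len(targets))  # 12 targets: 3 accounts x 4 regions
```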
Deploying a template initiates creation of a CloudFormation stack in each account/region pair. Templates are deployed to one region at a time, in an order you control, and to multiple accounts within each region (you control the amount of parallelism). You can also set an error threshold that will terminate deployments if stack creation fails.
You can use your existing CloudFormation templates (taking care to make sure that they are ready to work across accounts and regions), create new ones, or use one of our sample templates. We are launching with support for the AWS partition (all public regions except those in China), and expect to expand it to the others before too long.
Using the Console, I start by clicking on Create StackSet. I can use my own template or one of the samples. I’ll use the last sample (Add config rule encrypted volumes):
I click on View template to learn more about the template and the rule:
I give my StackSet a name. The template that I selected accepts an optional parameter, and I can enter it at this time:
Next, I choose the accounts and regions. I can enter account numbers directly, reference an AWS organizational unit, or upload a list of account numbers:
I can set up the regions and control the deployment order:
I can also set the deployment options. Once I am done I click on Next to proceed:
I can add tags to my StackSet. They will be applied to the AWS resources created during the deployment:
The deployment begins, and I can track the status from the Console:
I can open up the Stacks section to see each stack. Initially, the status of each stack is OUTDATED, indicating that the template has yet to be deployed to the stack; this will change to CURRENT after a successful deployment. If a stack cannot be deleted, the status will change to INOPERABLE.
After my initial deployment, I can click on Manage StackSet to add additional accounts, regions, or both, to create additional stacks:
This new feature is available now and you can start using it today at no extra charge (you pay only for the AWS resources created on your behalf).
PS – If you create some useful templates and would like to share them with other AWS users, please send a pull request to our AWS Labs GitHub repo.
Security updates have been issued by Debian (catdoc, gsoap, and libtasn1-3), Fedora (GraphicsMagick, java-1.8.0-openjdk, krb5, librsvg2, nodejs, phpldapadmin, rubygem-rack-cors, and yara), Mageia (irssi), openSUSE (rubygem-puppet), Red Hat (kernel), Slackware (tcpdump), and Ubuntu (imagemagick, linux, linux-raspi2, linux-snapdragon, linux-lts-xenial, mysql-5.5, samba, and xorg-server, xorg-server-hwe-16.04, xorg-server-lts-xenial).
Security updates have been issued by Debian (php5 and ruby-mixlib-archive), Fedora (knot, knot-resolver, and spice), Oracle (graphite2 and java-1.8.0-openjdk), Red Hat (graphite2, java-1.6.0-sun, java-1.7.0-oracle, java-1.8.0-openjdk, and java-1.8.0-oracle), Scientific Linux (java-1.8.0-openjdk), and Ubuntu (kernel, linux, linux-raspi2, linux-hwe, and mysql-5.5, mysql-5.7).
Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/21/timeshiftgrafanabuzz-1w-issue-5/
We cover a lot of ground in this week’s timeShift. From diving into building your own plugin, finding the right dashboard, configuration options in the alerting feature, to monitoring your local weather, there’s something for everyone. Are you writing an article about Grafana, or have you come across an article you found interesting? Please get in touch, we’ll add it to our roundup.
From the Blogosphere
Going open-source in monitoring, part III: 10 most useful Grafana dashboards to monitor Kubernetes and services: We have hundreds of pre-made dashboards ready for you to install into your on-prem or hosted Grafana, but not every one will fit your specific monitoring needs. In part three of the series, Sergey discusses his experiences with finding useful dashboards and shows off ten of the best dashboards you can install for monitoring Kubernetes clusters and the services deployed on them.
Using AWS Lambda and API Gateway for serverless Grafana adapters: Sometimes you’ll want to visualize metrics from a data source that may not yet be supported natively in Grafana. With the plugin functionality introduced in Grafana 3.0, anyone can create their own data sources. Using the SimpleJson data source, Jonas describes how he used AWS Lambda and Amazon API Gateway to write data source adapters for Grafana.
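An adapter like this ultimately just returns JSON in the shape the SimpleJson data source expects from its /query endpoint. A minimal sketch of building that payload in Python; the series name and timestamps are made up for illustration:

```python
def simplejson_query_response(series):
    """Build the JSON-serializable body a SimpleJson /query endpoint returns.

    `series` maps a target name to a list of (value, unix_ms) samples.
    """
    return [
        {"target": name, "datapoints": [[value, ts] for value, ts in samples]}
        for name, samples in series.items()
    ]

# Made-up sample data for illustration
body = simplejson_query_response(
    {"cpu_load": [(0.42, 1500000000000), (0.57, 1500000060000)]}
)
print(body)
```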
How to Use Grafana to Monitor JMeter Non-GUI Results – Part 2: A few issues ago we listed an article for using Grafana to monitor JMeter Non-GUI results, which required a number of non-trivial steps to complete. This article shows off an easier way to accomplish this that doesn’t require any additional configuration of InfluxDB.
Programming your Personal Weather Chart: It’s always great to see Grafana used outside of the typical DevOps use case. This article runs you through the steps to create your own weather chart and show off your local weather stats in Grafana. BONUS: Rob shows off a magic mirror he created, which can display this data.
vSphere Performance data – Part 6 – The Dashboard(s): This 6-part series goes into a ton of detail and walks you through the various methods of retrieving vSphere performance data, storing the data in a TSDB, and creating dashboards for the metrics. Part 6 deals specifically with Grafana, but I highly recommend reading all of the articles, as it chronicles the journey of metrics exploration, storage, and visualization from someone who had no prior experience with time series data.
Alerting in Grafana: Alerting in Grafana is a fairly new feature and one that we’re continuing to iterate on. We’re soon adding additional data source support, new notification channels, clustering, silencing rules, and more. This article steps you through all the configuration options to get you to your first alert.
Plugins and Dashboards
It can seem like work slows during July and August, but we’re still seeing a lot of activity in the community. This week we have a new graph panel to show off that gives you some unique looking dashboards, and an update to the Zabbix data source, which adds some really great features. You can install both of the plugins now on your on-prem Grafana via our cli, or with one-click on GrafanaCloud.
Bubble Chart Panel This super-cool looking panel groups your tag values into clusters of circles. The size of the circle represents the aggregated value of the time series data. There are also multiple color schemes to make those bubbles POP (pun intended)! Currently it works against OpenTSDB and Bosun, so give it a try!
Zabbix Alex has been hard at work, making improvements on the Zabbix App for Grafana. This update adds annotations, template variables, alerting and more. Thanks Alex! If you’d like to try out the app, head over to http://play.grafana-zabbix.org/dashboard/db/zabbix-db-mysql?orgId=2
This week’s MVC (Most Valuable Contributor)
Open source software can’t thrive without the contributions from the community. Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback.
Tweet of the Week
This week’s tweet comes from @geek_dave
Great looking dashboard Dave! And thank you for adding new features and keeping it updated. It’s creators like you who make the dashboard repository so awesome!
— Dave Cadwallader (@geek_dave) July 18, 2017
We love when people talk about Grafana at meetups and conferences.
Monday, July 24, 2017 – 7:30pm | Google Campus Warsaw
Ząbkowska 27/31, Warsaw, Poland
Iot & HOME AUTOMATION #3 openHAB, InfluxDB, Grafana:
If you are interested in topics of the internet of things and home automation, this might be a good occasion to meet people similar to you. If you are into it, we will also show you how we can all work together on our common projects.
Tell us how we’re doing.
We’d love your feedback on what kind of content you like, length, format, etc – so please keep the comments coming! You can submit a comment on this article below, or post something at our community forum. Help us make this better.
Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/heart-maker-faire/
We at the Raspberry Pi Foundation find it incredibly rewarding to help people make and share things they love. It’s amazing to be part of an incredibly creative community of makers. And we’re not the only ones who feel this way: for this year’s Maker Faire UK, the team over at NUSTEM created the Heart of Maker Faire, a Pi-powered art installation that is a symbol of this unique community. And to be perfectly frank, it’s bloody gorgeous.
NUSTEM’s new installation for Maker Faire UK 2017, held on 1st & 2nd April at the Centre for Life, Newcastle-upon-Tyne. Visitors wrote notes about things they love, and sealed them in jars. They then read their heart rates, and used the control boxes to associate their jar and heart rate with a space on the shelves.
A heart for the community
NUSTEM is a STEM outreach organisation from Northumbria University, and the makers there are always keen to build interactive projects that get people excited about technology. So at this year’s Faire, attendees passing their installation were invited to write down something close to their heart, put that note in a jar, and measure their heart rate. Then they could connect their heart rate, via a QR code, to a space on a shelf lined with LEDs. Once they placed the jar in their space, the LEDs started blinking to imitate their heart beat. With this art piece, the NUSTEM team wants to say something about “how we’re all individuals, but about our similarities too”.
Still beating. Heart of #MakerFaireUK
Making the heart beat
This is no small build – it uses more than 2,000 NeoPixel LEDs, as well as five Raspberry Pis, among other components. Two Pi 3s are in charge of registering people’s contributions and keeping track of their jars. A Pi Zero W acts as a central hub, connecting its bigger siblings via WiFi, and storing a MySQL database of the jars’ data. Finally, two more Pi 3s control the LEDs of the Heart via a script written in Processing. The NUSTEM team has made the code available here for you “to laugh at” (their words, not mine!).
A heart for art
Processing is an open-source programming language used to create images, graphs, and animations. It can respond to keyboard and mouse input, so you can write games with it as well. Moreover, it runs on the Pi, and you can use it to talk to the Pi’s GPIO pins, as the Heart of Maker Faire team did. Hook up buttons, sensors, and LEDs, and get ready to create amazing interactive pieces of art! If you’d like to learn more, read Matt’s blog post, or watch the talk he gave about Processing at our fifth birthday party earlier this year.
Matt Richardson: Art with Processing on the Raspberry Pi Sunday 5th March 2017 Raspberry Pi Birthday Event 2017 Filmed and edited by David and Andrew Ferguson. This video is not an official video published by the Raspberry Pi Foundation. No copyright infringement intended.
To help you get started, we’re providing a free learning resource introducing you to the basics of Processing. We’d love to see what you create, so do share a link to your masterworks in the comments!
World Maker Faire
We’ll be attending World Maker Faire in New York on the 23rd and 24th of September. Will you be there?
Security updates have been issued by Debian (bind9, heimdal, samba, and xorg-server), Fedora (cacti, evince, expat, globus-ftp-client, globus-gass-cache-program, globus-gass-copy, globus-gram-client, globus-gram-job-manager, globus-gram-job-manager-condor, globus-gridftp-server, globus-gssapi-gsi, globus-io, globus-net-manager, globus-xio, globus-xio-gsi-driver, globus-xio-pipe-driver, globus-xio-udt-driver, jabberd, myproxy, perl-DBD-MySQL, and php), openSUSE (libcares2), SUSE (xorg-x11-server), and Ubuntu (evince and nginx).