Tag Archives: monitoring

7 tools for analyzing performance in Linux with bcc/BPF (opensource.com)

Post Syndicated from corbet original https://lwn.net/Articles/739861/rss

Brendan Gregg introduces a
set of BPF-based tracing tools
on opensource.com.
Traditional analysis of filesystem performance focuses on block I/O
statistics—what you commonly see printed by the iostat(1) tool and plotted
by many performance-monitoring GUIs. Those statistics show how the disks
are performing, but not really the filesystem. Often you care more about
the filesystem’s performance than the disks, since it’s the filesystem that
applications make requests to and wait for. And the performance of
filesystems can be quite different from that of disks! Filesystems may
serve reads entirely from memory cache, populating that cache via
read-ahead and write-back caching. xfsslower shows filesystem
performance—what the applications directly experience.
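
To get a feel for how such tools are built, here is a minimal sketch using the bcc Python front end. It is not one of the tools from the article, just an illustration of the mechanism: it attaches a kprobe to the kernel's XFS read path and prints a line per call. The probe symbol xfs_file_read_iter is an assumption that varies by kernel version, and running this requires root plus an installed bcc.

```python
# Minimal bcc sketch: trace entry into the XFS read path at the filesystem
# layer (above the block layer). Requires root and the bcc package.
from bcc import BPF

prog = """
int trace_read(void *ctx) {
    bpf_trace_printk("xfs read\\n");
    return 0;
}
"""

b = BPF(text=prog)
# xfs_file_read_iter is kernel-version dependent; adjust for your kernel.
b.attach_kprobe(event="xfs_file_read_iter", fn_name="trace_read")
print("Tracing XFS reads... Ctrl-C to end.")
b.trace_print()
```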

Ultimate 3D printer control with OctoPrint

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/octoprint/

Control and monitor your 3D printer remotely with a Raspberry Pi and OctoPrint.

Timelapse of OctoPrint Ornament

Printed on a bq Witbox. The STL file can be found here: http://www.thingiverse.com/thing:191635 OctoPrint is located here: http://www.octoprint.org

3D printing

Whether you have a 3D printer at home or use one at your school or local makerspace, it’s fair to assume you’ve had a failed print or two in your time. Filament knotting or running out, your print peeling away from the print bed — these are common issues for all 3D printing enthusiasts, and they can be costly if they’re discovered too late.

OctoPrint

OctoPrint is free, open-source software, created and maintained by Gina Häußge, that performs a multitude of useful 3D printing–related tasks, including remote control of your printer, live video, and data collection.

The OctoPrint logo

Control and monitoring

To control the print process, use OctoPrint on a Raspberry Pi connected to your 3D printer. First, ensure a safe uninterrupted run by using the software to restrict who can access the printer. Then, before starting your print, use the web app to work on your STL file. The app also allows you to reposition the print head at any time, as well as pause or stop printing if needed.

Live video streaming

Since OctoPrint can stream video of your print as it happens, you can watch out for any faults that may require you to abort and restart. Proud of your print? Record the entire process from start to finish and upload the time-lapse video to your favourite social media platform.

OctoPrint graphical user interface screenshot

Data capture

OctoPrint records real-time data, such as the temperature of the hotend and print bed, giving you another way to monitor your print and ensure a smooth, uninterrupted process. Moreover, the records will help with troubleshooting if there is a problem. You can also pull this data yourself through OctoPrint’s REST API, as sketched below.

OctoPrint graphical user interface screenshot
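
For programmatic access to the same data, here is a minimal Python sketch against OctoPrint’s documented REST API. The host name and API key are placeholders (the key comes from OctoPrint’s settings page).

```python
# Poll OctoPrint's REST API for current temperatures. The /api/printer
# endpoint and the X-Api-Key header are part of OctoPrint's documented API;
# the host and key below are placeholders.
import requests

OCTOPRINT_HOST = "http://octopi.local"  # placeholder host
API_KEY = "YOUR_API_KEY"                # from OctoPrint's settings page

resp = requests.get(f"{OCTOPRINT_HOST}/api/printer",
                    headers={"X-Api-Key": API_KEY}, timeout=5)
resp.raise_for_status()
temps = resp.json()["temperature"]
for part in ("tool0", "bed"):
    print(f"{part}: {temps[part]['actual']}°C (target {temps[part]['target']}°C)")
```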

Print the Millennium Falcon

OK, you can print anything you like. However, this design definitely caught our eye this week.

3D-Printed Fillenium Malcon (Timelapse)

This is a timelapse of my biggest print project so far on my own designed/built printer. It’s 500x170x700(mm) and weighs 3 kilograms of filament.

You can support the work of Gina and OctoPrint by visiting her Patreon account and following OctoPrint on Twitter, Facebook, or G+. And if you’ve set up a Raspberry Pi to run OctoPrint, or you’ve created some cool Pi-inspired 3D prints, make sure to share them with us on our own social media channels.

The post Ultimate 3D printer control with OctoPrint appeared first on Raspberry Pi.

Amazon QuickSight Update – Geospatial Visualization, Private VPC Access, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-quicksight-update-geospatial-visualization-private-vpc-access-and-more/

We don’t often recognize or celebrate anniversaries at AWS. With nearly 100 services on our list, we’d be eating cake and drinking champagne several times a week. While that might sound like fun, we’d rather spend our working hours listening to customers and innovating. With that said, Amazon QuickSight has now been generally available for a little over a year and I would like to give you a quick update!

QuickSight in Action
Today, tens of thousands of customers (from startups to enterprises, in industries as varied as transportation, legal, mining, and healthcare) are using QuickSight to analyze and report on their business data.

Here are a couple of examples:

Gemini provides legal evidence procurement for California attorneys who represent injured workers. They have gone from creating custom reports and running one-off queries to creating and sharing dynamic QuickSight dashboards with drill-downs and filtering. QuickSight is used to track the sales pipeline, measure order throughput, and locate bottlenecks in the order processing pipeline.

Jivochat provides a real-time messaging platform to connect visitors to website owners. QuickSight lets them create and share interactive dashboards while also providing access to the underlying datasets. This has allowed them to move beyond sharing static spreadsheets, ensuring that everyone is looking at the same data and is empowered to make timely decisions based on current data.

Transfix is a tech-powered freight marketplace that matches loads and increases visibility into logistics for Fortune 500 shippers in retail, food and beverage, manufacturing, and other industries. QuickSight has made analytics accessible to both BI engineers and non-technical business users. They scrutinize key business and operational metrics including shipping routes, carrier efficiency, and process automation.

Looking Back / Looking Ahead
The feedback on QuickSight has been incredibly helpful. Customers tell us that their employees are using QuickSight to connect to their data, perform analytics, and make high-velocity, data-driven decisions, all without setting up or running their own BI infrastructure. We love all of the feedback that we get, and use it to drive our roadmap, leading to the introduction of over 40 new features in just a year.

Looking forward, we are watching an interesting trend develop within our customer base. As these customers take a close look at how they analyze and report on data, they are realizing that a serverless approach offers some tangible benefits. They use Amazon Simple Storage Service (S3) as a data lake and query it using a combination of QuickSight and Amazon Athena, giving them agility and flexibility without static infrastructure. They also make great use of QuickSight’s dashboards feature, monitoring business results and operational metrics, then sharing their insights with hundreds of users. You can read Building a Serverless Analytics Solution for Cleaner Cities and review Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight if you are interested in this approach.

New Features and Enhancements
We’re still doing our best to listen and to learn, and to make sure that QuickSight continues to meet your needs. I’m happy to announce that we are making seven big additions today:

Geospatial Visualization – You can now create geospatial visuals on geographical data sets.

Private VPC Access – You can now sign up to access a preview of a new feature that allows you to securely connect to data within VPCs or on-premises, without the need for public endpoints.

Flat Table Support – In addition to pivot tables, you can now use flat tables for tabular reporting. To learn more, read about Using Tabular Reports.

Calculated SPICE Fields – You can now perform run-time calculations on SPICE data as part of your analysis. Read Adding a Calculated Field to an Analysis for more information.

Wide Table Support – You can now use tables with up to 1000 columns.

Other Buckets – You can summarize the long tail of high-cardinality data into buckets, as described in Working with Visual Types in Amazon QuickSight.

HIPAA Compliance – You can now run HIPAA-compliant workloads on QuickSight.

Geospatial Visualization
Everyone seems to want this feature! You can now take data that contains a geographic identifier (country, city, state, or zip code) and create beautiful visualizations with just a few clicks. QuickSight will geocode the identifier that you supply, and can also accept lat/long map coordinates. You can use this feature to visualize sales by state, map stores to shipping destinations, and so forth.

To learn more about this feature, read Using Geospatial Charts (Maps), and Adding Geospatial Data.

Private VPC Access Preview
If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources.

If you are ready to join the preview, you can sign up today.

Jeff;

 

AWS Achieves FedRAMP JAB Moderate Provisional Authorization for 20 Services in the AWS US East/West Region

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-moderate-authorization-for-20-services-in-us-eastwest/

The AWS US East/West Region has received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) at the Federal Risk and Authorization Management Program (FedRAMP) Moderate baseline.

Though AWS has maintained an AWS US East/West Region Agency-ATO since early 2013, this announcement represents AWS’s carefully deliberated move to the JAB for the centralized maintenance of our P-ATO for the 10 services already authorized, and adds 10 new services to our FedRAMP program (see the complete list of services below). This doubles the number of FedRAMP Moderate services available to our customers, enabling increased use of the cloud and support for modernized IT missions. Our public sector customers can now leverage this FedRAMP P-ATO as a baseline for their own authorizations and look to the JAB for centralized Continuous Monitoring reporting and updates. In a significant enhancement, partners that build their solutions on the AWS US East/West Region can now achieve FedRAMP JAB P-ATOs of their own for their Platform as a Service (PaaS) and Software as a Service (SaaS) offerings.

In line with FedRAMP security requirements, a FedRAMP-accredited Third Party Assessment Organization (3PAO) completed an independent assessment of our technical, management, and operational security controls to validate that they meet or exceed FedRAMP’s Moderate baseline requirements. Effective immediately, you can begin leveraging this P-ATO for the following 20 services in the AWS US East/West Region:

  • Amazon Aurora (MySQL)*
  • Amazon CloudWatch Logs*
  • Amazon DynamoDB
  • Amazon Elastic Block Store
  • Amazon Elastic Compute Cloud
  • Amazon EMR*
  • Amazon Glacier*
  • Amazon Kinesis Streams*
  • Amazon RDS (MySQL, Oracle, Postgres*)
  • Amazon Redshift
  • Amazon Simple Notification Service*
  • Amazon Simple Queue Service*
  • Amazon Simple Storage Service
  • Amazon Simple Workflow Service*
  • Amazon Virtual Private Cloud
  • AWS CloudFormation*
  • AWS CloudTrail*
  • AWS Identity and Access Management
  • AWS Key Management Service
  • Elastic Load Balancing

* Services with first-time FedRAMP Moderate authorizations

We continue to work with the FedRAMP Project Management Office (PMO), other regulatory and compliance bodies, and our customers and partners to ensure that we are raising the bar on our customers’ security and compliance needs.

To learn more about how AWS helps customers meet their security and compliance requirements, see the AWS Compliance website. To learn about what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. To review the public posting of our FedRAMP authorizations, see the FedRAMP Marketplace.

– Chris Gile, Senior Manager, AWS Public Sector Risk and Compliance

timeShift(GrafanaBuzz, 1w) Issue 22

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/17/timeshiftgrafanabuzz-1w-issue-22/

Welcome to TimeShift

We hope you liked our recent article with videos and slides from the events we’ve participated in recently. With Thanksgiving right around the corner, we’re getting a breather from work-related travel, but only a short one. We have some events in the coming weeks, and of course are busy filling in the details for GrafanaCon EU.

This week we have a lot of articles, videos and presentations to share, as well as some important plugin updates. Enjoy!


Latest Release

Grafana 4.6.2 is now available and includes some bug fixes:

  • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
  • Color picker: Bug after using textbox input field to change/paste color string #9769
  • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
  • Heatmap: Fixed tooltip for “time series buckets” mode #9332
  • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

Download Grafana 4.6.2 Now


From the Blogosphere

Cloud Tech 10 – 13th November 2017 – Grafana, Linux FUSE Adapter, Azure Stack and more!: Mark Whitby is a Cloud Solution Architect at Microsoft UK. Each week he produces a video reviewing new developments with Microsoft Azure. This week Mark covers the new Azure Monitoring Plugin we recently announced. He also shows you how to get up and running with Grafana quickly using the Azure Marketplace.

Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes: Oracle published an article on monitoring WebLogic server on Kubernetes. To do this, you’ll use the WebLogic Monitoring Exporter to scrape the server metrics and feed them to Prometheus, then visualize the data in Grafana. Marina goes into a lot of detail and provides sample files and configs to help you get going.

Getting Started with Prometheus: Will Robinson has started a new series on monitoring with Prometheus from someone who has never touched it before. Part 1 introduces a number of monitoring tools and concepts, and helps define a number of monitoring terms. Part 2 teaches you how to spin up Prometheus in a Docker container, and takes a look at writing queries. Looking forward to the third post, when he dives into the visualization aspect.

Monitoring with Prometheus: Alexander Schwartz has made the slides from his most recent presentation at the Continuous Lifecycle Conference in Germany available. In his talk, he discussed getting started with Prometheus, how it differs from other monitoring concepts, and provided examples of how to monitor and alert. We’ll link to the video of the talk when it’s available.

Using Grafana with SiriDB: Jeroen van der Heijden has written an in-depth tutorial to help you visualize data from SiriDB, an open source TSDB, in Grafana. This tutorial will get you familiar with setting up SiriDB and provides a sample dashboard to help you get started.

Real-Time Monitoring with Grafana, StatsD and InfluxDB – Artur Caliendo Prado: This is a video from a talk at The Conf, held in Brazil. Artur’s presentation focuses on the experiences they had building a monitoring stack at Youse, how their monitoring became more complex as they scaled, and the platform they built to make sense of their data.

Using Grafana & InfluxDB to view XIV Host Performance Metrics – Part 4 Array Stats: This is the fourth part in a series of posts about host performance metrics. This post dives into array stats to identify workloads and maintain balance across ports. Check out part 1, part 2 and part 3.


GrafanaCon Tickets are Going Fast

Tickets are going fast for GrafanaCon EU, but we still have a seat reserved for you. Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

Plugin authors are often adding new features and fixing bugs, which will make your plugin perform better – so it’s important to keep your plugins up to date. We’ve made updating easy; for on-prem Grafana, use the grafana-cli tool (for example, grafana-cli plugins update-all), or update with 1 click if you’re using Hosted Grafana.

UPDATED PLUGIN

Hawkular data source – There is an important change in this release – as this datasource is now able to fetch not only Hawkular Metrics but also Hawkular Alerts, the server URL in the datasource configuration must be updated: http://myserver:123/hawkular/metrics must be changed to http://myserver:123/hawkular

Some of the changes (see the release notes for more details):

  • Allow per-query tenant configuration
  • Annotations can now be configured out of Availability metrics and Hawkular Alerts events in addition to string metrics
  • Allow dot character in tag names

Update

UPDATED PLUGIN

Diagram Panel – This is the first release in a while for the popular Diagram Panel plugin.

In addition to new features, this release also includes a number of bug fixes.

Update

UPDATED PLUGIN

Influx Admin Panel – received a number of improvements:

  • Fix issue always showing query results
  • When there is only one row, swap rows/cols (i.e. SHOW DIAGNOSTICS)
  • Improved auto-refresh behavior
  • Fix query time sorting
  • Show ‘status’ field (killed, etc.)

Update


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks and events coming soon. Hope to see you at one of these!

How to Use Open Source Projects for Performance Monitoring | Webinar
Nov. 29, 1pm EST:
Learn how you can use popular open source projects to monitor your infrastructure, applications, and cloud faster, easier, and at scale. In this webinar, Daniel Lee from Grafana Labs and Chris Churilo from InfluxData will walk you through everything step by step, from download and configuration to collecting metrics and building dashboards and alerts.

RSVP

KubeCon | Austin, TX – Dec. 6-8, 2017: We’re sponsoring KubeCon 2017! This is the must-attend conference for cloud native computing professionals. KubeCon + CloudNativeCon brings together leading contributors in:

  • Cloud native applications and computing
  • Containers
  • Microservices
  • Central orchestration processing
  • And more

Buy Tickets

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and the CFP is now open. There is no need to register; all are welcome. If you’re interested in speaking at FOSDEM, submit your talk now!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

We were glad to be a part of InfluxDays this year, and we look forward to seeing the InfluxData team in NYC in February.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

I enjoy writing these weekly roundups, but am curious how I can improve them. Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Event-Driven Computing with Amazon SNS and AWS Compute, Storage, Database, and Networking Services

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/event-driven-computing-with-amazon-sns-compute-storage-database-and-networking-services/

Contributed by Otavio Ferreira, Manager, Software Development, AWS Messaging

Like other developers around the world, you may be tackling increasingly complex business problems. A key success factor, in that case, is the ability to break down a large project scope into smaller, more manageable components. A service-oriented architecture guides you toward designing systems as a collection of loosely coupled, independently scaled, and highly reusable services. Microservices take this even further. To improve performance and scalability, they promote fine-grained interfaces and lightweight protocols.

However, the communication among isolated microservices can be challenging. Services are often deployed onto independent servers and don’t share any compute or storage resources. Also, you should avoid hard dependencies among microservices, to preserve maintainability and reusability.

If you apply the pub/sub design pattern, you can effortlessly decouple and independently scale out your microservices and serverless architectures. A pub/sub messaging service, such as Amazon SNS, promotes event-driven computing that statically decouples event publishers from subscribers, while dynamically allowing for the exchange of messages between them. An event-driven architecture also introduces the responsiveness needed to deal with complex problems, which are often unpredictable and asynchronous.

What is event-driven computing?

Given the context of microservices, event-driven computing is a model in which subscriber services automatically perform work in response to events triggered by publisher services. This paradigm can be applied to automate workflows while decoupling the services that collectively and independently work to fulfill these workflows. Amazon SNS is an event-driven computing hub in the AWS Cloud that has native integration with several AWS publisher and subscriber services.

Which AWS services publish events to SNS natively?

Several AWS services have been integrated as SNS publishers and, therefore, can natively trigger event-driven computing for a variety of use cases. In this post, I specifically cover AWS compute, storage, database, and networking services.

Compute services

  • Auto Scaling: Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can configure Auto Scaling lifecycle hooks to trigger events, as Auto Scaling resizes your EC2 cluster. As an example, you may want to warm up the local cache store on newly launched EC2 instances, and also download log files from other EC2 instances that are about to be terminated. To make this happen, set an SNS topic as your Auto Scaling group’s notification target, then subscribe two Lambda functions to this SNS topic. The first function is responsible for handling scale-out events (to warm up cache upon provisioning), whereas the second is in charge of handling scale-in events (to download logs upon termination). A sketch of such a handler appears after this list.

  • AWS Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and web services developed in a number of programming languages. You can configure event notifications for your Elastic Beanstalk environment so that notable events can be automatically published to an SNS topic, then pushed to topic subscribers. As an example, you may use this event-driven architecture to coordinate your continuous integration pipeline (such as Jenkins CI). That way, whenever an environment is created, Elastic Beanstalk publishes this event to an SNS topic, which triggers a subscribing Lambda function, which then kicks off a CI job against your newly created Elastic Beanstalk environment.

  • Elastic Load Balancing: Automatically distributes incoming application traffic across Amazon EC2 instances, containers, or other resources identified by IP addresses. You can configure CloudWatch alarms on Elastic Load Balancing metrics, to automate the handling of events derived from Classic Load Balancers. As an example, you may leverage this event-driven design to automate latency profiling in an Amazon ECS cluster behind a Classic Load Balancer. In this example, whenever your ECS cluster breaches your load balancer latency threshold, an event is posted by CloudWatch to an SNS topic, which then triggers a subscribing Lambda function. This function runs a task on your ECS cluster to trigger a latency profiling tool, hosted on the cluster itself. This can enhance your latency troubleshooting exercise by making it timely.
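
Here is a minimal sketch of the Auto Scaling handler described in the first bullet above, written as a Python Lambda function. The lifecycle message fields (LifecycleTransition, EC2InstanceId) follow the documented lifecycle-hook notification format; warm_cache and collect_logs are hypothetical stand-ins for your own logic.

```python
# Hypothetical Lambda handler for Auto Scaling lifecycle events arriving via SNS.
import json

def warm_cache(instance_id):
    print(f"warming cache on {instance_id}")      # placeholder for real logic

def collect_logs(instance_id):
    print(f"collecting logs from {instance_id}")  # placeholder for real logic

def handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        transition = message.get("LifecycleTransition")
        instance_id = message.get("EC2InstanceId")
        if transition == "autoscaling:EC2_INSTANCE_LAUNCHING":
            warm_cache(instance_id)    # scale-out event
        elif transition == "autoscaling:EC2_INSTANCE_TERMINATING":
            collect_logs(instance_id)  # scale-in event
```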

Storage services

  • Amazon S3: Object storage built to store and retrieve any amount of data. You can enable S3 event notifications, and automatically get them posted to SNS topics, to automate a variety of workflows. For instance, imagine that you have an S3 bucket to store incoming resumes from candidates, and a fleet of EC2 instances to encode these resumes from their original format (such as Word or text) into a portable format (such as PDF). In this example, whenever new files are uploaded to your input bucket, S3 publishes these events to an SNS topic, which in turn pushes these messages into subscribing SQS queues. Then, encoding workers running on EC2 instances poll these messages from the SQS queues; retrieve the original files from the input S3 bucket; encode them into PDF; and finally store them in an output S3 bucket. A sketch of one such worker appears after this list.

  • Amazon EFS: Provides simple and scalable file storage, for use with Amazon EC2 instances, in the AWS Cloud. You can configure CloudWatch alarms on EFS metrics, to automate the management of your EFS systems. For example, consider a highly parallelized genomics analysis application that runs against an EFS system. By default, this file system is instantiated on the “General Purpose” performance mode. Although this performance mode allows for lower latency, it might eventually impose a scaling bottleneck. Therefore, you may leverage an event-driven design to handle it automatically. Basically, as soon as the EFS metric “Percent I/O Limit” breaches 95%, CloudWatch could post this event to an SNS topic, which in turn would push this message into a subscribing Lambda function. This function automatically creates a new file system, this time on the “Max I/O” performance mode, then switches the genomics analysis application to this new file system. As a result, your application starts experiencing higher I/O throughput rates.

  • Amazon Glacier: A secure, durable, and low-cost cloud storage service for data archiving and long-term backup. You can set a notification configuration on an Amazon Glacier vault so that when a job completes, a message is published to an SNS topic. Retrieving an archive from Amazon Glacier is a two-step asynchronous operation, in which you first initiate a job, and then download the output after the job completes. Therefore, SNS helps you eliminate polling your Amazon Glacier vault to check whether your job has completed. As usual, you may subscribe SQS queues, Lambda functions, and HTTP endpoints to your SNS topic, to be notified when your Amazon Glacier job is done.

  • AWS Snowball: A petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data. You can leverage Snowball notifications to automate workflows related to importing data into and exporting data from AWS. More specifically, whenever your Snowball job status changes, Snowball can publish this event to an SNS topic, which in turn can broadcast the event to all its subscribers. As an example, imagine a Geographic Information System (GIS) that distributes high-resolution satellite images to users via a web browser. In this example, the GIS vendor could capture up to 80 TB of satellite images; create a Snowball job to import these files from an on-premises system to an S3 bucket; and provide an SNS topic ARN to be notified upon job status changes in Snowball. After Snowball changes the job status from “Importing” to “Completed”, Snowball publishes this event to the specified SNS topic, which delivers this message to a subscribing Lambda function, which finally creates a CloudFront web distribution for the target S3 bucket, to serve the images to end users.
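
As a rough sketch of the resume-encoding pipeline from the S3 bullet above, here is what one encoding worker’s polling loop might look like with boto3. The queue URL is a placeholder, and the code assumes the SQS queue is subscribed to the SNS topic without raw message delivery, so each SQS body carries an SNS envelope around the original S3 event.

```python
# Hypothetical worker: poll SQS for S3-upload events fanned out through SNS.
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/resume-jobs"  # placeholder

def poll_once():
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])          # SNS envelope
        s3_event = json.loads(envelope["Message"])  # original S3 event
        for rec in s3_event.get("Records", []):
            bucket = rec["s3"]["bucket"]["name"]
            key = rec["s3"]["object"]["key"]
            s3.download_file(bucket, key, "/tmp/input")
            # ...encode /tmp/input to PDF and upload it to the output bucket...
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```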

Database services

  • Amazon RDS: Makes it easy to set up, operate, and scale a relational database in the cloud. RDS leverages SNS to broadcast notifications when RDS events occur. As usual, these notifications can be delivered via any protocol supported by SNS, including SQS queues, Lambda functions, and HTTP endpoints. As an example, imagine that you own a social network website that has experienced organic growth, and needs to scale its compute and database resources on demand. In this case, you could provide an SNS topic to listen to RDS DB instance events. When the “Low Storage” event is published to the topic, SNS pushes this event to a subscribing Lambda function, which in turn leverages the RDS API to increase the storage capacity allocated to your DB instance. The provisioning itself takes place within the specified DB maintenance window. A sketch of such a function appears after this list.

  • Amazon ElastiCache: A web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache can publish messages using Amazon SNS when significant events happen on your cache cluster. This feature can be used to refresh the list of servers on client machines connected to individual cache node endpoints of a cache cluster. For instance, an ecommerce website fetches product details from a cache cluster, with the goal of offloading a relational database and speeding up page load times. Ideally, you want to make sure that each web server always has an updated list of cache servers to which to connect. To automate this node discovery process, you can get your ElastiCache cluster to publish events to an SNS topic. Thus, when the ElastiCache event “AddCacheNodeComplete” is published, your topic then pushes this event to all subscribing HTTP endpoints that serve your ecommerce website, so that these HTTP servers can update their list of cache nodes.

  • Amazon Redshift: A fully managed data warehouse that makes it simple to analyze data using standard SQL and BI (Business Intelligence) tools. Amazon Redshift uses SNS to broadcast relevant events so that data warehouse workflows can be automated. As an example, imagine a news website that sends clickstream data to a Kinesis Firehose stream, which then loads the data into Amazon Redshift, so that popular news and reading preferences might be surfaced on a BI tool. At some point though, this Amazon Redshift cluster might need to be resized, and the cluster enters a read-only mode. Hence, this Amazon Redshift event is published to an SNS topic, which delivers this event to a subscribing Lambda function, which finally deletes the corresponding Kinesis Firehose delivery stream, so that clickstream data uploads can be put on hold. At a later point, after Amazon Redshift publishes the event that the maintenance window has been closed, SNS notifies a subscribing Lambda function accordingly, so that this function can re-create the Kinesis Firehose delivery stream, and resume clickstream data uploads to Amazon Redshift.

  • AWS DMS: Helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. DMS also uses SNS to provide notifications when DMS events occur, which can automate database migration workflows. As an example, you might create data replication tasks to migrate an on-premises MS SQL database, composed of multiple tables, to MySQL. Thus, if replication tasks fail due to incompatible data encoding in the source tables, these events can be published to an SNS topic, which can push these messages into a subscribing SQS queue. Then, encoders running on EC2 can poll these messages from the SQS queue, encode the source tables into a compatible character set, and restart the corresponding replication tasks in DMS. This is an event-driven approach to a self-healing database migration process.
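
To make the RDS example above concrete, here is a hedged sketch of a Lambda function that grows storage when a “Low Storage” event arrives. The message keys below ("Event Message", "Source ID") are assumptions about the RDS event notification format; verify them against the actual messages your topic receives before relying on this.

```python
# Hypothetical Lambda for the RDS "Low Storage" example. The SNS message
# parsing is an assumption; check the real RDS event format before use.
import json
import boto3

rds = boto3.client("rds")

def handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        if "low storage" in message.get("Event Message", "").lower():  # assumed key
            db_id = message["Source ID"]                               # assumed key
            desc = rds.describe_db_instances(DBInstanceIdentifier=db_id)
            size = desc["DBInstances"][0]["AllocatedStorage"]
            rds.modify_db_instance(
                DBInstanceIdentifier=db_id,
                AllocatedStorage=int(size * 1.5),  # grow storage by 50%
                ApplyImmediately=False,            # defer to the maintenance window
            )
```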

Networking services

  • Amazon Route 53: A highly available and scalable cloud-based DNS (Domain Name System). Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. You can set CloudWatch alarms and get automated Amazon SNS notifications when the status of your Route 53 health check changes. As an example, imagine an online payment gateway that reports the health of its platform to merchants worldwide, via a status page. This page is hosted on EC2 and fetches platform health data from DynamoDB. In this case, you could configure a CloudWatch alarm for your Route 53 health check, so that when the alarm threshold is breached, and the payment gateway is no longer considered healthy, then CloudWatch publishes this event to an SNS topic, which pushes this message to a subscribing Lambda function, which finally updates the DynamoDB table that populates the status page. This event-driven approach avoids any kind of manual update to the status page visited by merchants. A sketch of such a function appears after this list.

  • AWS Direct Connect (AWS DX): Makes it easy to establish a dedicated network connection from your premises to AWS, which can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. You can monitor physical DX connections using CloudWatch alarms, and send SNS messages when alarms change their status. As an example, when a DX connection state shifts to 0 (zero), indicating that the connection is down, this event can be published to an SNS topic, which can fan out this message to impacted servers through HTTP endpoints, so that they might reroute their traffic through a different connection instead. This is an event-driven approach to connectivity resilience.
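
Here is a minimal sketch of the Route 53 status-page function from the first bullet above. It relies on the documented CloudWatch alarm notification payload (NewStateValue, StateChangeTime); the table name and item keys are hypothetical placeholders.

```python
# Hypothetical Lambda: update a DynamoDB-backed status page when the
# Route 53 health-check alarm changes state.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("platform-status")  # placeholder table name

def handler(event, context):
    for record in event["Records"]:
        alarm = json.loads(record["Sns"]["Message"])   # CloudWatch alarm payload
        healthy = alarm.get("NewStateValue") == "OK"   # "ALARM" means unhealthy
        table.put_item(Item={
            "component": "payment-gateway",            # placeholder key
            "healthy": healthy,
            "updated": alarm.get("StateChangeTime", ""),
        })
```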

More event-driven computing on AWS

In addition to SNS, event-driven computing is also addressed by Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources. With CloudWatch Events, you can route each event type to one or more targets, including AWS Lambda functions, Amazon Kinesis streams, Amazon SQS queues, and Amazon SNS topics.

Many AWS services publish events to CloudWatch. As an example, you can get CloudWatch Events to capture events on your ETL (Extract, Transform, Load) jobs running on AWS Glue and push failed ones to an SQS queue, so that you can retry them later.
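
Here is a sketch of that Glue-to-SQS wiring with boto3. The event pattern fields are assumptions based on the published Glue event format, so verify the detail-type and state values against the CloudWatch Events documentation; the queue ARN is a placeholder, and the queue’s access policy must also allow CloudWatch Events to send messages to it.

```python
# Sketch: route failed AWS Glue job events to an SQS queue for later retry.
import json
import boto3

events = boto3.client("events")
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:failed-etl-jobs"  # placeholder

events.put_rule(
    Name="glue-job-failures",
    EventPattern=json.dumps({
        "source": ["aws.glue"],
        "detail-type": ["Glue Job State Change"],  # assumed detail-type
        "detail": {"state": ["FAILED"]},
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="glue-job-failures",
    Targets=[{"Id": "failed-jobs-queue", "Arn": QUEUE_ARN}],
)
```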

Conclusion

Amazon SNS is a pub/sub messaging service that can be used as an event-driven computing hub for AWS customers worldwide. By capturing events natively triggered by AWS services, such as EC2, S3, and RDS, you can automate and optimize all kinds of workflows, including scaling, testing, encoding, profiling, broadcasting, discovery, failover, and much more. The business use cases presented in this post ranged from recruiting websites to scientific research, geographic systems, social networks, retail websites, and news portals.

Start now by visiting Amazon SNS in the AWS Management Console, or by trying the AWS 10-Minute Tutorial, Send Fan-out Event Notifications with Amazon SNS and Amazon SQS.

 

Staying Busy Between Code Pushes

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/16/staying-busy-between-code-pushes/

Maintaining a regular cadence of pushing out releases, adding new features, implementing bug fixes and staying on top of support requests is important for any software to thrive, but it’s especially important for open source software due to its rapid pace. It’s easy to lose yourself in code and forget that events are happening all the time, in every corner of the world, where we can learn, share knowledge, and meet like-minded individuals to build better software together. There are so many amazing events we’d like to participate in, but there simply isn’t enough time (or budget) to fit them all in. Here’s what we’ve been up to recently, between code pushes.

Recent Events

Øredev Conference | Malmö, Sweden: Øredev is one of the biggest developer conferences in Scandinavia, and Grafana Labs jumped at the chance to be a part of it. In early November, Grafana Labs Principal Developer, Carl Bergquist, gave a great talk on “Monitoring for Everyone”, which discussed the concepts of monitoring and why everyone should care, different ways to monitor your systems, extending your monitoring to containers and microservices, and finally what to monitor and alert on. Watch the video of his talk below.

InfluxDays | San Francisco, CA: Dan Cech, our Director of Platform Services, spoke at InfluxDays in San Francisco on Nov 14, and Grafana Labs sponsored the event. InfluxDB is a popular data source for Grafana, so we wanted to connect to the InfluxDB community and show them how to get the most out of their data. Dan discussed building dashboards, choosing the best panels for your data, setting up alerting in Grafana and a few sneak peeks of the upcoming Grafana 5.0. The video of his talk is forthcoming, but Dan has made his presentation available.

PromCon | Munich, Germany: PromCon is the Prometheus-focused event of the year. In August, Carl Bergquist, had the opportunity to speak at PromCon and take a deep dive into Grafana and Prometheus. Many attendees at PromCon were already familiar with Grafana, since it’s the default dashboard tool for Prometheus, but Carl had a trove of tricks and optimizations to share. He also went over some major changes and what we’re currently working on.

CNCF Meetup | New York, NY: Grafana Co-founder and CEO, Raj Dutt, participated in a panel discussion with the folks of Packet and the Cloud Native Computing Foundation. The discussion focused on the success stories, failures, rationales and in-the-trenches challenges when running cloud native in private or non “public cloud” datacenters (bare metal, colocation, private clouds, special hardware or networking setups, compliance and security-focused deployments).

Percona Live | Dublin: Daniel Lee traveled to Dublin, Ireland this fall to present at the database conference Percona Live. There he showed the new native MySQL support, along with a number of upcoming features in Grafana 5.0. His presentation is available to download.

Big Monitoring Meetup | St. Petersburg, Russian Federation: Alexander Zobnin, our developer located in Russia, is the primary maintainer of our popular Zabbix plugin. He attended the Big Monitoring Meetup to discuss monitoring, Grafana dashboards and democratizing metrics.

Why observability matters – now and in the future | Webinar: Our own Carl Bergquist joined Neil Gehani, Director of Product at Weaveworks, to share best practices on how to get started with monitoring both your application and your infrastructure: start capturing metrics that matter, then aggregate and visualize them in a useful way that allows you to identify bottlenecks and proactively prevent incidents. View Carl’s presentation.

Upcoming Events

We’re going to maintain this momentum with a number of upcoming events, and hope you can join us.

KubeCon | Austin, TX – Dec. 6-8, 2017: We’re sponsoring KubeCon 2017! This is the must-attend conference for cloud native computing professionals. KubeCon + CloudNativeCon brings together leading contributors in:

  • Cloud native applications and computing
  • Containers
  • Microservices
  • Central orchestration processing
  • And more.

Buy Tickets

How to Use Open Source Projects for Performance Monitoring | Webinar
Nov. 29, 1pm EST:
Learn how you can use popular open source projects to monitor your infrastructure, applications, and cloud faster, easier, and at scale. In this webinar, Daniel Lee from Grafana Labs and Chris Churilo from InfluxData will walk you through everything step by step, from download and configuration to collecting metrics and building dashboards and alerts.

RSVP

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and the CFP is now open. There is no need to register; all are welcome. If you’re interested in speaking at FOSDEM, submit your talk now!

GrafanaCon EU

Last, but certainly not least, the next GrafanaCon is right around the corner. GrafanaCon EU (to be held in Amsterdam, Netherlands, March 1-2, 2018) is a two-day event with talks centered around Grafana and the surrounding ecosystem. In addition to the latest features and functionality of Grafana, you can expect to see and hear from members of the monitoring community like Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more. Head to grafanacon.org to see the latest speakers confirmed. We have speakers from Automattic, Bloomberg, CERN, Fastly, Tinder and more!

Conclusion

The Grafana Labs team is spread across the globe. Having a “post-geographic” company structure gives us the opportunity to take part in events wherever they may be held in the world. As our team continues to grow, we hope to take part in even more events, and hope you can find the time to join us.

Protect your Reputation with Email Pausing and Configuration Set Metrics

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/protect-your-reputation-with-email-pausing-and-configuration-set-metrics/

In August, we launched the reputation dashboard, which helps you track important metrics that could impact your ability to send emails. By monitoring the metrics in this dashboard, you can protect your sender reputation, which can increase the likelihood that the emails you send will reach your customers’ inboxes.

Today, we’re launching two features that build upon the capabilities of the reputation dashboard. The first is the ability to temporarily pause email sending, either at the configuration set level, or across your entire Amazon SES account. The second is the ability to export reputation metrics for individual configuration sets.

Email Pausing

Today’s update includes new API operations that can temporarily pause your ability to send email using Amazon SES. To disable email sending across your entire Amazon SES account, you can use the UpdateAccountSendingEnabled operation. To pause sending only for emails sent using a specific configuration set, you can use the UpdateConfigurationSetSendingEnabled operation.

Email pausing is helpful because Amazon SES uses automatic enforcement policies. If the bounce or complaint rates for your account are too high, your account is automatically placed on probation. If the bounce or complaint issues continue after the probation period has ended, your account may be suspended.

With email pausing, you can temporarily halt your ability to send email before your account is placed on probation. While your ability to send email is paused, you can identify the issues that were causing your account to register high bounce or complaint rates. You can then resume sending after the issues are resolved.

Email pausing helps ensure that your ability to send email using Amazon SES is not interrupted by enforcement issues, and that your sender reputation won’t be damaged by mistakes or unforeseen issues.

You can learn more about the UpdateAccountSendingEnabled and UpdateConfigurationSetSendingEnabled operations in the Amazon Simple Email Service API Reference.
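
For illustration, here is how the two operations look when called through boto3. This is a minimal sketch; the region and configuration set name are placeholders.

```python
# Pause and resume Amazon SES sending via the new operations (boto3).
import boto3

ses = boto3.client("ses", region_name="us-west-2")  # placeholder region

# Pause sending for the whole account...
ses.update_account_sending_enabled(Enabled=False)

# ...or only for email sent using one configuration set.
ses.update_configuration_set_sending_enabled(
    ConfigurationSetName="marketing-emails",  # placeholder name
    Enabled=False,
)

# Resume once the underlying issue is resolved.
ses.update_account_sending_enabled(Enabled=True)
```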

Configuration Set Reputation Metrics

Amazon SES automatically publishes the bounce and complaint rates for your account to Amazon CloudWatch. In CloudWatch, you can monitor these metrics over time, and create alarms that notify you when your reputation metrics cross certain thresholds.

With today’s update, you can also publish reputation metrics for individual configuration sets to CloudWatch. This feature gives you additional information about the messages you send using Amazon SES. For example, if you send all of your marketing emails using one configuration set, and your transactional emails using a different configuration set, you can view distinct reputation metrics for each type of email.

Because we anticipate that this feature will lead to the creation of many new configuration sets, we’re increasing the maximum number of configuration sets you can create from 50 to 10,000.

For more information about exporting reputation metrics for configuration sets, see Exporting Reputation Metrics for a Configuration Set to CloudWatch in the Amazon Simple Email Service Developer Guide.

Automating These Features

You can use AWS services—including Amazon SNS, AWS Lambda, and Amazon CloudWatch—to create a solution that automatically pauses email sending for your account when your overall reputation metrics cross a certain threshold. Or, to minimize disruption to your email sending program, you can pause email sending for a specific configuration set when the metrics for that configuration set cross a threshold. The following image illustrates the processes that occur when you implement these solutions.

A flow diagram that illustrates a solution for automatically pausing Amazon SES email sending. Amazon SES provides reputation metrics to CloudWatch. If those metrics exceed a threshold, a CloudWatch alarm is triggered, which triggers an SNS topic. The SNS topic sends notifications (email, SMS), and executes a Lambda function, which pauses email sending in SES.

For more information on both of these solutions, see Automatically Pausing Email Sending in the Amazon Simple Email Service Developer Guide.
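
As a sketch of the alarm half of this automation, the following boto3 call creates a CloudWatch alarm on the account-level bounce-rate reputation metric and points it at an SNS topic (placeholder ARN); a Lambda function subscribed to that topic would then call update_account_sending_enabled(Enabled=False) as shown earlier. The 5% threshold and hourly period are illustrative assumptions, not SES enforcement numbers.

```python
# Alarm when the SES account-level bounce rate exceeds an example threshold.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ses-bounce-rate-high",
    Namespace="AWS/SES",
    MetricName="Reputation.BounceRate",
    Statistic="Average",
    Period=3600,                  # evaluate hourly (illustrative)
    EvaluationPeriods=1,
    Threshold=0.05,               # 5% bounce rate (illustrative)
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:pause-ses-sending"],  # placeholder
)
```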

We’re always looking for ways to help safeguard the reputation you’ve worked hard to build. If you have suggestions, questions, or comments, we’d love to hear from you in the comments below, or in the Amazon SES Forum.

These features are now available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and EU (Ireland).

timeShift(GrafanaBuzz, 1w) Issue 21

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/10/timeshiftgrafanabuzz-1w-issue-21/

This week the Stockholm team was in Malmö, Sweden for Øredev – one of the biggest developer conferences in Scandinavia – while the rest of Grafana Labs had to live vicariously through Twitter posts. We also announced a collaboration with Microsoft’s Azure team to create an official Azure data source plugin for Grafana, and revealed the next block of speakers at GrafanaCon. Awesome week!


Photos from Øredev


Latest Release

Grafana 4.6.1 adds some bug fixes:

  • Singlestat: Lost thresholds when using save dashboard as #9681
  • Graph: Fix for series override color picker #9715
  • Go: build using golang 1.9.2 #9713
  • Plugins: Fixed problem with loading plugin js files behind auth proxy #9509
  • Graphite: Annotation tooltip should render empty string when undefined #9707

Download Grafana 4.6.1 Now


From the Blogosphere

Grafana Launches Microsoft Azure Data Source: In this article, Grafana Labs co-founder and CEO Raj Dutt talks about the new Azure data source for Grafana, the collaboration between the teams, and how much he admires Microsoft’s embrace of open source software.

Monitor Azure Services and Applications Using Grafana: Continuing the theme of Microsoft Azure, the Azure team published an article about the collaboration and the resulting plugin. Ashwin discusses what prompted the project and shares some links to help you dive deeper into getting up and running.

Monitoring for Everyone: It only took one day for the organizers of the Øredev Conference to start publishing videos of the talks. Bravo! Carl Bergquist’s talk is a great overview of the whys, whats, and hows of monitoring.

Eight years of Go: This article is in honor of Go celebrating 8 years, and discusses the growth and popularity of the language. We are thrilled to be in such good company in the “Go’s impact in open source” section. Congrats, and we wish you many more years of success!

A DIY Dashboard with Grafana: Christoph wanted to experiment with how to feed time series from his own code into a Grafana dashboard. He wrote a proof of concept called grada to connect any Go code to a Grafana dashboard panel.

Visualize Time-Series Data with Open Source Grafana and InfluxDB: Our own Carl Bergquist co-authored an article with Gunnar Aasen from InfluxData on using Grafana with InfluxDB. This is a follow-up to a webinar the two participated in earlier in the year.


GrafanaCon EU

Planning for GrafanaCon EU is rolling right along, and we’re excited to announce a new block of speakers! We’ll continue to confirm speakers regularly, so keep an eye on grafanacon.org. Here are the latest additions:

Stig Sorensen
HEAD OF TELEMETRY
BLOOMBERG

Sean Hanson
SOFTWARE DEVELOPER
BLOOMBERG

Utkarsh Bhatnagar
SR. SOFTWARE ENGINEER
TINDER

Borja Garrido
PROJECT ASSOCIATE
CERN

Abhishek Gahlot
SOFTWARE ENGINEER
Automattic

Anna MacLachlan
CONTENT MARKETING MANAGER
Fastly

Gerlando Piro
FRONT END DEVELOPER
Fastly

GrafanaCon Tickets are Available!

Now that you’re getting a glimpse of who will be speaking, lock in your seat for GrafanaCon EU today! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks lined up this November. Hope to see you at one of these events!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Pretty awesome to have the co-founder of Kubernetes tweet about Grafana!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Well, that wraps up another week! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Me on the Equifax Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/me_on_the_equif.html

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the

Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015). My popular newsletter Crypto-Gram and my blog Schneier on Security are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government, where I teach Internet security policy, and a Fellow at the Berkman-Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies and other businesses vulnerable to fraud. As a result, all 143 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their name and credit rating.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for their websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and that they were at increased risk of identity theft. Equifax opened a website to aid customers, but the poor security around that site — it was at a domain separate from the Equifax domain — invited fraudulent imitators and even more damage to victims. At one point, the official Equifax communications even directed people to that fraudulent site.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us — almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, who they sell personal data to or why. If anyone knows about them at all, it’s about their business as a credit bureau, not their business as a data broker. Their website lists 57 different offerings for business: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt-out” don’t work with data brokers. The process is confusing, and doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them — which can be months after the breaches occur — and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that have our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax or does the blame belong to another of the thousands of companies who have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

Markets work because buyers choose from a variety of sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with the companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5%, or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals suffer from these breaches.

Specifically, Congress should move forward with legislative proposals that establish a nationwide “credit freeze” — which is better described as changing the default for disclosure from opt-out to opt-in — and free lifetime credit monitoring services. By this I do not mean merely giving consumers free credit-freeze options, a proposal by Senators Warren and Schatz, but making a credit freeze the default.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace the Social Security number. That would make the system of identification even more brittle. Better is to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the more rigorous EU privacy laws. The EU General Data Protection Regulation will come into force in May 2018, requiring even more security and privacy controls for companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcasting Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach means a huge black eye and a temporary stock dip for Equifax — this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

Sky: People Can’t Pirate Live Soccer in the UK Anymore

Post Syndicated from Andy original https://torrentfreak.com/sky-people-cant-pirate-live-soccer-in-the-uk-anymore-171108/

The commotion over the set-top box streaming phenomenon is showing no signs of dying down and if day one at the Cable and Satellite Broadcasting Association of Asia (CASBAA) Conference 2017 was anything to go by, things are only heating up.

Held at Studio City in Macau, the conference has a strong anti-piracy element and was opened by Joe Welch, CASBAA Board Chairman and SVP Public Affairs Asia, 21st Century Fox. He began Tuesday by noting the important recent launch of a brand new anti-piracy initiative.

“CASBAA recently launched the Coalition Against Piracy, funded by 18 of the region’s content players and distribution partners,” he said.

TF reported on the formation of the coalition mid-October. It includes heavyweights such as Disney, Fox, HBO, NBCUniversal and BBC Worldwide, and will have a strong focus on the illicit set-top box market.

Illegal streaming devices (or ISDs, as the industry calls them) were directly addressed in a segment yesterday afternoon titled Face To Face. Led by Dr. Ros Lynch, Director of Copyright & IP Enforcement at the UK Intellectual Property Office, the session detailed the “onslaught of online piracy” and the rise of ISDs that is apparently “shaking the market”.

Given the apparent gravity of those statements, the following will probably come as a surprise. According to Lynch, the UK IPO sought the opinion of UK-based rightsholders about the pirate box phenomenon a while back after being informed of their popularity in the East. The response was that pirate boxes weren’t an issue. It didn’t take long, however, for things to blow up.

“The UKIPO provides intelligence and evidence to industry and the Police Intellectual Property Crime Unit (PIPCU) in London who then take enforcement actions,” Lynch explained.

“We first heard about the issues with ISDs from [broadcaster] TVB in Hong Kong and we then consulted the UK rights holders who responded that it wasn’t a problem. Two years later the issue just exploded.”

The evidence of that in the UK isn’t difficult to find. In addition to millions of devices running both free Kodi add-on and subscription-based systems, the app market has bloomed too, offering free or near-free content to all.

This caught the eye of the Premier League who this year obtained two pioneering injunctions (1,2) to tackle live streams of football games. Streams are blocked by local ISPs in real-time, making illicit online viewing a more painful experience than it ever has been. No doubt progress has been made on this front, with thousands of streams blocked, but according to broadcaster Sky, the results are unprecedented.

“Site-blocking has moved the goalposts significantly,” said Matthew Hibbert, head of litigation at Sky UK.

“In the UK you cannot watch pirated live Premier League content anymore,” he said.

While progress has been good, the statement is overly enthusiastic. TF sources have been monitoring the availability of pirate streams on around a dozen illicit sites and services every Saturday (when it is actually illegal to broadcast matches in the UK) and service has been steady on around half of them and intermittent at worst on the rest.

There are hundreds of other platforms available so while many are definitely affected by Premier League blocking, it’s safe to assume that live football piracy hasn’t been wiped out. Nevertheless, it would be wrong to suggest that no progress has been made, in this and other related areas.

Kevin Plumb, Director of Legal Services at The Premier League, said that pubs showing football from illegal streams had also massively dwindled in numbers.

“In the past 18 months the illegal broadcasting of live Premier League matches in pubs in the UK has been decimated,” he said.

This result is almost certainly down to prosecutions taken in tandem with the Federation Against Copyright Theft (FACT), which have seen several landlords hit with large fines. Indeed, both sides of the market have been tackled, with both licensed premises and IPTV device sellers being targeted.

“The most successful thing we’ve done to combat piracy has been to undertake criminal prosecutions against ISD piracy,” said FACT chief Kieron Sharp yesterday. “Everyone is pleading guilty to these offenses.”

Most, if not all, FACT-led prosecutions target device and subscription sellers under fraud legislation, but that could change in the future, Lynch of the Intellectual Property Office said.

“While the UK works to update its legislation, we can’t wait for the new legislation to take enforcement actions and we rely heavily on ‘conspiracy to defraud’ charges, and have successfully prosecuted a number of ISD retailers,” she said.

Finally, information provided yesterday by networking company Cisco shed light on what it costs to run a subscription-based pirate IPTV operation.

Director of Intelligence & Security Operations Avigail Gutman said a pirate IPTV server offering 1,000 channels to around 1,000 subscribers can cost as little as 2,000 euros per month to run but can generate 12,000 euros in revenue during the same period.

“In April of 2017, ten major paid TV and content providers had relinquished 3.09 million euros per month to 285 ISD-based streaming pirate syndicates,” she said.

There’s little doubt that IPTV piracy, both paid and free, is here to stay. The big question is how it will be tackled short and long-term and whether any changes in legislation will have any unintended knock-on effects.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Backing Up the Modern Enterprise with Backblaze for Business

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-solutions/

Endpoint backup diagram

Organizations of all types and sizes need reliable and secure backup. Whether they have as few as 3 or as many as 300,000 computer users, an organization’s computer data is a valuable business asset that needs to be protected.

Modern organizations are changing how they work and where they work, which brings new challenges to making sure that a company’s data assets are not only available, but secure. Larger organizations have IT departments that are prepared to address these needs, but in smaller and newer organizations the challenge often falls upon office management, who might not be as prepared or knowledgeable in the face of a work environment undergoing dramatic changes.

Whether small or large, local or world-wide, for-profit or non-profit, organizations need a backup strategy and solution that matches the new ways of working in the enterprise.

The Enterprise Has Changed, and So Has Data Use

More and more, organizations are working in the cloud. These days organizations can operate just fine without their own file servers, database servers, mail servers, or other IT infrastructure that used to be standard for all but the smallest organization.

The reality is that for most organizations, though, it’s a hybrid work environment, with a combination of cloud-based and PC and Macintosh-based applications. Legacy apps aren’t going away any time soon. They will be with us for a while, with their accompanying data scattered amongst all the desktops, laptops and other endpoints in corporate headquarters, home offices, hotel rooms, and airport waiting areas.

In addition, the modern workforce likely combines regular full-time employees, remote workers, contractors, and sometimes interns, volunteers, and other temporary workers who also use company IT assets.

The Modern Enterprise Brings New Challenges for IT

These changes in how enterprises work present a problem for anyone tasked with making sure that data — no matter who uses it or where it lives — is adequately backed up. Cloud-based applications, when properly used and managed, can be adequately backed up, provided that users are connected to the internet and data transfers occur regularly — which is not always the case. But what about the data on the laptops, desktops, and devices used by remote employees, contractors, or just employees whose work keeps them on the road?

The organization’s backup solution must address all the needs of the modern organization or enterprise using both cloud and PC and Mac-based applications, and not be constrained by employee or computer location.

A Ten-Point Backup Checklist for the Modern Enterprise

What should the modern enterprise look for when evaluating a backup solution?

1) Easy to deploy to workers’ computers

Whether installed by the computer user or an IT person locally or remotely, the backup solution must be easy to implement quickly with minimal demands on the user or administrator.

2) Fast and unobtrusive client software

Backups should happen in the background by efficient (native) PC and Macintosh software clients that don’t consume valuable processing power or take memory away from applications the user needs.

3) Easy to configure

The backup solutions must be easy to configure for both the user and the IT professional. Ease-of-use means less time to deploy, configure, and manage.

4) Defaults to backing up all valuable data

By default, the solution backs up commonly used files and folders or directories, including desktops. Some backup solutions are difficult and intimidating because they require that the user choose what needs to be backed up, often missing files and folders/directories that contain valuable data.

5) Works automatically in the background

Backups should happen automatically, no matter where the computer is located. The computer user, especially the remote or mobile one, shouldn’t be required to attach cables or drives, or remember to initiate backups. A working solution backs up automatically without requiring action by the user or IT administrator.

6) Data restores are fast and easy

Whether it’s a single file, directory, or an entire system that must be restored, a user or IT sysadmin needs to be able to restore backed up data as quickly as possible. In cases of large restores to remote locations, the ability to send a restore via physical media is a must.

7) No limitations on data

Throttling, caps, and data limits complicate backups and require guesses about how much storage space will be needed.

8) Safe & Secure

Organizations require that their data is secure during all phases of initial upload, storage, and restore.

9) Easy-to-manage

The backup solution needs to provide a clear and simple web management interface for all functions. Designing for ease-of-use leads to efficiency in management and operation.

10) Affordable and transparent pricing

Backup costs should be predictable, understandable, and without surprises.

Two Scenarios for the Modern Enterprise

Enterprises exist in many forms and types, but wanting to meet the above requirements is common across all of them. Below, we take a look at two common scenarios showing how enterprises face these challenges. Three case studies are available that provide more information about how Backblaze customers have succeeded in these environments.

Enterprise Profile 1

The needs of a smaller enterprise differ from those of larger, established organizations. This organization likely doesn’t have anyone who is devoted full-time to IT. The job of on-boarding new employees and getting them set up with a computer likely falls upon an executive assistant or office manager. This person might give new employees a checklist with the software and account information and let users handle setting up the computer themselves.

Organizations in this profile need solutions that are easy to install and require little to no configuration. Backblaze, by default, backs up all user data, which lets the organization be secure in knowing all the data will be backed up to the cloud — including files left on the desktop. Combined with Backblaze’s unlimited data policy, organizations have a truly “set it and forget it” platform.

Customizing Groups To Meet Teams’ Needs

The Groups feature of Backblaze for Business allows an organization to decide whether an individual client’s computer will be Unmanaged (backups and restores under the control of the worker), or Managed, in which an administrator can monitor the status and frequency of backups and handle restores should they become necessary. One group for the entire organization might be adequate at this stage, but the organization has the option to add additional groups as it grows and needs more flexibility and control.

The organization, of course, has the choice of managing and monitoring users using Groups. With Backblaze’s Groups, organizations can set user-based access rules, which allows the administrator to create restores for lost files or entire computers on an employee’s behalf, to centralize billing for all client computers in the organization, and to redeploy a recovered computer or new computer with the backed up data.

Restores

In this scenario, the decision has been made to let each user manage her own backups, including restores, if necessary, of individual files or entire systems. If a restore of a file or system is needed, the restore process is easy enough for the user to handle it by herself.

Case Study 1

Read about how PagerDuty uses Backblaze for Business in a mixed enterprise of cloud and desktop/laptop applications.

PagerDuty Case Study

In a common approach, the employee can retrieve an accidentally deleted file or an earlier version of a document on her own. The Backblaze for Business interface is easy to navigate and was designed with feedback from thousands of customers over the course of a decade.

In the event of a lost, damaged, or stolen laptop, administrators of Managed Groups can initiate the restore, which could be in the form of a download of a restore ZIP file from the web management console, or the overnight shipment of a USB drive directly to the organization or user.

Enterprise Profile 2

This profile is for an organization with a full-time IT staff. When a new worker joins the team, the IT staff is tasked with configuring the computer and delivering it to the new employee.

Backblaze for Business Groups

Case Study 2

Global charitable organization charity: water uses Backblaze for Business to back up workers’ and volunteers’ laptops as they travel to developing countries in their efforts to provide clean and safe drinking water.

charity: water Case Study

This organization can take advantage of additional capabilities in Groups. A Managed Group makes sense in an organization with a geographically dispersed work force as it lets IT ensure that workers’ data is being regularly backed up no matter where they are. Billing can be company-wide or assigned to individual departments or geographical locations. The organization has the choice of how to divide the organization into Groups (location, function, subsidiary, etc.) and whether the Group should be Managed or Unmanaged. Using Managed Groups might be suitable for most of the organization, but there are exceptions in which sensitive data might dictate using an Unmanaged Group, such as could be the case with HR, the executive team, or finance.

Deployment

By Invitation Email, Link, or Domain

Backblaze for Business allows a number of options for deploying the client software to workers’ computers. Client installation is fast and easy on both Windows and Macintosh, so sending email invitations to users or automatically enrolling users by domain or invitation link is a common approach.

By Remote Deployment

IT might choose to remotely and silently deploy Backblaze for Business across specific Groups or the entire organization. An administrator can silently deploy the Backblaze backup client via the command-line, or use common RMM (Remote Monitoring and Management) tools such as Jamf and Munki.
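As a rough illustration of what such a scripted rollout can look like, here is a minimal Python sketch of the kind of job an administrator might hand to an RMM tool. The installer path, the -silent and -grouptoken flags, and the token value are hypothetical placeholders, not documented Backblaze options; consult the Backblaze for Business documentation for the actual silent-install syntax.

```python
import subprocess

# Hypothetical values: the installer path, flags, and group token below are
# illustrative placeholders, not documented Backblaze options.
INSTALLER = r"C:\deploy\backblaze_installer.exe"  # staged on the machine by the RMM tool
GROUP_TOKEN = "EXAMPLE-GROUP-TOKEN"               # ties the client to a Backblaze Group

def silent_install() -> None:
    """Run the backup client installer with no user interaction."""
    result = subprocess.run(
        [INSTALLER, "-silent", "-grouptoken", GROUP_TOKEN],
        capture_output=True,
        text=True,
        timeout=600,  # fail the job rather than hang a deployment queue
    )
    result.check_returncode()  # raise CalledProcessError if the installer failed

if __name__ == "__main__":
    silent_install()
```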

Restores

Case Study 3

Read about how Bright Bear Technology Solutions, an IT Managed Service Provider (MSP), uses the Groups feature of Backblaze for Business to manage customer backups and restores, deploy Backblaze licenses to their customers, and centralize billing for all their client-based backup services.

Bright Bear Case Study

Some organizations are better equipped to manage or assist workers when restores become necessary. Individual users will be pleased to discover they can roll back files to an earlier version if they wish, but IT will likely manage any complete system restore that involves reconfiguring a computer after a repair or requisitioning an entirely new system when needed.

This organization might choose to retain a client’s entire computer backup for archival purposes, using Backblaze B2 as the cloud storage solution. This is another advantage of having a cloud storage provider that combines both endpoint backup and cloud object storage among its services.

The Next Step: Server Backup & Data Archiving with B2 Cloud Storage

As organizations grow, they have increased needs for cloud storage beyond Macintosh and PC data backup. Backblaze’s object cloud storage, Backblaze B2, provides low-cost storage and archiving of records, media, and server data that can grow with the organization’s size and needs.

B2 Cloud Storage is available through the same Backblaze management console as Backblaze Computer Backup. This means that Admins have one console for billing, monitoring, deployment, and role provisioning. B2 is priced at 1/4 the cost of Amazon S3, or $0.005 per month per gigabyte (which equals $5/month per terabyte).
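To make that rate concrete, here is a quick back-of-the-envelope check in Python. It covers storage charges only; B2’s separate download and transaction fees are ignored, and a terabyte is treated as 1,000 GB for a round-number estimate.

```python
# Sanity check of the published B2 storage rate: $0.005 per GB per month.
# Storage charges only; download and transaction fees are not included.
RATE_PER_GB_MONTH = 0.005

def monthly_storage_cost(gigabytes: float) -> float:
    """Estimated monthly B2 storage bill for a given amount of data."""
    return gigabytes * RATE_PER_GB_MONTH

for tb in (1, 10, 100):
    gb = tb * 1000  # treating 1 TB as 1,000 GB
    print(f"{tb:>3} TB -> ${monthly_storage_cost(gb):,.2f}/month")

# 1 TB -> $5.00/month, matching the $5/month-per-terabyte figure above.
```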

Why Modern Enterprises Chose Backblaze

Backblaze for Business

Businesses and organizations select Backblaze for Business for backup because Backblaze is designed to meet the needs of the modern enterprise. Backblaze customers are part of a platform that has a 10+ year track record of innovation and over 400 petabytes of customer data already under management.

Backblaze’s backup model is proven through head-to-head comparisons to back up data that other backup solutions overlook in their default configurations — including valuable files that are needed after an accidental deletion, theft, or computer failure.

Backblaze is the only enterprise-level backup company that provides TOTP (Time-based One-time Password) verification via both SMS and an authenticator app to all accounts at no incremental charge. At just $50/year/computer, Backblaze is affordable for any size of enterprise.

Modern Enterprises can Meet The Challenge of The Changing Data Environment

With the right backup solution and strategy, the modern enterprise will be prepared to ensure that its data is protected from accident, disaster, or theft, whether that data sits in one office or is dispersed among many locations and among remote and mobile employees.

Backblaze for Business is an affordable solution that enables organizations to meet the evolving data demands facing the modern enterprise.

The post Backing Up the Modern Enterprise with Backblaze for Business appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Using taxis to monitor air quality in Peru

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/air-quality-peru/

When James Puderer moved to Lima, Peru, his roadside runs left a rather nasty taste in his mouth. Hit by the pollution from old diesel cars in the area, he decided to monitor the air quality in his new city using Raspberry Pis and the abundant taxis as his tech carriers.

Taxi Datalogger – Assembly

How to assemble the enclosure for my Taxi Datalogger project: https://www.hackster.io/james-puderer/distributed-air-quality-monitoring-using-taxis-69647e

Sensing air quality in Lima

Luckily for James, almost all taxis in Lima are equipped with the standard hollow vinyl roof sign seen in the video above, which makes them ideal for hacking.

Using a Raspberry Pi alongside various Adafruit tech including the BME280 Temperature/Humidity/Pressure Sensor and GPS Antenna, James created a battery-powered retrofit setup that fits snugly into the vinyl sign.

The schematic of the air quality monitor tech inside the taxi sign

With the onboard tech, the device collects data on longitude, latitude, humidity, temperature, pressure, and airborne particle count, feeding it back to an Android Things datalogger. This data is then pushed to Google IoT Core, where it can be remotely accessed.
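James’s datalogger runs on Android Things, but the shape of the pipeline is easy to sketch in Python for a Raspberry Pi carrying the same BME280 sensor. In this rough sketch the broker address and topic are placeholder assumptions, and a real Google IoT Core connection would additionally need JWT-based authentication and its device-specific topic scheme:

```python
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt
from bme280 import BME280        # pip install pimoroni-bme280
from smbus2 import SMBus         # pip install smbus2

BROKER = "broker.example.com"    # placeholder; IoT Core also requires JWT auth
TOPIC = "taxi/air-quality"       # placeholder topic

bus = SMBus(1)                   # the Raspberry Pi's I2C bus 1
sensor = BME280(i2c_dev=bus)

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()              # handle network traffic in a background thread

while True:
    reading = {
        "timestamp": time.time(),
        "temperature_c": sensor.get_temperature(),
        "pressure_hpa": sensor.get_pressure(),
        "humidity_pct": sensor.get_humidity(),
        # GPS position and particle counts would be merged in from their own drivers
    }
    client.publish(TOPIC, json.dumps(reading))
    time.sleep(60)               # one sample per minute
```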

Next, the data is processed by Google Dataflow and turned into a BigQuery table. Users can then visualize the collected measurements. And while James uses Google Maps to analyse his data, there are many tools online that will allow you to organise and study your figures depending on what final result you’re hoping to achieve.
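Once readings land in BigQuery, finding pollution hotspots is a single query. Below is a sketch using the google-cloud-bigquery Python client; the project, dataset, table, and column names are hypothetical stand-ins for whatever the Dataflow job actually writes:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Hypothetical table and column names -- substitute your own schema.
QUERY = """
    SELECT
        ROUND(latitude, 3)  AS lat_bin,
        ROUND(longitude, 3) AS lon_bin,
        AVG(particle_count) AS avg_particles
    FROM `my-project.air_quality.readings`
    WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY lat_bin, lon_bin
    ORDER BY avg_particles DESC
    LIMIT 20
"""

client = bigquery.Client()
for row in client.query(QUERY).result():
    print(f"({row.lat_bin}, {row.lon_bin}): {row.avg_particles:.1f}")
```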

A heat map of James' local area showing air quality

James hopped in a taxi and took his monitor on the road, collecting results throughout the journey

James has provided the complete build process, including all tech ingredients and code, on his Hackster.io project page, and urges makers to create their own air quality monitor for their local area. He also plans on building upon the existing design by adding a 12V power hookup for connecting to the taxi, functioning lights within the sign, and companion apps for drivers.

Sensing the world around you

We’ve seen a wide variety of Raspberry Pi projects using sensors to track the world around us, such as Kasia Molga’s Human Sensor costume series, which reacts to air pollution by lighting up, and Clodagh O’Mahony’s Social Interaction Dress, which she created to judge how conversation and physical human interaction can be scored and studied.

Human Sensor

Kasia Molga’s Human Sensor — a collection of hi-tech costumes that react to air pollution within the wearer’s environment.

Many people also build their own Pi-powered weather stations, or use the Raspberry Pi Oracle Weather Station, to measure and record conditions in their towns and cities from the roofs of schools, offices, and homes.

Have you incorporated sensors into your Raspberry Pi projects? Share your builds in the comments below or via social media by tagging us.

The post Using taxis to monitor air quality in Peru appeared first on Raspberry Pi.

timeShift(GrafanaBuzz, 1w) Issue 20

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/03/timeshiftgrafanabuzz-1w-issue-20/

This week, in addition to rolling out a Grafana 4.6.1 release, we’ve been busy prepping for upcoming events. In Europe, we’ll be speaking at and sponsoring the sold-out Øredev Conference in Malmö, Sweden, Nov 7-11, and on the west coast, we’ll be speaking at and sponsoring InfluxDays, Nov 14 in San Francisco, CA. We hope to get a chance to say hi to you at one of these events.

We also closed the CFP window this week for talks at GrafanaCon EU. We received a tremendous number of great submissions, and will spend the next few weeks making our selections. As speakers are confirmed, we’ll add them to the website, so be sure to keep an eye out. We’re thrilled that the community is so excited to share their knowledge of Grafana and open source monitoring.


Latest Release

Grafana 4.6.1 adds some bug fixes:

  • Singlestat: Lost thresholds when using save dashboard as #9681
  • Graph: Fix for series override color picker #9715
  • Go: build using golang 1.9.2 #9713
  • Plugins: Fixed problem with loading plugin js files behind auth proxy #9509
  • Graphite: Annotation tooltip should render empty string when undefined #9707

Download Grafana 4.6.1 Now


From the Blogosphere

FOSDEM 2018 Monitoring & Cloud Devroom CFP: The CFP is now open for the Monitoring & Cloud Devroom at FOSDEM 2018, held in Brussels, Belgium, Feb 3-4, 2018. FOSDEM is the premier open source conference in Europe, and covers a broad range of topics. The Monitoring and Cloud devroom was popular last year, so be sure to submit your talk before the November 26 deadline!

PRTG plus Grafana FTW!: @neuralfraud has written a plugin for PRTG that allows you to view PRTG data directly in Grafana. This article goes over the features of the plugin, beautiful dashboards and visualization options, and how to get started.

Grafana-based GUI for mgstat, a system monitoring tool for InterSystems Caché, Ensemble or HealthShare: This is a continuation of the previous article, Making Prometheus Monitoring for InterSystems Caché, where we examine how to visualize the results from the mgstat tool. The article covers which stats are collected and how they are gathered.

Using Grafana & InfluxDB to view XIV Host Performance Metrics: Allan wanted a unified view of what his hosts were doing; however, the XIV GUI only allowed 12 hosts to be displayed at a time, which was extremely limiting. This is the first in a series of articles on how to gather and parse host data and visualize it in Grafana.

Service telemetry with Grafana and InfluxDB – Part I: Introduction: This is the first in a new series of posts which will walk you through the process of building a production-ready solution for monitoring real-time metrics for any application or service, complete with useful and beautiful dashboards.


GrafanaCon General Admission Now Available!

Early bird tickets are no longer available, but you can still lock in your seat for GrafanaCon! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

Keeping your Grafana plugins up to date is important. Plugin authors are often adding new features and fixing bugs, which will make your plugin perform better. We’ve made updating easy; for on-prem Grafana, use the Grafana-cli tool, or update with 1 click if you’re using Hosted Grafana.

UPDATED PLUGIN

Piechart Panel – The latest version of the Piechart Panel has the following fixes:

  • Add “No data points” description for pie chart with 0 value
  • Donut now works with transparent panel
  • Can toggle to hide series from piechart
  • On graph legend can show values. Can choose how many decimals
  • Sort pie slices upon sorting of legend entries
  • Fix for color picker for Grafana 4.6

Update


Contribution of the Week:

Each week we highlight some of the important contributions from our amazing open source community. Thank you for helping make Grafana better!

@akshaychhajed
We got an amazing PR this week. Grafana has lots of docker-compose files for internal testing that were created using a very old version of docker-compose. @akshaychhajed sent a PR converting them all to the latest version of docker-compose. Huge thanks from the Grafana team!


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks lined up this November. Hope to see you at one of these events!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Beautiful – I want to build a real-life version of this using a block of wood, some nails and colored string… or maybe have it cross-stitched on a pillow 🙂


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Well, that wraps up another week! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

New – AWS Direct Connect Gateway – Inter-Region VPC Access

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/

As I was preparing to write this post, I took a nostalgic look at the blog post I wrote when we launched AWS Direct Connect back in 2012. We created Direct Connect after our enterprise customers asked us to allow them to establish dedicated connections to an AWS Region in pursuit of enhanced privacy, additional data transfer bandwidth, and more predictable data transfer performance. Starting from one AWS Region and a single colo, Direct Connect is now available in every public AWS Region and accessible from dozens of colos scattered across the world (over 60 locations at last count). Our customers have taken to Direct Connect wholeheartedly and we have added features such as Link Aggregation, Amazon EFS support, CloudWatch monitoring, and HIPAA eligibility. In the past five weeks alone we have added Direct Connect locations in Houston (Texas), Vancouver (Canada), Manchester (UK), Canberra (Australia), and Perth (Australia).

Today we are making Direct Connect simpler and more powerful with the addition of the Direct Connect Gateway. We are also giving Direct Connect customers in any Region the ability to create public virtual interfaces that receive our global IP routes and enable access to the public endpoints for our services, and we are updating the Direct Connect pricing model.

Let’s take a look at each one!

New Direct Connect Gateway
You can use the new Direct Connect Gateway to establish connectivity that spans Virtual Private Clouds (VPCs) spread across multiple AWS Regions. You no longer need to establish multiple BGP sessions for each VPC; this reduces your administrative workload as well as the load on your network devices.

This feature also allows you to connect to any of the participating VPCs from any Direct Connect location, further reducing your costs for using AWS services on a cross-region basis.

Here is a diagram that illustrates the simplification that you can achieve with a Direct Connect Gateway (each “lock” icon represents a Virtual Private Gateway). Start with this:

And end up like this:

The VPCs that reference a particular Direct Connect Gateway must have IP address ranges that do not overlap. Today, the VPCs must all be in the same AWS account; we plan to make this more flexible in the future.

Each Gateway is a global object that exists across all of the public AWS Regions. All communication between the Regions via the Gateways takes place across the AWS network backbone.

Creating a Direct Connect Gateway
You can create a Direct Connect Gateway from the Direct Connect Console or by calling the CreateDirectConnectGateway function from your code. I’ll use the Console!

I open the Direct Connect Console and click on Direct Connect Gateways to get started:

The list is empty since I don’t have any Gateways yet. Click on Create Direct Connect Gateway to change that:

I give my Gateway a name, enter a private ASN for my network, then click on Create. The ASN (Autonomous System Number) must be in one of the ranges defined as private in RFC 6996:

My new Gateway will appear in the other AWS Regions within a moment or two:

I have a Direct Connect Connection in Ohio that I will use to create my VIF:

Now I create a private VIF that references the Gateway and the Connection:

It is ready to use within seconds:

I already have a pair of VPCs with non-overlapping CIDRs, and a Virtual Private Gateway attached to each one. Here are the VPCs (since this is a demo I’ll show both in the same Region for convenience):

And the Virtual Private Gateways:

I return to the Direct Connect Console and navigate to the Direct Connect Gateways. I select my Gateway and choose Associate Virtual Private Gateway from the Actions menu:

Then I select both of my Virtual Private Gateways and click on Associate:

If, as would usually be the case, my VPCs are in distinct AWS Regions, the same procedure would apply. For this blog post it was easier to show you the operations once rather than twice.

The Virtual Gateway association is complete within a minute or so (the state starts out as associating):

When the state transitions to associated, traffic can flow between your on-premises network and your VPCs, over your AWS Direct Connect connection, regardless of the AWS Regions where your VPCs reside.
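The same setup can be scripted. Below is a minimal boto3 sketch of the two API calls behind the console walkthrough above (CreateDirectConnectGateway was mentioned earlier as the programmatic entry point); the gateway name, ASN, and virtual-gateway IDs are placeholders:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-2")

# Create the gateway, a global object, with a private ASN from the RFC 6996 range.
response = dx.create_direct_connect_gateway(
    directConnectGatewayName="my-dx-gateway",   # placeholder name
    amazonSideAsn=64512,                        # 64512-65534 is the private 16-bit range
)
gateway_id = response["directConnectGateway"]["directConnectGatewayId"]

# Associate each VPC's Virtual Private Gateway with the Direct Connect Gateway.
for vgw_id in ("vgw-11111111", "vgw-22222222"):  # placeholder VGW IDs
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gateway_id,
        virtualGatewayId=vgw_id,
    )
```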

Public Virtual Interfaces for Service Endpoints
You can now create Public Virtual Interfaces that will allow you to access AWS public service endpoints for AWS services running in any AWS Region (except AWS China Region) over Direct Connect. These interfaces receive (via BGP) Amazon’s global IP routes. You can create these interfaces in the Direct Connect Console; start by selecting the Public option:

A public virtual interface is not associated with a specific VPC; once the BGP session is up, it receives Amazon’s global IP routes and can reach public AWS service endpoints in any Region.

Updated Pricing Model
In light of the ever-expanding number of AWS Regions and AWS Direct Connect locations, data transfer pricing is now based on the location of the Direct Connect and the source AWS Region. The new pricing is simpler than the older model, which was based on AWS Direct Connect locations.

Now Available
This new feature is available today and you can start to use it right now. You can create and use Direct Connect Gateways at no charge; you pay the usual Direct Connect prices for port hours and data transfer.

Jeff;


AWS Online Tech Talks – November 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-november-2017/

Leaves are crunching under my boots, Halloween is tomorrow, and pumpkin is having its annual moment in the sun – it’s fall everybody! And just in time to celebrate, we have whipped up a fresh batch of pumpkin spice Tech Talks. Grab your planner (Outlook calendar) and pencil these puppies in. This month we are covering re:Invent, serverless, and everything in between.

November 2017 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of November. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, November 6

Compute

9:00 – 9:40 AM PDT: Set it and Forget it: Auto Scaling Target Tracking Policies

Tuesday, November 7

Big Data

9:00 – 9:40 AM PDT: Real-time Application Monitoring with Amazon Kinesis and Amazon CloudWatch

Compute

10:30 – 11:10 AM PDT: Simplify Microsoft Windows Server Management with Amazon Lightsail

Mobile

12:00 – 12:40 PM PDT: Deep Dive on Amazon SES What’s New

Wednesday, November 8

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Compute

12:00 – 12:40 PM PDT: Run Your CI/CD Pipeline at Scale for a Fraction of the Cost

Thursday, November 9

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Containers

9:00 – 9:40 AM PDT: Managing Container Images with Amazon ECR

Big Data

12:00 – 12:40 PM PDT: Amazon Elasticsearch Service Security Deep Dive

Monday, November 13

re:Invent

10:30 – 11:10 AM PDT: AWS re:Invent 2017: Know Before You Go

5:00 – 5:40 PM PDT: AWS re:Invent 2017: Know Before You Go

Tuesday, November 14

AI

9:00 – 9:40 AM PDT: Sentiment Analysis Using Apache MXNet and Gluon

10:30 – 11:10 AM PDT: Bringing Characters to Life with Amazon Polly Text-to-Speech

IoT

12:00 – 12:40 PM PDT: Essential Capabilities of an IoT Cloud Platform

Enterprise

2:00 – 2:40 PM PDT: Everything you wanted to know about licensing Windows workloads on AWS, but were afraid to ask

Wednesday, November 15

Security & Identity

9:00 – 9:40 AM PDT: How to Integrate AWS Directory Service with Office365

Storage

10:30 – 11:10 AM PDT: Disaster Recovery Options with AWS

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Windows Workloads

Thursday, November 16

Serverless

9:00 – 9:40 AM PDT: Building Serverless Websites with Lambda@Edge

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Deploy .NET Code to AWS from Visual Studio

– Sara

timeShift(GrafanaBuzz, 1w) Issue 19

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/27/timeshiftgrafanabuzz-1w-issue-19/

This week, we were busy prepping for our latest stable release, Grafana 4.6! This is a sizeable release that adds some key new functionality, but there’s no time to pat ourselves on the back – now it’s time to focus on Grafana 5.0! In the meantime, find out more about what’s in 4.6 in our release blog post, and let us know what you think of the new features and enhancements.


Latest Release

Grafana 4.6 Stable is now available! The Grafana 4.6 release contains some exciting and much anticipated new additions:

  • The new Postgres Data Source
  • Create your own Annotations from the Graph panel
  • Cloudwatch Alerting Support
  • Prometheus query editor enhancements

Download Grafana 4.6 Stable Now


From the Blogosphere

Lyft’s Envoy dashboards: Lyft developed Envoy to relieve operational and reliability headaches. Envoy is a “service mesh” substrate that provides common utilities such as service discovery, load balancing, rate limiting, circuit breaking, stats, logging, tracing, etc. to application architectures. They’ve recently shared their Envoy dashboards, and walk you through their setup.

Monitoring Data in a SQL Table with Prometheus and Grafana: Joseph recently built a proof-of-concept to add monitoring and alerting on the results of a Microsoft SQL Server query. Since he knew he’d eventually want to monitor many other things, from many other sources, he chose Prometheus and Grafana as his starting point. In this article, he walks us through his steps of exposing SQL queries to Prometheus, collecting metrics, alerting, and visualizing the results in Grafana.

Crypto Exchange Trading Data: Discovering interesting public Grafana dashboards has been happening more and more lately. This week, I came across a dashboard visualizing trading data on the crypto exchanges. If you have a public dashboard you’d like shared, let us know.


GrafanaCon EU Early Bird is Ending

Early bird discounts will be ending October 31; this is your last chance to take advantage of the discounted tickets!

Get Your Early Bird Ticket Now


Grafana Plugins

Each week we review updated plugins to ensure code quality and compatibility before publishing them on grafana.com. This process can take time, and we appreciate all of the communication from plugin authors. This week we have two plugins that received some major TLC. These are two very popular plugins, so we encourage you to update. We’ve made updating easy; for on-prem Grafana, use the Grafana-cli tool, or update with 1 click if you are using Hosted Grafana.

UPDATED PLUGIN

Zabbix App Plugin – The Zabbix App Plugin just got a big update! Here are just a few of the changes:

  • PostgreSQL support for Direct DB Connection.
  • Triggers query mode, which allows counting active alerts by group, host and application, #141.
  • sortSeries() function that sorts multiple timeseries by name, #447, thanks to @mdorenkamp.
  • percentil() function, thanks to @pedrohrf.
  • Zabbix System Status example dashboard.

Update

UPDATED PLUGIN

Worldmap Panel Plugin – The Worldmap panel also got a new update. Zooming with the mouse wheel has been turned off, as it was too easy to accidentally zoom in when scrolling the page. You can zoom in with the mouse by either double-clicking or using shift+drag to zoom in on an area.

  • Support for new data source integration, the Dynamic JSON endpoint #103, thanks @LostInBrittany
  • Fix for using floats in thresholds #79, thanks @fabienpomerol
  • Turned off mouse wheel zoom

Update


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks lined up this November. Hope to see you at one of these events!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice – but dashboards are meant for sharing! You should upload that to our list of Icinga2 dashboards.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Well, that wraps up another week! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Now Available – Amazon Aurora with PostgreSQL Compatibility

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-aurora-with-postgresql-compatibility/

Late last year I told you about our plans to add PostgreSQL compatibility to Amazon Aurora. We launched the private beta shortly after that announcement, and followed it up earlier this year with an open preview. We’ve received lots of great feedback during the beta and the preview and have done our best to make sure that the product meets your needs and exceeds your expectations!

Now Generally Available
I am happy to report that Amazon Aurora with PostgreSQL Compatibility is now generally available and that you can use it today in four AWS Regions, with more to follow. It is compatible with PostgreSQL 9.6.3 and scales automatically to support up to 64 TB of storage, with 6-way replication behind the scenes to improve performance and availability.

Just like Amazon Aurora with MySQL compatibility, this edition is fully managed and is very easy to set up and to use. On the performance side, you can expect up to 3x the throughput that you’d get if you ran PostgreSQL on your own (you can read Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases to learn more about how we did this).

You can launch a PostgreSQL-compatible Amazon Aurora instance from the RDS Console by selecting Amazon Aurora as the engine and PostgreSQL-compatible as the edition, and clicking on Next:

Then choose your instance class, single or Multi-AZ deployment (good for dev/test and production, respectively), set the instance name, and the administrator credentials, and click on Next:

You can choose between six instance classes (2 to 64 vCPUs and 15.25 to 488 GiB of memory):

The db.r4 instance class is a new addition to Aurora and to RDS, and gives you an additional size at the top end. The db.r4.16xlarge will give you additional write performance, and may allow you to use a single Aurora database instead of two or more sharded databases.

You can also set many advanced options on the next page, starting with network options such as the VPC and public accessibility:

You can set the cluster name and other database options. Encryption is easy to use and enabled by default; you can use the built-in default master key or choose one of your own:

You can also set failover behavior, the retention period for snapshot backups, and choose to enable collection of detailed (OS-level) metrics via Enhanced Monitoring:

After you have set it up to your liking, click on Launch DB Instance to proceed!
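If you would rather script the launch than click through the console, the equivalent boto3 calls look roughly like this. Identifiers and the password are placeholders; this is a sketch of the minimal required parameters, not a production template:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# An Aurora deployment is a cluster (shared storage and endpoints) plus instances.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-demo",   # placeholder identifier
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    MasterUserPassword="REPLACE_ME",        # placeholder credential
    StorageEncrypted=True,                  # encryption is on by default in the console
)

# Add the primary instance; creating a second one in another AZ gives you Multi-AZ.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pg-demo-1",
    DBClusterIdentifier="aurora-pg-demo",
    DBInstanceClass="db.r4.large",
    Engine="aurora-postgresql",
)
```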

The new instances (primary and secondary since I specified Multi-AZ) are up and running within minutes:

Each PostgreSQL-compatible instance publishes 44 metrics to CloudWatch automatically:
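Those metrics are also reachable programmatically. The following boto3 sketch lists the published AWS/RDS metrics for an instance and pulls one of them; the instance identifier matches the placeholder used in the sketch above:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Enumerate the automatically published AWS/RDS metrics for one instance.
metrics = cloudwatch.list_metrics(
    Namespace="AWS/RDS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "aurora-pg-demo-1"}],
)
for metric in metrics["Metrics"]:
    print(metric["MetricName"])

# Fetch one of them: five-minute average CPU over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "aurora-pg-demo-1"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])
```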

With enhanced monitoring enabled, each instance collects additional per-instance and per-process metrics. It can be enabled when the instance is launched, or afterward, via Modify Instance. Here are some of the metrics collected when enhanced monitoring is enabled:

Clicking on Manage Graphs lets you choose which metrics are shown:

Per-process metrics are also available:

You can scale your read capacity by creating up to 15 Aurora replicas:

The cluster provides a single reader endpoint that you can access in order to load-balance requests across the replicas:
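In application code, spreading reads across the replicas is just a matter of which endpoint you connect to. A brief psycopg2 sketch; the endpoint, credentials, and table name are hypothetical:

```python
import psycopg2  # pip install psycopg2-binary

# Hypothetical reader endpoint -- copy the real one from the RDS console.
READER_ENDPOINT = "aurora-pg-demo.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

conn = psycopg2.connect(
    host=READER_ENDPOINT,
    port=5432,
    dbname="postgres",
    user="postgres",
    password="REPLACE_ME",  # placeholder credential
)
with conn, conn.cursor() as cur:
    # Read-only queries sent here are load-balanced across the Aurora replicas.
    cur.execute("SELECT count(*) FROM orders;")  # hypothetical table
    print(cur.fetchone()[0])
conn.close()
```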

Performance Insights
As I noted earlier, Performance Insights is turned on automatically. This Amazon Aurora feature is wired directly into the database engine and allows you to look deep inside of each query, seeing the database resources that it uses and how they contribute to the overall response time. Here’s the initial view:

I can slice the view by SQL query in order to see how many concurrent copies of each query are running:

There are more views and options than I can fit in this post; to learn more take a look at Using Performance Insights.

Migrating to Amazon Aurora with PostgreSQL Compatibility
AWS Database Migration Service and the Schema Conversion Tool are ready to help you to move data stored in commercial and open-source databases to Amazon Aurora. The Schema Conversion Tool will perform a quick assessment of your database schemas and your code in order to help you to choose between MySQL and PostgreSQL. Our new, limited-time, Free DMS program allows you to use DMS and SCT to migrate to Aurora at no cost, with access to several types of DMS Instances for up to 6 months.

If you are already using PostgreSQL, you will be happy to hear that we support a long list of extensions including PostGIS and dblink.

Available Now
You can use Amazon Aurora with PostgreSQL Compatibility today in the US East (Northern Virginia), EU (Ireland), US West (Oregon), and US East (Ohio) Regions, with others to follow as soon as possible.

Jeff;

Russian Site-Blocking Chiefs Under Investigation For Fraud

Post Syndicated from Andy original https://torrentfreak.com/russian-site-blocking-chiefs-under-investigation-for-fraud-171024/

Over the past several years, Rozcomnadzor has become a highly controversial government body in Russia. With responsibility for ordering web-blockades against sites the country deems disruptive, it’s effectively Russia’s online censorship engine.

In total, Rozcomnadzor has ordered the blocking of more than 82,000 sites. Within that total, at least 4,000 have been rendered inaccessible on copyright grounds, with an additional 41,000 innocent platforms blocked as collateral damage.

This massive over-blocking has been widely criticized in Russia but until now, Rozcomnadzor has appeared pretty much untouchable. However, a scandal is now engulfing the organization after at least four key officials were charged with fraud offenses.

News that something was potentially amiss began leaking out two weeks ago, when Russian publication Vedomosti reported on a court process in which the initials of the defendants appeared to coincide with officials at Rozcomnadzor.

The publication suspected that three men were involved: Rozcomnadzor spokesman Vadim Ampelonsky, head of the legal department Boris Yedidin, and Alexander Veselchakov, who acts as an advisor to the head of the department monitoring radio frequencies.

The prosecution’s case indicated that the defendants were involved in “fraud committed by an organized group either on an especially large scale or entailing the deprivation of citizens’ rights.” However, no further details were made available, with the head of Rozcomnadzor Alexander Zharov claiming he knew nothing about a criminal case and refusing to answer questions.

It later transpired that four employees had been charged with fraud, including Anastasiya Zvyagintseva, who acts as the general director of CRFC, an agency under the control of Rozcomnadzor.

According to Kommersant, Zvyagintseva’s involvement is at the core of the matter. She claims to have been forced to put “ghost employees” on the payroll, whose wages were then funneled to existing employees in order to increase their salaries.

The investigation into the scandal certainly runs deep. It’s reported that FSB officers have been spying on Rozcomnadzor officials for six months, listening to their phone conversations, monitoring their bank accounts, and even watching the ATMs they used.

Local media reports indicate that the illegal salary scheme ran from 2012 until February 2017 and involved some 20 million rubles ($347,000) of illegal payments. These were allegedly used to retain ‘valuable’ employees when their regular salaries were not lucrative enough to keep them at the site-blocking body.

While Zvyagintseva has been released pending trial, Ampelonsky, Yedidin, and Veselchakov have been placed under house arrest by the Chertanovsky Court of Moscow until November 7.

Rozcomnadzor’s website is currently inaccessible.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.