Tag Archives: autocomplete

Grafana v5.1 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/04/26/grafana-v5.1-released/

v5.1 Stable Release

The recent 5.0 major release contained a lot of new features, so the Grafana 5.1 release focuses on smoothing out the rough edges and iterating on some of those new features.

Download Grafana 5.1 Now

Release Highlights

Two new features are included: heatmap support for Prometheus and a new core data source for Microsoft SQL Server.

Another highlight is the revamp of the Grafana docker container, which makes it easier to run and control. Be aware, though, that there is a breaking change to file permissions that will affect existing containers with data volumes.

We got tons of useful improvement suggestions, bug reports and Pull Requests from our amazing community. Thank you all! See the full changelog for more details.

Improved Scrolling Experience

In Grafana v5.0 we introduced a new scrollbar component. Unfortunately, this introduced a lot of issues and in some scenarios removed
the native scrolling functionality. Grafana v5.1 ships with a native scrollbar for all pages, together with a scrollbar component for
the dashboard grid and panels that does not override the native scrolling functionality. We hope these changes
make the Grafana user experience much better!

Improved Docker Image

Grafana v5.1 brings an improved official docker image that is easier to run and use, and that gives the user more control over how it is run.

We have changed the id of the grafana user that runs Grafana inside the docker container. Unfortunately, this means that files created prior to 5.1 will not have the correct permissions for later versions, which makes this a breaking change. We made this change so that it would be easier for you to control which user Grafana is executed as.

Please read the updated documentation which includes migration instructions and more information.
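
If you keep Grafana data in a named volume or host directory created by a pre-5.1 container, a one-off ownership fix is likely needed. A minimal sketch, assuming the new grafana user id is 472 as described in the migration guide (the volume name is just an example):

docker run --rm --user root -v grafana-storage:/var/lib/grafana \
  busybox chown -R 472:472 /var/lib/grafana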

Heatmap Support for Prometheus

The Prometheus datasource now supports transforming Prometheus histograms to the heatmap panel. The Prometheus histogram is a powerful feature, and we’re
really happy to finally allow our users to render those as heatmaps. The Heatmap panel documentation
contains more information on how to use it.
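
As a hedged sketch of what this enables (the metric name is only an example), a cumulative Prometheus histogram can be queried by summing per-bucket rates, grouped by the le label:

sum(rate(http_request_duration_seconds_bucket[5m])) by (le)

With the query editor's Format option set to Heatmap and the result rendered in the heatmap panel, each le bucket becomes a row.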

Another improvement is that the Prometheus query editor now supports autocomplete for template variables. More information in the Prometheus data source documentation.

Microsoft SQL Server

Grafana v5.1 now ships with a built-in Microsoft SQL Server (MSSQL) data source plugin that allows you to query and visualize data from any
Microsoft SQL Server 2005 or newer, including Microsoft Azure SQL Database. Do you have metric or log data in MSSQL? You can now visualize
that data and define alert rules on it as with any of Grafana’s other core datasources.

The Using Microsoft SQL Server in Grafana documentation has more detailed information on how to get started.
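
As a minimal sketch of a time series query (the table and column names are hypothetical), Grafana's macros keep the query aligned with the dashboard time range:

SELECT
  $__timeEpoch(time_col),
  value AS 'value',
  measurement AS 'metric'
FROM dbo.metrics
WHERE $__timeFilter(time_col)
ORDER BY 1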

Adding New Panels to Dashboards

The control for adding new panels to dashboards now includes panel search, and it is also now possible to copy and paste panels between dashboards.

When you copy a panel in a dashboard, it is displayed in the Paste tab. When you switch to a new dashboard, you can paste the
copied panel.

Align Zero-Line for Right and Left Y-axes

The feature request to align the zero-line for right and left Y-axes on the Graph panel is more than 3 years old. It has finally been implemented – more information in the Graph panel documentation.

Other Highlights

  • Table Panel: New enhancements include support for mapping a numeric value/range to text, plus additional units. More information in the Table panel documentation.
  • New variable interpolation syntax: We now support a new option for rendering variables that gives the user full control of how the value(s) should be rendered; see the example after this list. More details in the Variables documentation.
  • Improved workflow for provisioned dashboards. More details here.
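
As a sketch of the new syntax (option names per the Variables documentation), a multi-value variable servers = ['test1', 'test2'] could be rendered as:

${servers:csv}   =>  test1,test2
${servers:pipe}  =>  test1|test2
${servers:regex} =>  (test1|test2)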

Changelog

Check out the CHANGELOG.md file for a complete list
of new features, changes, and bug fixes.

Grafana 4.6 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/26/grafana-4.6-released/

Release Highlights

The Grafana 4.6 release contains some exciting and much-anticipated new additions: annotations, alerting for CloudWatch, a new Postgres data source, and Prometheus query editor improvements.

This is a big release, so check out the other features and fixes in the Changelog section below.

Annotations

Annotations provide a way to mark points on the graph with rich events. You can now add annotation events and regions right from the graph panel! Just hold Ctrl/Cmd and click, or drag a region, to open the Add Annotation view. The
Annotations documentation has been updated to include details on this exciting new feature.

Cloudwatch

CloudWatch now supports alerting. You can now set up alert rules for any CloudWatch metric!

Postgres

Grafana v4.6 now ships with a built-in datasource plugin for PostgreSQL. Have logs or metric data in Postgres? You can now visualize that data and
define alert rules on it, as with any of our other data sources.
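
As a minimal sketch (the table and columns are hypothetical), a time series query can use the data source's macros to respect the dashboard time range:

SELECT
  $__time(time_col),
  value AS "cpu_load"
FROM metrics
WHERE $__timeFilter(time_col)
ORDER BY 1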

Prometheus

New enhancements include support for instant queries (for a single point in time instead of a time range) and improvements to the query editor in the form of autocomplete for label names and label values.

This makes exploring and filtering Prometheus data much easier.

Changelog

Here are just a few highlights from the Changelog.

New Features

  • Annotations: Add support for creating annotations from graph panel #8197
  • GCS: Adds support for Google Cloud Storage #8370 thx @chuhlomin
  • Prometheus: Adds /metrics endpoint for exposing Grafana metrics. #9187
  • Jaeger: Add support for open tracing using jaeger in Grafana. #9213
  • Unit types: New date & time unit types added, useful in singlestat to show dates & times. #3678, #6710, #2764
  • Prometheus: Add support for instant queries #5765, thx @mtanda
  • Cloudwatch: Add support for alerting using the cloudwatch datasource #8050, thx @mtanda
  • Pagerduty: Include triggering series in pagerduty notification #8479, thx @rickymoorhouse
  • Prometheus: Align $__interval with the step parameters. #9226, thx @alin-amana
  • Prometheus: Autocomplete for label name and label value #9208, thx @mtanda
  • Postgres: New Postgres data source #9209, thx @svenklemm
  • Datasources: Make datasource HTTP requests verify TLS by default. closes #9371, #5334, #8812, thx @mattbostock

Minor

  • SMTP: Make it possible to set specific HELO for smtp client. #9319
  • Alerting: Add diff and percent diff as series reducers #9386, thx @shanhuhai5739
  • Slack: Allow images to be uploaded to slack when Token is present #7175, thx @xginn8
  • Table: Add support for displaying the timestamp with milliseconds #9429, thx @s1061123
  • Hipchat: Add metrics, message and image to hipchat notifications #9110, thx @eloo
  • Kafka: Add support for sending alert notifications to kafka #7104, thx @utkarshcmu

Tech

  • Webpack: Changed from systemjs to webpack (see readme or building from source guide for new build instructions). Systemjs is still used to load plugins, but now plugins can only import a limited set of dependencies. See PLUGIN_DEV.md for more details on how this can affect some plugins.

Download

Head to the v4.6 download page for download links & instructions.

Thanks

A big thanks to all the Grafana users who contribute by submitting PRs, bug reports, helping out on our community site and providing feedback!

How to Query Personally Identifiable Information with Amazon Macie

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/how-to-query-personally-identifiable-information-with-amazon-macie/


In August 2017 at the AWS Summit New York, AWS launched a new security and compliance service called Amazon Macie. Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. In this blog post, I demonstrate how you can use Macie to help enable compliance with applicable regulations, starting with data retention.

How to query retained PII with Macie

Data retention and mandatory data deletion are common topics across compliance frameworks, so knowing what is stored and how long it has been or needs to be stored is of critical importance. For example, you can use Macie for Payment Card Industry Data Security Standard (PCI DSS) 3.2, requirement 3, “Protect stored cardholder data,” which mandates a “quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention.” You also can use Macie for ISO 27017 requirement 12.3.1, which calls for “retention periods for backup data.” In each of these cases, you can use Macie’s built-in queries to identify the age of data in your Amazon S3 buckets and to help meet your compliance needs.

To get started with Macie and run your first queries of personally identifiable information (PII) and sensitive data, follow the initial setup as described in the launch post on the AWS Blog. After you have set up Macie, walk through the following steps to start running queries. Start by focusing on the S3 buckets that you want to inventory for important compliance-related activity and data.

To start running Macie queries:

  1. In the AWS Management Console, launch the Macie console (you can type macie to find the console).
  2. Click Dashboard in the navigation pane. This shows you an overview of the risk level and data classification type of all inventoried S3 buckets, categorized by date and type.
    Screenshot of "Dashboard" in the navigation pane
  3. Choose S3 objects by PII priority. This dashboard lets you sort by PII priority and PII types.
    Screenshot of "S3 objects by PII priority"
  4. In this case, I want to find information about credit card numbers. I choose the magnifying glass for the type cc_number (note that PII types can be used for custom queries). This view shows the events where PII classified data has been uploaded to S3. When I scroll down, I see the individual files that have been identified.
    Screenshot showing the events where PII classified data has been uploaded to S3
  5. Before looking at the files, I want to continue to build the query by only showing items with high priority. To do so, I choose the row called Object PII Priority and then the magnifying glass icon next to High.
    Screenshot of refining the query for high priority events
  6. To view the results matching these queries, I scroll down and choose any file listed. This shows vital information such as creation date, location, and object access control list (ACL).
  7. The piece I am most interested in for this case is the Object PII details line, which helps me understand more about what was found in the file. In this case, I see name and credit card information, which is what caused the high priority. Scrolling up again, I also see that the query fields have updated as I interacted with the UI.
    Screenshot showing "Object PII details"

Let’s say that I want to get an alert every time Macie finds new data matching this query. This alert can be used to automate response actions by using AWS Lambda and Amazon CloudWatch Events.

  1. I choose the left green icon called Save query as alert.
    Screenshot of "Save query as alert" button
  2. I can customize the alert and change things like category or severity to fit my needs based on the alert data.
  3. Another way to find the information I am looking for is to run custom queries. To start using custom queries, I choose Research in the navigation pane.
    1. To learn more about custom Macie queries and what you can do on the Research tab, see Using the Macie Research Tab.
  4. I change the type of query I want to run from CloudTrail data to S3 objects in the drop-down list menu.
    Screenshot of choosing "S3 objects" from the drop-down list menu
  5. Because I want PII data, I start typing in the query box, which has an autocomplete feature. I choose the pii_types: query. I can now type the data I want to look for. In this case, I want to see all files matching the credit card filter so I type cc_number and press Enter. The query box now says, pii_types:cc_number. I press Enter again to enable autocomplete, and then I type AND pii_types:email to require both a credit card number and email address in a single object.
    The query looks for all files matching the credit card filter ("cc_number")
  6. I choose the magnifying glass to search, and Macie shows me all S3 objects that are tagged as PII of type Credit Cards. I can further specify that I only want to see PII of type Credit Card that is classified as High priority by adding AND pii_impact:high to the query (the full query is shown after this list).
    Screenshot showing narrowing the query results further
    As before, I can save this new query as an alert by clicking Save query as alert, which will be triggered by data matching the query going forward.
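
Putting those steps together, the saved query reads:

pii_types:cc_number AND pii_types:email AND pii_impact:high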

Advanced tip

Try the following advanced queries using Lucene query syntax and save the queries as alerts in Macie.

  • Use a regular-expression based query to search for a minimum of 10 credit card numbers and 10 email addresses in a single object:
    • pii_explain.cc_number:/([1-9][0-9]|[0-9]{3,}) distinct Credit Card Numbers.*/ AND pii_explain.email:/([1-9][0-9]|[0-9]{3,}) distinct Email Addresses.*/
  • Search for objects containing at least one credit card, name, and email address that have an object policy enabling global access (searching for S3 AllUsers or AuthenticatedUsers permissions):
    • (object_acl.Grants.Grantee.URI:"http\://acs.amazonaws.com/groups/global/AllUsers" OR object_acl.Grants.Grantee.URI:"http\://acs.amazonaws.com/groups/global/AuthenticatedUsers") AND (pii_types.cc_number AND pii_types.email AND pii_types.name)

These are two ways to identify and be alerted about PII by using Macie. In a similar way, you can create custom alerts for various AWS CloudTrail events by choosing a different data set on which to run the queries. In the examples in this post, I identified credit cards stored in plain text (all data in this post is example data only), determined how long they had been stored in S3 by viewing the result details, and set up alerts to notify or trigger actions when new sensitive data is stored. With queries like these, you can build a reliable data validation program.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about how to use Macie, start a new thread on the Macie forum or contact AWS Support.

-Chad

Grafana 4.0 Stable Release

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2016/12/12/grafana-4.0-stable-release/

Grafana v4.0.2 stable is now available for download. After about 4 weeks of beta fixes and testing,
we are proud to announce that Grafana v4.0 stable is now released and production ready. This release contains a ton of minor
new features, fixes and improved UX. But on top of the usual new goodies is a core new feature: Alerting!
Read on below for a detailed description of what’s new in Grafana v4!

Alerting

Alerting is a really revolutionary feature for Grafana. It transforms Grafana from a
visualization tool into a truly mission-critical monitoring tool. The alert rules are very easy to
configure using your existing graph panels, and threshold levels can be set simply by dragging the handles on
the right side of the graph. The rules are continually evaluated by grafana-server, and
notifications are sent out when the rule conditions are met.

This feature has been worked on for over a year, with many iterations and rewrites,
just to make sure the foundations are really solid. We are really proud to finally release it!
Since the alerting execution is processed in the backend, not all data sources are supported yet.
Right now Graphite, Prometheus, InfluxDB and OpenTSDB are supported. Elasticsearch is being worked
on but will not be ready for the v4 release.

Rules

The rule config allows you to specify a name, how often the rule should be evaluated and a series
of conditions that all need to be true for the alert to fire.

Currently the only condition type that exists is a Query condition, which allows you to
specify a query letter, time range and an aggregation function. The letter refers to
a query you have already added in the Metrics tab. The result of the
query and the aggregation function is a single value that is then used in the threshold check.
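
For example, a condition built on query A might read (the threshold and range are illustrative):

WHEN avg() OF query(A, 5m, now) IS ABOVE 100

This takes the average of query A's result over the last five minutes and fires the alert when that value exceeds 100.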

We plan to add other condition types in the future, like Other Alert, where you can include the state
of another alert in your conditions, and Time Of Day.

Notifications

Alerting would not be very useful if there were no way to send notifications when rules trigger and change state. You
can set up notifications of different types. We currently have Slack, PagerDuty, Email and Webhook, with more in the
pipeline that will be added during the beta period. The notifications can then be added to your alert rules.
If you have configured an external image store in the grafana.ini config file (s3 and webdav options available),
you can get very rich notifications with an image of the graph and the metric
values all included in the notification.
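
A minimal sketch of the relevant grafana.ini section for the s3 option (keys abbreviated; check the documentation for the exact set):

[external_image_storage]
provider = s3

[external_image_storage.s3]
bucket_url = https://my-bucket.s3.amazonaws.com/grafana
access_key = ...
secret_key = ...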

Annotations

Alert state changes are recorded in a new annotation store that is built into Grafana. This store
currently only supports storing annotations in Grafana’s own internal database (mysql, postgres or sqlite).
The Grafana annotation storage is currently only used for alert state changes but we hope to add the ability for users
to add graph comments in the form of annotations directly from within Grafana in a future release.

Alert List Panel

This new panel allows you to show alert rules or a history of alert rule state changes. You can filter based on the states you're
interested in. It is a very useful panel for overview-style dashboards.

Ad-hoc filter variable

This is a new and very different type of template variable. It allows you to create new key/value filters on the fly,
with autocomplete for both keys and values. The filter condition is automatically applied to all
queries that use that data source. This feature opens up more exploratory dashboards. In the gif animation to the right
you have a dashboard for Elasticsearch log data. It uses one query variable that allows you to quickly change how the data
is grouped, and an interval variable for controlling the granularity of the time buckets. What was missing
was a way to dynamically apply filters to the log query. With the Ad-Hoc Filters variable you can
dynamically add filters to any log property!

UX Improvements

We always try to bring some UX/UI refinements & polish in every release.

TV-mode & Kiosk mode

Grafana is so often used on wall-mounted TVs that we figured a clean TV mode would be
really nice. In TV mode the top navbar, row & panel controls all fade to transparent.

This happens automatically after one minute of user inactivity but can also be toggled manually
with the d v sequence shortcut. Any mouse movement or keyboard action will
restore navbar & controls.

Another feature is the kiosk mode. This can be enabled with d k
shortcut or by adding &kiosk to the URL when you load a dashboard.
In kiosk mode the navbar is completely hidden/removed from view.

New row menu & add panel experience

We spent a lot of time improving the dashboard building experience, trying to make it both
more efficient and easier for beginners. After many good but not great experiments
with a build mode, we eventually decided to just improve the green row menu and
continue work on a build mode for a future release.

The new row menu automatically slides out when you mouse over the edge of the row. You no longer need
to hover over the small green icon and then click it to expand the row menu.

There are some minor improvements to drag-and-drop behaviour. Now when dragging a panel from one row
to another, you insert the panel and Grafana automatically makes room for it.
When you drag a panel within a row, you simply reorder the panels.

If you look at the animation to the right you can see that you can drag and drop a new panel. This is not
required; you can also just click the panel type and it will be inserted at the end of the row
automatically. Dragging a new panel has the advantage that you can insert it wherever you want,
not just at the end of the row.

We plan to further improve dashboard building in the future with a richer grid & layout system.

Keyboard shortcuts

Grafana v4 introduces a number of really powerful keyboard shortcuts. You can now focus a panel
by hovering over it with your mouse. With a panel focused you can simply hit e to toggle panel
edit mode, or v to toggle fullscreen mode. p r removes the panel. p s opens the share
modal.

Some nice navigation shortcuts are:

  • g h for go to home dashboard
  • s s open search with starred pre-selected
  • s t open search in tags list view

Upgrade & Breaking changes

There are no breaking changes. Old dashboards and features should work the same. Grafana-server will automatically upgrade its db
schema on restart. It’s advisable to back up Grafana’s database before updating.

If you are using plugins, make sure to update them as well, as some might not work perfectly with v4.

You can update plugins using grafana-cli:

grafana-cli plugins update-all

Changelog

Check out the CHANGELOG.md file for a complete list
of new features, changes, and bug fixes.

Download

Head to the v4 download page for download links & instructions.

Big thanks to all the Grafana users and devs out there who have helped with bug reports, feature
requests and pull requests!

Until next time, keep on graphing!
Torkel Ödegaard

Refactoring Components for Redux Performance

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/152078809581

By Shahzad Aziz

Front-end web development is evolving fast; a lot of new tools and libraries are published which challenge best practices every day. It’s exciting, but also overwhelming. One of those new tools that can make a developer’s life easier is Redux, a popular open source state container. This past year, our team at Yahoo Search has been using Redux to refresh a legacy tool used for data analytics. We paired Redux with another popular library for building front-end components called React.

Due to the scale of Yahoo Search, we measure and store thousands of metrics on the data grid every second. Our data analytics tool allows internal users from our Search team to query stored data, analyze traffic, and compare A/B testing results. The biggest goal of creating the new tool was to deliver ease of use and speed.

When we started, we knew our application would grow complex and we would have a lot of state living inside it. During the course of development we got hit by unforeseen performance bottlenecks. It took us some digging and refactoring to achieve the performance we expected from our technology stack. We want to share our experience and encourage developers who use Redux: by reasoning about your React components and your state structure, you can make your application performant and easy to scale.

Redux wants your application state changes to be more predictable and easier to debug. It achieves this by extending the ideas of the Flux pattern by making your application’s state live in a single store. You can use reducers to split the state logically and manage responsibility. Your React components subscribe to changes from the Redux store. For front-end developers this is very appealing and you can imagine the cool debugging tools that can be paired with Redux (see Time Travel).

With React it is easier to reason about your UI by creating components in two different categories, Container vs Presentational. You can read more about it (here). The gist is that your Container components will usually subscribe to state and manage flow, while Presentational components are concerned with rendering markup using the properties passed to them. Taking guidance from early Redux documentation, we started by adding container components at the top of our component tree. The most critical part of our application is the interactive ResultsTable component; for the sake of brevity, this will be the focus for the rest of the post.

React Component Tree

To achieve optimal performance from our API, we make a lot of simple calls to the backend and combine and filter data in the reducers. This means we dispatch a lot of async actions to fetch bits and pieces of the data, and use redux-thunk to manage the control flow. However, any change in user selections invalidates most of what we have fetched into state, and we then need to re-fetch. Overall this works great, but it also means we are mutating state many times as responses come in.
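
In sketch form, one of these small fetches looks something like this (the action types and endpoint are hypothetical, not our actual code):

// redux-thunk lets an action creator return a function that receives
// dispatch, so a single user selection can fan out into many small fetches.
const fetchMetric = (metricId) => async (dispatch) => {
  dispatch({ type: 'METRIC_REQUEST', metricId });
  const response = await fetch(`/api/metrics/${metricId}`);
  const payload = await response.json();
  // Reducers combine and filter these partial responses into state.
  dispatch({ type: 'METRIC_SUCCESS', metricId, payload });
};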

The Problem

While we were getting great performance from our API, the browser flame graphs started revealing performance bottlenecks on the client. Our MainPage component, sitting high up in the component tree, triggered re-renders on every dispatch. While a component render should not be a costly operation, on a number of occasions we had a huge number of data rows to render, and the re-render time in such cases ran into seconds.

So how could we make render more performant? A well-advertised method is the shouldComponentUpdate lifecycle method, and that is where we started. This is a common method used to increase performance that should be implemented carefully across your component tree. We were able to filter out excess re-renders where the props a component needs had not changed.
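
A typical guard, inside a React.Component subclass, looks something like this (the prop names are hypothetical):

shouldComponentUpdate(nextProps) {
  // Skip the re-render unless the data this component displays changed.
  return nextProps.rows !== this.props.rows ||
         nextProps.columns !== this.props.columns;
}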

Despite the shouldComponentUpdate improvement, our whole UI still seemed clogged. User actions got delayed responses, our loading indicators would show up after a delay, autocomplete lists would take time to close, and small user interactions with the application were slow and heavy. At this point it was not about React render performance anymore.

In order to determine bottlenecks, we used two tools: React Performance Tools and React Render Visualizer. The experiment was to study users performing different actions and then create a table of the count of renders and instances created for the key components.

Below is one such table that we created. We analyzed two frequently-used actions. For this table we are looking at how many renders were triggered for our main table components.

Change Dates: Users can change dates to fetch historical data. To fetch data, we make parallel API calls for all metrics and merge everything in the Reducers.

Open Chart: Users can select a metric to see a daily breakdown plotted on a line chart. This is opened in a modal dialog.

Experiments revealed that state needed to travel through the tree and recompute a lot of things repeatedly along the way before affecting the relevant component. This was costly and CPU intensive. While switching products from the UI, React spent 1080ms on table and row components. It was important to realize this was more of a problem of shared state which kept our thread busy. While React’s virtual DOM is performant, you should still strive to minimize virtual DOM creation, which means minimizing renders.

Performance Refactor

The idea was to look at container components and try to distribute the state changes more evenly in the tree. We also wanted to put more thought into the state: make it less shared and more derived. We wanted to store only the most essential items in the state while computing derived state for the components as many times as we needed.

We executed the refactor in two steps:

Step 1. We added more container components subscribing to the state across the component hierarchy. Components consuming exclusive state do not need a container component at the top threading props down to them; they can subscribe to the state and become container components themselves. This dramatically reduces the amount of work React has to do against Redux actions.

React Component Tree with potential container components

We identified how the state was being utilized across the tree. The important question was, “How could we make sure there are more container components with exclusive states?” For this we had to split a few key components and wrap them in containers or make them containers themselves.

React Component Tree after refactor

In the tree above, notice how the MainPage container is no longer responsible for any Table renders. We extracted another component ControlsFooter out of ResultsTable which has its own exclusive state. Our focus was to reduce re-renders on all table-related components.
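
Making a component like ControlsFooter a container is mostly a matter of connecting it directly to the store. A sketch, with a hypothetical state shape:

import { connect } from 'react-redux';
import ControlsFooter from './ControlsFooter';

const mapStateToProps = (state) => ({
  // State that only the footer cares about, so only footer-related
  // actions trigger a re-render here.
  pageSize: state.ui.pageSize,
  currentPage: state.ui.currentPage,
});

export default connect(mapStateToProps)(ControlsFooter);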

Step 2. Derived state and shouldComponentUpdate.

It is critical to make sure your state is well defined and flat in nature. It is easier if you think of Redux state as a database and start to normalize it. We have a lot of entities like products, metrics, results, chart data, user selections, UI state, etc. We store most of them as key/value pairs and avoid nesting. Each container component queries the state object and de-normalizes data. For example, our navigation menu component needs a list of metrics for the product, which we can easily extract from the metrics state by filtering the list of metrics by the product ID.

The whole process of deriving state and querying it repeatedly can be optimized. Enter Redux Reselect. Reselect allows you to define selectors to compute derived state. They are memoized for efficiency and can also be cascaded. So a function like the one below can be defined as a selector, and if there is no change in metrics or productId in state, it will return a memoized copy.

// Returns the metrics entry for the given product. Wrapped in a
// Reselect selector, it is only recomputed when productId or
// metrics actually change.
const getMetrics = (productId, metrics) =>
  metrics.find(m => m.productId === productId);

We wanted to make sure all our async actions had resolved and our state had everything we needed before triggering a re-render. We created selectors to define such behaviours for shouldComponentUpdate in our container components (e.g. our table will only render if metrics and results have all loaded). While you can still do all of this directly in shouldComponentUpdate, we felt that using selectors lets you think differently. It makes your containers predict state and wait on it before rendering, while the state they need to render can be a subset of it. Not to mention, it is more performant.
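
In sketch form (the state fields are hypothetical), such a render gate is just another memoized selector:

import { createSelector } from 'reselect';

// True only once every async piece the table needs has resolved.
const isTableReady = createSelector(
  [state => state.metricsLoaded, state => state.resultsLoaded],
  (metricsLoaded, resultsLoaded) => metricsLoaded && resultsLoaded
);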

The Results

Once we finished refactoring we ran our experiment again to compare the impact from all our tweaks. We were expecting some improvement and better flow of state mutations across the component tree.

A quick glance at the table above tells you how the refactor has clearly come into effect. In the new results, observe how changing dates distributes the render count between comparisonTable (7) and controlsHeader (2); it still adds up to 9. Such logical refactors have allowed us to speed up our application rendering by up to 4 times, and have made it easier to cut the time wasted by React. This has been a significant improvement for us and a step in the right direction.

What’s Next

Redux is a great pattern for front-end applications. It has allowed us to reason about our application state better, and as the app grows more complex, the single store makes it much easier to scale and debug.

Going forward we want to explore Immutability for Redux State. We want to enforce discipline around Redux state mutations and also make a case for faster shallow comparison of props in shouldComponentUpdate.

Autocomplete poetry

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/autocomplete-poetry/

Raspberry Pi integrated into the world of art. I hadn’t come across much of this before, and I like it a lot. As a self-proclaimed ‘artist of stuff’, it’s always exciting to see something arty that calls to the maker inside. With Glaciers, NYC-based Zach Gage has achieved exactly that.

Glaciers was an art installation that, like the landforms from which it takes its name, slowly developed over time. I say ‘was’, but with each of its constituent pieces still running and a majority already sold, Glaciers continues indefinitely. Using forty Raspberry Pis attached to forty plainly presented Adafruit e-ink screens, Gage used Google Search’s autocomplete function to create poetry.

[Image: the Glaciers installation]

We’ve all noticed occasional funny or poignant results of the way Google tries to complete your search query for you based on the vast amount of data that passes through its search engine daily. Gage has programmed the Raspberry Pis to select the top three suggestions that follow various chosen phrases and display them on the screens. The results are striking, often moving, and usually something that most people would acknowledge as poetry, or at least poetic.
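
The core loop is easy to approximate. A rough sketch using Google's unofficial suggest endpoint, which is an assumption on my part rather than what Gage actually used:

// Fetch the top three autocomplete suggestions for a phrase (Node 18+).
const phrase = 'why do i';
const url =
  'https://suggestqueries.google.com/complete/search?client=firefox&q=' +
  encodeURIComponent(phrase);
const [, suggestions] = await (await fetch(url)).json();
console.log(suggestions.slice(0, 3).join('\n')); // three lines of the 'poem'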

The screens refresh daily as the Pis check Google for changes and update accordingly. For some search phrases, the autocompletions can change daily; for others, it could take years. A poem you’ve had upon your wall for months on end could suddenly change unexpectedly, updating to reflect the evolving trends of user queries on the internet.

“The best paintings you can look at a thousand times and you keep seeing new things.” – Zach Gage

Glaciers is certainly an intriguing installation, with pithy observations of the vulnerability of anonymous internet users in pieces such as:

[Image: one of the Glaciers poems]

and the (somewhat) more light-hearted:

[Image: another Glaciers poem]

Zach Gage is an indie video game creator, responsible for titles such as SpellTower and the somewhat fear-inducing Lose/Lose (Space Invaders meets permanent file deletion with some 17000 files already lost to the game since launch). He’s previously used Raspberry Pis in other projects, such as his Twitter-fuelled best day ever and Fortune. I bet this isn’t the last time he does something fabulous with a Pi.

The post Autocomplete poetry appeared first on Raspberry Pi.