Tag Archives: uber

timeShift(GrafanaBuzz, 1w) Issue 15

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/29/timeshiftgrafanabuzz-1w-issue-15/

This week the Grafana Labs team converged on Stockholm. In addition to taking advantage of the beautiful weather, which was perfect for team outings, we were also hard at work setting objectives for the next Grafana release, finalizing details for GrafanaCon EU, and enjoying some good old-fashioned face time in an otherwise post-geographic company. This issue of TimeShift covers a few recent and upcoming talks, monitoring Kubernetes, and plugin updates.

All Systems Go! 2017 Schedule Published

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-schedule-published.html

I am happy to announce that we have published the All Systems Go! 2017 schedule!
We are very happy with the large number and the quality of the
submissions we got, and the resulting schedule is exceptionally
strong.

Without further ado:

Here’s the schedule for the first day (Saturday, 21st of October).

And here’s the schedule for the second day (Sunday, 22nd of October).

Here are a couple of keywords from the topics of the talks:
1password, azure, bluetooth, build systems,
casync, cgroups, cilium, cockpit, containers,
ebpf, flatpak, habitat, IoT, kubernetes,
landlock, meson, OCI, rkt, rust, secureboot,
skydive, systemd, testing, tor, varlink,
virtualization, wifi, and more.

Our speakers are from all across the industry: Chef, CoreOS, Covalent,
Facebook, Google, Intel, Kinvolk, Microsoft, Mozilla, Pantheon,
Pengutronix, Red Hat, SUSE and more.

For further information about All Systems Go! visit our conference web site.

Make sure to buy your ticket for All Systems Go! 2017 now! A limited
number of tickets are left at this point, so make sure you get yours
before they are all sold out! Find all details here.

See you in Berlin!

timeShift(GrafanaBuzz, 1w) Issue 14

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/22/timeshiftgrafanabuzz-1w-issue-14/

Summer is officially in the rear-view mirror, but we at Grafana Labs are excited. Next week, the team will gather in Stockholm, Sweden, where we’ll be discussing Grafana 5.0 and GrafanaCon EU, and setting other goals. If you’re attending Percona Live Europe 2017 in Dublin, be sure to catch Grafana developer Daniel Lee on Tuesday, September 26. He’ll be showing off the new MySQL data source and a sneak peek of Grafana 5.0.

And with that – we hope you enjoy this issue of TimeShift!


Latest Release

Grafana 4.5.2 is now available! Various fixes to the Graphite data source, HTTP API, and templating.

To see details on what’s been fixed in the newest version, please see the release notes.

Download Grafana 4.5.2 Now


From the Blogosphere

A Monitoring Solution for Docker Hosts, Containers and Containerized Services: Stefan was searching for an open source, self-hosted monitoring solution. With an ever-growing number of open source TSDBs, Stefan outlines why he chose Prometheus and provides a rundown of how he’s monitoring his Docker hosts, containers and services.

Real-time API Performance Monitoring with ES, Beats, Logstash and Grafana: As APIs become a centerpiece for businesses, monitoring API performance is extremely important. Hiren recently configured real-time API response-time monitoring for a project and shares his implementation plan and configurations.

Monitoring SSL Certificate Expiry in GCP and Kubernetes: This article discusses how to use Prometheus and Grafana to automatically monitor SSL certificates in use by load balancers across GCP projects.

Node.js Performance Monitoring with Prometheus: This is a good primer for monitoring in general. It discusses what monitoring is, important signals to know, instrumentation, and things to consider when selecting a monitoring tool.

DIY Dashboard with Grafana and MariaDB: Mark was interested in testing out the new beta MySQL support in Grafana, so he wrote a short article on how he is using Grafana with MariaDB.

Collecting Temperature Data with Raspberry Pi Computers: Many of us use monitoring for tracking mission-critical systems, but setting up environment monitoring can be a fun way to improve your programming skills as well.


GrafanaCon EU CFP is Open

Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes. The proposals are rolling in, but we are happy to save a speaking slot for you!

I’d Like to Speak at GrafanaCon


Grafana Plugins

There were a lot of plugin updates to highlight this week, many of which were due to changes in Grafana 4.5. It’s important to keep your plugins up to date, since bug fixes and new features are added frequently. We’ve made the process of installing and updating plugins simple. On an on-prem instance, use the grafana-cli tool; on Hosted Grafana, install and update with one click.

NEW PLUGIN

LinkSmart HDS Data Source – The LinkSmart Historical Data Store is a new Grafana data source plugin. LinkSmart is an open source IoT platform for developing IoT applications, which need to deal with large amounts of data produced by a growing number of sensors and other devices. The Historical Data Store is for storing, querying, and aggregating (time-series) sensor data.

Install Now

UPDATED PLUGIN

Simple JSON Data Source – This plugin received a bug fix for the query editor.

Update Now

UPDATED PLUGIN

Stagemonitor Elasticsearch App – Numerous small updates, plus a version bump to match the Stagemonitor version number.

Update Now

UPDATED PLUGIN

Discrete Panel – Update to fix breaking change in Grafana 4.5.

Update Now

UPDATED PLUGIN

Status Dot Panel – Minor HTML update in this version.

Update Now

UPDATED PLUGIN

Alarm Box Panel – This panel was updated to fix breaking changes in Grafana 4.5.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Sven Klemm opened a PR for adding a new Postgres data source and has been very quick at implementing proposed changes. The Postgres data source is on our roadmap for Grafana 5.0 so this PR really helps. Thanks Sven!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Glad you’re finding Grafana useful! Curious about that annotation just before midnight 🙂

We Need Your Help

Last week we announced an experiment we were conducting, and need your help! Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

I Want to Help


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


What do you think?

What would you like to see here? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches/fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as the AWS access and secret key, the number of master and worker nodes, and the instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post uses the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster of up to 10 nodes can be created with a free account at Get Tectonic for Kubernetes. After signup, CoreOS License and Pull Secret files are provided on your CoreOS account page. Download these files, as they are needed by the installer to boot the cluster.

IAM user permission

The IAM user that creates the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server, which allows kubectl to talk directly to the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility such as dig or a web-based dig interface. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.
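
If you prefer to script this check, here is a minimal sketch using the third-party dnspython package (an assumption for illustration; it is not part of the Tectonic tooling):

import dns.resolver  # third-party package: pip install dnspython

# Query public DNS for the NS records of the subdomain.
# dns.resolver.resolve() is the dnspython 2.x API (1.x used .query()).
answers = dns.resolver.resolve("tectonic.kubernetes-aws.io", "NS")

# Each name server printed should match the Route 53 delegation set
# returned by the create-hosted-zone command above.
for record in answers:
    print(record.target)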

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.

Specify the Access Key ID and Secret Access Key for the IAM role that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place updates to the Kubernetes cluster, select the checkbox next to Automated Updates. This also enables updates to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have an option to generate a new key. This allows you to later log in over SSH to the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a six-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use alpha support for cluster provisioning using the etcd operator. The etcd operator is used for automated operations of the etcd master nodes for the cluster itself, as well as for etcd instances that are created for application usage. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. Alternatively, you can download the Terraform assets by choosing Manually boot, which allows you to boot the cluster later.

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the console shows that the Terraform scripts were successfully applied, the domain name was resolved successfully, and that the console has started. If the DNS resolution configured earlier works, the domain resolves and is the address where the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally deploys an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console versions are shown under Software Details. Cluster health shows All systems go, which means that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in all namespaces are shown. Other namespaces may be chosen from a menu at the top of the screen. Different administration tasks, such as managing the namespaces, getting the list of nodes, and configuring RBAC, can be performed as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest version of kubectl for macOS can be downloaded using the following command (replace darwin with linux in the URL for a Linux machine):

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.

Now you can run the commands to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.

Upgrade the Kubernetes cluster

Tectonic allows the in-place upgrade of the cluster. This is an experimental feature as of this release. The clusters can be updated either automatically, or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release and so should only be used on clusters that can be easily replaced. It may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory like <cluster-name>_TIMESTAMP. In that directory, run the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

If you forgot to download the assets, there is a copy in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory with the name <cluster-name>_TIMESTAMP contains your assets.

Conclusion

This post explained how to manage Kubernetes clusters using the CoreOS Tectonic graphical installer. For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

Kodi Declares ‘War’ on Trademark Trolls

Post Syndicated from Ernesto original https://torrentfreak.com/kodi-declares-war-on-trademark-trolls-170908/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, unauthorized add-ons give it a bad name. This is one of the reasons why the Kodi team is actively going after vendors who sell “fully loaded” pirate boxes and YouTubers who misuse their name to promote copyright infringement.

However, these “pirates” are not the only intellectual property problem the team is facing; trademark trolls are a serious threat as well.

When XBMC changed its name to Kodi, the team noticed that several parties swiftly registered the Kodi trademark around the world, presumably to make money off it. This came as a total surprise to the foundation, which had never faced any trademark issues before, and it continues to cause problems today.

The Kodi team has since convinced some of these “trolls” to hand over the trademarks, but not all are willing to give in. This is causing problems, particularly in Canada, where the local trademark owner is actively blackmailing hardware vendors and removing content from Amazon, the Kodi team says.

The Canadian trademark is owned by Geoff Gavora, who is no stranger to the XBMC Foundation. Before the trouble started, Gavora had already sent several emails to the Kodi team, expressing how important the software was to his sales. After the trademark registration, however, the friendly tone changed.

“We had hoped, given the positive nature of his past emails, that perhaps he was doing this for the benefit of the Foundation. We learned, unfortunately, that this was not the case,” XBMC Foundation President Nathan Betzen notes.

“Instead, companies like Mygica and our sponsor Minix have been delisted by Gavora on Amazon, so that only Gavora’s hardware can be sold, unless those companies pay him a fee to stay on the store,” he adds.

Gavora is actively using his trademark to stop the sales of other Kodi-based devices in Canada, the XBMC Foundation warns. This means that people who buy a Kodi product in the local Amazon store may end up filling the pocket of the local trademark owner.

“Now, if you do a search for Kodi on Amazon.ca, there’s a very real chance that every box you see is giving Gavora money to advertise that they can run what should be the entirely free and open Kodi. Gavora and his company are behaving in true trademark troll fashion,” Betzen writes.

There are several reasons why the Kodi team is making this problem public now. For one, they want the public to be aware of the situation. At some point, trademark trolls may even try to stop Kodi from distributing the software through their own site, they warn.

However, the foundation is not going to let this happen without a fight. They are ready to deal with the problem head on. Trademark trolls should not be allowed to exploit the Kodi name for financial profit.

“We want to let the trolls know that we have caught on to this game and will not accept it. We are actively taking the necessary steps to ensure that the Kodi trademark trolls are dealt with appropriately. There is no value proposition in trolling the Kodi name,” Betzen writes.

If this means that the foundation has to go to court, they are prepared to do so, hoping that the community will have their back.

“While our goal has always been to avoid going to the court to ensure Kodi remains free in countries where trolls are attempting to get rich off of the Kodi name, we will not back down from protecting the free, open source nature of our software.

“If that time comes for legal action, we hope to have the community’s support,” Betzen concludes.


Kim Dotcom Wants K.im to Trigger a “Copyright Revolution”

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-wants-k-im-to-trigger-a-copyright-revolution-170831/

For many people, Kim Dotcom is synonymous with Megaupload, the file-sharing giant that was taken down by the U.S. Government in early 2012.

While Megaupload is no more, the New Zealand Internet entrepreneur is working on a new file-sharing site. Initially dubbed Megaupload 2, the new service will be called K.im, and it will be quite different from its predecessor.

This week Dotcom, who’s officially the chief “evangelist” of the service, showed a demo to a few thousand people revealing more about what it’s going to offer.

K.im is not a central hosting service, quite the contrary. It will allow users to upload content and distribute it to dozens of other services, including Dropbox, Google, Reddit, Storj, and even torrent sites.

The files are distributed across the Internet where they can be accessed freely. However, there is a catch. The uploaders set a price for each download and people who want a copy can only unlock it through the K.im app or browser addon, after they’ve paid.

Pick your price

K.im, paired with Bitcache, is basically a micropayment solution. It allows creators to charge the public for everything they upload. Every download is tied to a Bitcoin transaction, turning files into their own “stores.”

Kim Dotcom tells TorrentFreak that he sees the service as a copyright revolution. It should be a win-win solution for independent creators, rightsholders, and people who are used to pirating stuff.

“I’m working for both sides. For the copyright holders and also for the people who want to pay for content but have been geo-blocked and then are forced to download for free,” Dotcom says.

Like any other site that allows user-uploaded content, K.im can also be used by pirates who want to charge a small fee for spreading infringing content. This is something Dotcom is aware of, but he has a solution in mind.

Much like YouTube, which allows rightsholders to “monetize” videos that use their work, K.im will provide an option to claim pirated content. Rightsholders can then change the price and all revenue will go to them.

So, if someone uploads a pirated copy of the Game of Thrones season finale through K.im, HBO can claim that file, charge an appropriate fee, and profit from it. The uploader, meanwhile, maintains his privacy.

“It is the holy grail of copyright enforcement. It is my gift to Hollywood, the movie studios, and everyone else,” Dotcom says.

Dotcom believes that piracy is in large part caused by an availability problem. People often can’t find the content they’re looking for, so it’s K.im’s goal to distribute files as widely as possible. This includes several torrent sites, which are currently featured in the demo.

Torrent uploads?

Interestingly, it will be hard to upload content to sites such as YTS, EZTV, KickassTorrents, and RARBG, as they’ve been shut down or don’t allow user uploads. However, Dotcom stresses that the names are just examples, and that they are still working on partnering with various sites.

Whether torrent sites will be eager to cooperate has yet to be seen. It’s possible that the encrypted files, which can’t be opened without paying, will be seen as “spam” by traditional torrent sites.

Also, from a user perspective, one has to wonder how many people are willing to pay for something if they set out to pirate it. After all, there will always be plenty of free options for those who refuse to or can’t pay.

Dotcom, however, is convinced that K.im can create a “copyright revolution.” He stresses that site owners and uploaders can greatly benefit from it as they receive affiliate fees, even after a pirated file is claimed by a rightsholder.

In addition, he says it will revolutionize copyright enforcement, as copyright holders can monetize the work of pirates. That is, if they are willing to work with the service.

“Rightsholders can turn piracy traffic into revenue and users can access the content on any platform. Since every file is a store, it doesn’t matter where it ends up,” Dotcom says.

Dotcom does have a very valid point here. Many people have simply grown used to pirating because it’s much more convenient than using a dozen different services. In Dotcom’s vision, people can just use one site to access everything.

The ideas don’t stop at sharing files either. In the future, Dotcom also wants to use the micropayment option to let YouTubers and media organizations accept payments from the public, the BBC notes.

There’s still a long way to go before K.im and Bitcache go public though. The expected launch date is not final yet, but the services are expected to go live in mid-to-late 2018.


3D print your own Rubik’s Cube Solver

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rubiks-cube-solver/

Why use logic and your hands to solve a Rubik’s Cube, when you could 3D print your own Rubik’s Cube Solver and thus avoid overexerting your fingers and brain cells? Here to help you with this is Otvinta‘s new robotic make:

Fully 3D-Printed Rubik’s Cube Solving Robot

This 3D-printed Raspberry Pi-powered Rubik’s Cube solving robot has everything any serious robot does — arms, servos, gears, vision, artificial intelligence and a task to complete. If you want to introduce robotics to your kids or your students, this is the perfect machine for it. This robot is fully 3D-printable.

Rubik’s Cubes

As Liz has said before, we have a lot of Rubik’s cubes here at Pi Towers. In fact, let me just…hold on…I’ll be right back.

Okay, these are all the ones I found on Gordon’s desk, and I’m 99% sure there are more in his drawers.

Raspberry Pi Rubik's Cube Solver

And that’s just Gordon. Given that there’s a multitude of other Pi Towers staff members who are also obsessed with the little twisty cube of wonder, you could use what you find in our office to restock an entire toy shop for the pre-Christmas rush!

So yeah, we like Rubik’s Cubes.

The 3D-Printable Rubik’s Cube Solver

Aside from the obvious electronic elements, Otvinta’s Rubik’s Cube Solving Robot is completely 3D-printable. While it may take a whopping 70 hours of print time and a whole spool of filament to make your solving robot a reality, we’ve seen far more time-consuming prints with a lot less purpose than this.

(If you’ve clicked the link above, I’d just like to point out that, while that build might be 3D printing overkill, I want one anyway.)

Rubik's Cube Solver

After 3D printing all the necessary parts of your Rubik’s Cube Solving Robot, you’ll need to run Windows 10 IoT Core on your Raspberry Pi. Once connected to your network, you can select the Pi from the IoT Dashboard on your main PC and install the RubiksCubeRobot app.

Raspberry Pi Rubik's Cube Solver

Then simply configure the robot via the app, and you’re good to go!

You might not necessarily need a Raspberry Pi to create this build, since you could simply run the app on your main PC. However, using a Pi will make your project more manageable and less bulky.

You can find all the details of how to make your own Rubik’s Cube Solving Robot on Otvinta’s website, so do make sure to head over there if you want to learn more.

All the robots!

This isn’t the first Raspberry Pi-powered Rubik’s Cube out there, and it surely won’t be the last. There’s this one by Francesco Georg using LEGO Mindstorms; this one was originally shared on Reddit; Liz wrote about this one; and there’s one more which I can’t seem to find but I swear exists, and it looks like the Eye of Sauron! Ten House Points to whoever shares it with me in the comments below.

The post 3D print your own Rubik’s Cube Solver appeared first on Raspberry Pi.

YouTube Fair Use Case Ends in Victory For h3h3Productions

Post Syndicated from Andy original https://torrentfreak.com/youtube-fair-use-case-ends-in-victory-for-h3h3productions-170824/

Hila and Ethan Klein are the duo behind h3h3productions, a YouTube channel dedicated to the comedy found in Internet culture.

With more than 4.6 million subscribers, the channel is doing extremely well, but last year it faced a fight for its very existence. In May 2016, the pair announced they were being sued by YouTuber Matt Hosseinzadeh, who also produces comedy content and publishes it on his MattHossZone channel.

The problem lay in a video produced by H3h3productions (published on a sister channel in February 2016) in which they poked fun at Hosseinzadeh, using clips of their rival’s videos for material. The Kleins said this was fair use but Hosseinzadeh viewed it as copyright infringement.

According to the Kleins, Hosseinzadeh initially approached them with settlement offers of a few thousand dollars and also gave them the opportunity to publicly apologize and promote his content. The Kleins refused to back down so at this point, Hosseinzadeh sued for copyright infringement and defamation.

Fighting a lawsuit is extremely expensive but the Kleins received overwhelming support online, particularly via a GoFundMe campaign that pulled in over $170,000.

With the legal battle well underway, both sides asked the court for a summary judgment in their favor in advance of a full trial. In a decision handed down yesterday by District Judge Katherine B. Forrest, the Kleins prevailed. In a 21-page ruling, the Judge wastes no time in getting straight to the point.

Describing the Kleins’ video and the critique contained within as “quintessential criticism and comment”, equivalent to the kind one might find in a film studies class, Judge Forrest goes on to examine the cornerstones of fair use, including the purpose of the work, the amount of copyrighted content used, and the effect of the use of the content on its potential market.

“Any review of the Klein video leaves no doubt that it constitutes critical commentary of the Hoss video; there is also no doubt that the Klein video is decidedly not a market substitute for the Hoss video,” Judge Forrest writes, noting that the former effectively transformed the latter into “fodder for caustic, moment-by-moment commentary and mockery.”

“For these and the other reasons set forth below, defendants’ use of clips from the Hoss video constitutes fair use as a matter of law,” the Judge concludes.

On the defamation front, Hosseinzadeh fared no better, with the Judge noting that truth is an absolute defense to a defamation claim.

“Further, it is clear that defendants’ comments regarding the lawsuit are either non-actionable opinions or substantially true as a matter of law. For these and the other reasons set forth below, plaintiff’s defamation claim fails. Defendants’ motion for summary judgment is therefore GRANTED, and plaintiff’s motion is DENIED.”

The news was quickly welcomed by Ethan Klein.

What happens next is largely up to Hosseinzadeh. He still has the opportunity to appeal the case, but whether he will take it, given the clarity of the ruling and the additional costs involved, remains to be seen.

In the meantime, the decision (via Techcrunch) provides food for thought and guidance for other YouTubers making reaction videos.

The ruling doesn’t give YouTubers blanket clearance to continue with impunity but does clarify how much content can be used, provided adequate commentary and criticism is present. They’re valuable guidelines in a notoriously difficult area of copyright law.


Michael Reeves and the ridiculous Subscriber Robot

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/michael-reeves-subscriber-robot/

At the beginning of his new build’s video, YouTuber Michael Reeves discusses a revelation he had about why some people don’t subscribe to his channel:

The real reason some people don’t subscribe is that when you hit this button, that’s all, that’s it, it’s done. It’s not special, it’s not enjoyable. So how do we make subscribing a fun, enjoyable process? Well, we do it by slowly chipping away at the content creator’s psyche every time someone subscribes.

His fix? The ‘fun’ interactive Subscriber Robot that is the subject of the video.

Be aware that Michael uses a couple of mild swears in this video, so maybe don’t watch it with a child.

Who is Michael Reeves?

Software developer and student Michael Reeves started his YouTube account a mere four months ago, with the premiere of his robot that shines lasers into your eyes – now he has 110k+ subscribers. At only 19, Michael co-owns and manages a company together with friends, and is set on his career path in software and computing. So when he is not making videos, he works a nine-to-five job “to pay for college and, y’know, live”.

The Subscriber Robot

Michael shot to YouTube fame with the aforementioned laser robot built around an Arduino. But by now, he has also released videos of a few Raspberry Pi-based contraptions.

Michael Reeves Raspberry Pi Subscriber Robot

Michael, talking us through the details of one of the worst ideas ever made

His Subscriber Robot uses a series of Python scripts running on a Raspberry Pi to check for new subscribers to Michael’s channel via the YouTube API. When it identifies one, the Pi uses a relay to make the ceiling lights in Michael’s office flash ten times a second while ear-splitting noise is emitted by a 102-decibel-rated buzzer. Needless to say, this buzzer is not recommended for home use, work use, or any use whatsoever! Moreover, the Raspberry Pi also connects to a speaker that announces the name of the new subscriber, so Michael knows who to thank.
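
Michael hasn’t published the scripts themselves, so here is only a hypothetical minimal sketch of such a polling loop, assuming the YouTube Data API v3 channels endpoint and an RPi.GPIO-driven relay; the API key, channel ID, and pin number are invented placeholders:

import time
import requests          # pip install requests
import RPi.GPIO as GPIO  # available on Raspberry Pi OS images

API_KEY = "YOUR_API_KEY"        # hypothetical placeholder
CHANNEL_ID = "YOUR_CHANNEL_ID"  # hypothetical placeholder
RELAY_PIN = 17                  # whichever GPIO pin drives the relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT)

def subscriber_count():
    # YouTube Data API v3: channels.list with part=statistics
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/channels",
        params={"part": "statistics", "id": CHANNEL_ID, "key": API_KEY},
    )
    return int(resp.json()["items"][0]["statistics"]["subscriberCount"])

last = subscriber_count()
while True:
    current = subscriber_count()
    if current > last:
        # Flash the relay ten times a second for two seconds.
        for _ in range(20):
            GPIO.output(RELAY_PIN, GPIO.HIGH)
            time.sleep(0.05)
            GPIO.output(RELAY_PIN, GPIO.LOW)
            time.sleep(0.05)
    last = current
    time.sleep(10)  # poll gently to stay within API quota

A real version would also hook up the text-to-speech announcement and the 102-decibel buzzer, which this sketch mercifully omits.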

Michael Reeves Raspberry Pi Subscriber Robot

Subscriber Robot: EEH! EEH! EEH! MoistPretzels has subscribed.
Michael: Thank you, MoistPretzels…

Given that Michael has gained a whopping 30,000 followers in the ten days since the release of this video, it’s fair to assume he is currently curled up in a ball on the office floor, quietly crying to himself.

If you think Michael only makes videos about ridiculous builds, you’re mistaken. He also uses YouTube to provide educational content, because he believes that “it’s super important for people to teach themselves how to program”. For example, he has just released a new C# beginners tutorial, the third in the series.

Support Michael

If you’d like to help Michael in his mission to fill the world with both tutorials and ridiculous robot builds, make sure to subscribe to his channel. You can also follow him on Twitter and support him on Patreon.

You may also want to check out the Useless Duck Company and Simone Giertz if you’re in the mood for more impractical, yet highly amusing, robot builds.

Good luck with your channel, Michael! We are looking forward to, and slightly dreading, more videos from one of our favourite new YouTubers.

The post Michael Reeves and the ridiculous Subscriber Robot appeared first on Raspberry Pi.

Uber Drivers Hacking the System to Cause Surge Pricing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/uber_drivers_ha.html

Interesting story about Uber drivers who have figured out how to game the company’s algorithms to cause surge pricing:

According to the study, drivers manipulate Uber’s algorithm by logging out of the app at the same time, making it think that there is a shortage of cars.

[…]

The study said drivers have been coordinating forced surge pricing, after interviews with drivers in London and New York, and research on online forums such as Uberpeople.net. In a post on the website for drivers, seen by the researchers, one person said: “Guys, stay logged off until surge. Less supply high demand = surge.”

Passengers, of course, have long had tricks to avoid surge pricing.

I expect to see more of this sort of thing as algorithms become more prominent in our lives.

Concerns About The Blockchain Technology

Post Syndicated from Bozho original https://techblog.bozho.net/concerns-blockchain-technology/

The so-called (and marketing-branded) “blockchain technology” is promised to revolutionize every industry. Anything, they say, will become decentralized, free from middlemen or government control. Services will thrive on various installments of the blockchain, and smart contracts will automatically enforce any logic that is related to the particular domain.

I don’t mind having another technological leap (after the internet), and given that I’m technically familiar with the blockchain, I may even be part of it. But I’m not convinced it will happen, and I’m not convinced it’s going to be the next internet.

If we strip the hype, the technology behind Bitcoin is indeed a technical masterpiece. It combines existing techniques (like hash chains and Merkle trees) with a very good proof-of-work-based consensus algorithm. And it creates a digital currency, which, on top of being worth billions now, is simply cool.
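
For readers who haven’t met those building blocks, here is a toy sketch of a hash chain secured by proof of work; it illustrates the concepts only and is nothing like Bitcoin’s actual data structures or parameters:

import hashlib
import json

DIFFICULTY = 4  # leading zeros required; a toy difficulty target

def mine_block(prev_hash, payload):
    # Proof of work: search for a nonce whose block hash meets the target.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": payload, "nonce": nonce}
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return block, digest
        nonce += 1

# Each block commits to its predecessor's hash, so tampering with any
# block invalidates every hash after it; the proof of work makes
# producing a competing chain expensive.
genesis, h0 = mine_block("0" * 64, "genesis")
block1, h1 = mine_block(h0, "some transactions")
print(h0, h1)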

But will this technology be mass-adopted, and will mass adoption allow it to retain the technological benefits it has?

First, I’d like to nitpick a little bit – if anyone is speaking about “decentralized software” when referring to “the blockchain”, be suspicious. Bitcoin and other peer-to-peer overlay networks are in fact “distributed” (see the pictures here). “Decentralized” means having multiple providers, but doesn’t mean each user will be a full-featured node on the network. This nitpicking is actually part of another argument, but we’ll get to that.

If blockchain-based applications want to reach mass adoption, they have to be user-friendly. I know I’m being captain obvious here (and fortunately some of the people in the area have realized that), but with the current state of the technology, it’s impossible for end users to even get it, let alone use it.

My first serious concern is usability. To begin with, you need to download the whole blockchain on your machine. When I got my first bitcoin several years ago (when it was still 10 euro), the blockchain was kind of small and I didn’t notice that problem. Nowadays both the Bitcoin and Ethereum blockchains take ages to download. I still haven’t managed to download the Ethereum one – after several bugs and reinstalls of the client, I’m still at 15%. And we are just at the beginning. A user just will not wait for days to download something in order to be able to start using a piece of technology.

I recently proposed downloading snapshots of the blockchain via bittorrent to be included in the Ethereum protocol itself. I know that snapshots of the Bitcoin blockchain have been distributed that way, but it has been a manual process. If a client can quickly download the huge file up to a recent point, and then only download the latest blocks in the traditional way, starting up may be easier. Of course, the whole chain would have to be verified, but maybe that can be a background process that doesn’t stop you from using whatever is built on top of the particular blockchain. (I’m not sure whether that would be secure enough, or whether, say, potential Sybil attacks on the bittorrent part would make it undesirable; it’s just an idea.)

But even if such an approach works and is adopted, that would still mean that for every service you’d have to download a separate blockchain. Of course, projects like Ethereum may seem like the “one-stop shop” for cool blockchain-based applications, but fragmentation is already happening – there are alt-coins bundled with various services like file storage, DNS, etc. That will not be workable for end users. And it’s certainly not an option for mobile, which is the dominant client now. If, instead of downloading the entire chain, something like consistent hashing is used to distribute the content in small portions among clients, it might be workable. But how trust would work in that case, I don’t know. Maybe it’s possible, maybe not.
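
To make the consistent-hashing idea concrete, here is a toy hash ring assigning chain segments to clients; this is an illustration only, and as said above, how trust and verification would work in such a scheme remains the open question:

import hashlib
from bisect import bisect

def h(key):
    # Hash a string onto the ring (a large integer space).
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

clients = ["client-a", "client-b", "client-c"]
ring = sorted((h(c), c) for c in clients)

def owner(segment_id):
    # Walk clockwise to the first client at or after the segment's hash,
    # wrapping around the ring; each client stores only its own segments.
    keys = [k for k, _ in ring]
    idx = bisect(keys, h(segment_id)) % len(ring)
    return ring[idx][1]

print(owner("blocks-0-9999"), owner("blocks-10000-19999"))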

And yes, I know that you don’t necessarily have to install a wallet/client in order to make use of a given blockchain – you can just have a cloud-based wallet. Which is fairly convenient, but that gets me to my nitpicking from a few paragraphs above and to my second concern – this effectively turns a distributed system into a decentralized one. A limited number of cloud providers hold most of the data (just as a limited number of miners hold most of the processing power). And then, even though the underlying technology allows for a distributed deployment, we’ll end up again with something simply decentralized, or even de facto centralized if mergers and acquisitions lead us there (and they probably will). And in order to be able to access our wallets/accounts from multiple devices, we’d use a convenient cloud service where we’d log in with our username and password (because the private key is just too technical and hard for regular users). And that seems to defeat the whole idea.

Not only that, but there is an inevitable centralization of decisions (who decides on the size of the block, who has commit rights to the client repository) as well as a hidden centralization of power – how much GPU power do the Chinese mining “farms” control, and can they influence the network significantly? And will the average user ever know that or care (as they don’t care that Google is centralized)? I think that overall, distributed technologies will follow the power law, and the majority of data/processing power/decision power will be controlled by a minority of actors. And so our distributed utopia will not happen in the purest form we dream of.

My third concern is incentive. Distributed technologies that have been successful so far have a pretty narrow set of incentives. The internet was promoted by large public institutions, including government agencies and big universities. Bittorrent was successful mainly because it allowed free movies and songs with 2 clicks of the mouse. And Bitcoin was successful because it offered financial benefits. I’m oversimplifying of course, but “government effort”, “free & easy” and “source of more money” seem to have been the successful incentives. On the other side of the fence there are dozens of failed distributed technologies. I’ve tried many of them – alternative search engines, alternative file storage, alternative ride-sharing, alternative social networks, even alternative “internets”. None have gained traction, because they are not easier to use than their free competitors and you can’t make money out of them (and no government bothers promoting them).

Will blockchain-based services have sufficient incentives to drive customers? Will centralized competitors just easily crush the distributed alternatives by being cheaper, more user-friendly, and having sales departments that can target more than the hardcore geeks who have no problem syncing their blockchain via the command line? The utopian slogans seem very cool to idealists and futurists, but they don’t sell. “Free from centralized control, full control over your data” – we’d have to go through a long process of cultural change before these things make sense to more than a handful of people.

Speaking of services, examples often include “the sharing economy”, where one stranger offers a service to another stranger. Blockchain technology seems like a good fit here indeed – the services are by nature distributed, so why should the technology be centralized? Here comes my fourth concern – identity. While for cryptocurrencies it’s actually beneficial to be anonymous, for most real-world services (i.e. the industries that ought to be revolutionized) this is not an option. You can’t just get into the car of publicKey=5389BC989A342…. “But there are already distributed reputation systems”, you may say. Yes, and they are based on technical, not real-world identities. That doesn’t build trust. I don’t trust that publicKey=5389BC989A342… is the same person that got the high reputation. There may be five people behind that private key. The private key may have been stolen (e.g. in a cloud-provider breach).

The value of companies like Uber and Airbnb is that they serve as trust brokers. They verify and vouch for their drivers and hosts (and passengers and guests). They verify their identity through government-issued documents, Skype calls, and selfies; they compare pictures to documents; they get access to government databases, credit records, etc. Can a fully distributed service do that? No. You’d need a centralized provider to do it. And how would the blockchain make any difference then? Well, I may not be entirely correct here. I’ve actually been thinking quite a lot about decentralized identity – e.g. a way to predictably generate a private key based on, say, biometrics+password+government-issued-documents, and use the corresponding public key as your identifier, which is then fed into reputation schemes and ultimately real-world services. But we’re not there yet.
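
As a toy illustration of that direction, a stable secret could be derived from several identity factors with a standard key-derivation function. This is a sketch of the idea, not a vetted scheme; real biometric readings are noisy and would first need something like a fuzzy extractor to become reproducible:

import hashlib

def derive_identity_key(biometric_template, password, document_number):
    # Combine the factors into one byte string. The biometric part is
    # assumed to be already reproducible, which is exactly the hard part.
    material = b"|".join([biometric_template, password.encode(), document_number.encode()])
    # PBKDF2 stretches the combined input so brute-forcing any single
    # factor stays expensive; the salt here is a fixed toy value.
    return hashlib.pbkdf2_hmac("sha256", material, b"identity-salt", 200_000)

# In practice this output would seed an asymmetric keypair, and the
# public half would become the identifier fed into reputation schemes.
key = derive_identity_key(b"stable-template-bytes", "correct horse battery staple", "AB1234567")
print(key.hex())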

And that is part of my fifth concern – the technology itself. We are not there yet. There are bugs, there are thefts and leaks. There are hard forks. There isn’t sufficient understanding of the technology (I confess I don’t fully grasp all the implementation details, and they are always the key). The technology is often advertised as “just working”, but it isn’t. The other day I read an article (lost the link) that clarifies a common misconception about smart contracts – they cannot interact with the outside world. They can’t call APIs (e.g. stock market prices, bank APIs), and they can’t push or fetch data from anywhere but the blockchain. That mandates the need, again, for a centralized service that pushes the relevant information before smart contracts can pick it up. I’m pretty sure most of the cool-sounding applications are not possible without extensive research. And even if/when they are, writing distributed code is hard. Debugging a smart contract is hard. Yes, hard is cool, but that doesn’t drive economic value.

I have mostly been referring to public blockchains so far. Private blockchains may have their practical applications, but there’s one catch – they are not exactly the cool distributed technology that Bitcoin uses. They may be called “blockchains” because they…chain blocks, but they usually centralize trust. For example, the Hyperledger project uses PKI, with all its benefits and risks. In these cases, a centralized authority issues the identity “tokens”, and then nodes communicate and form a shared ledger. That’s a somewhat easier problem to solve, and the nodes would usually be on actual servers in real datacenters, and not on your uncle’s Windows XP.

That said, hash chaining has been around for quite a long time. I did research on the matter because of a side project of mine, and it seems that providing a tamper-proof/tamper-evident log/database on semi-trusted machines has been discussed in many computer science papers since the 90s. That alone is not “the magic blockchain” that will solve all of our problems, no matter what gossip protocols you sprinkle on top. I’m not saying that’s bad, on the contrary – any variation and combination of the building blocks of the blockchain (the hash chain, the consensus algorithm, the proof of work (or stake), possibly smart contracts) has potential for making useful products.

I know I sound like a naysayer here, but I hope I’ve pointed out particular issues, rather than aimlessly ranting at the hype (though that’s tempting as well). I’m confident that blockchain-like technologies will have their practical applications, and we will see some successful, widely adopted services and solutions based on them, just as pointed out in this detailed report. But I’m not convinced it will be revolutionizing.

I hope I’m proven wrong, though, because watching a revolutionizing technology closely and even being part of it would be quite cool.

The post Concerns About The Blockchain Technology appeared first on Bozho's tech blog.

timeShift(GrafanaBuzz, 1w) Issue 5

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/21/timeshiftgrafanabuzz-1w-issue-5/

We cover a lot of ground in this week’s timeShift: from diving into building your own plugin and finding the right dashboard to configuration options in the alerting feature and monitoring your local weather, there’s something for everyone. Are you writing an article about Grafana, or have you come across an article you found interesting? Please get in touch and we’ll add it to our roundup.


From the Blogosphere

  • Going open-source in monitoring, part III: 10 most useful Grafana dashboards to monitor Kubernetes and services: We have hundreds of pre-made dashboards ready for you to install into your on-prem or hosted Grafana, but not every one will fit your specific monitoring needs. In part three of the series, Sergey discusses his experiences with finding useful dashboards and shows off ten of the best dashboards you can install for monitoring Kubernetes clusters and the services deployed on them.

  • Using AWS Lambda and API gateway for server-less Grafana adapters: Sometimes you’ll want to visualize metrics from a data source that may not yet be supported in Grafana natively. With the plugin functionality introduced in Grafana 3.0, anyone can create their own data sources. Using the SimpleJson data source, Jonas describes how he used AWS Lambda and AWS API gateway to write data source adapters for Grafana (a minimal sketch of that adapter shape follows this list).

  • How to Use Grafana to Monitor JMeter Non-GUI Results – Part 2: A few issues ago we listed an article for using Grafana to monitor JMeter Non-GUI results, which required a number of non-trivial steps to complete. This article shows off an easier way to accomplish this that doesn’t require any additional configuration of InfluxDB.

  • Programming your Personal Weather Chart: It’s always great to see Grafana used outside of the typical DevOps use case. This article runs you through the steps to create your own weather chart and show off your local weather stats in Grafana. BONUS: Rob shows off a magic mirror he created, which can display this data.

  • vSphere Performance data – Part 6 – The Dashboard(s): This 6-part series goes into a ton of detail and walks you through the various methods of retrieving vSphere performance data, storing the data in a TSDB, and creating dashboards for the metrics. Part 6 deals specifically with Grafana, but I highly recommend reading all of the articles, as it chronicles the journey of metrics exploration, storage, and visualization from someone who had no prior experience with time series data.

  • Alerting in Grafana: Alerting in Grafana is a fairly new feature and one that we’re continuing to iterate on. We’re soon adding additional data source support, new notification channels, clustering, silencing rules, and more. This article steps you through all the configuration options to get you to your first alert.


Plugins and Dashboards

It can seem like work slows during July and August, but we’re still seeing a lot of activity in the community. This week we have a new graph panel to show off that gives you some unique-looking dashboards, and an update to the Zabbix data source, which adds some really great features. You can install both of the plugins now on your on-prem Grafana via our CLI, or with one click on GrafanaCloud.

NEW PLUGIN

Bubble Chart Panel This super-cool looking panel groups your tag values into clusters of circles. The size of the circle represents the aggregated value of the time series data. There are also multiple color schemes to make those bubbles POP (pun intended)! Currently it works against OpenTSDB and Bosun, so give it a try!

Install Now

UPDATED PLUGIN

Zabbix Alex has been hard at work, making improvements on the Zabbix App for Grafana. This update adds annotations, template variables, alerting and more. Thanks Alex! If you’d like to try out the app, head over to http://play.grafana-zabbix.org/dashboard/db/zabbix-db-mysql?orgId=2

Install 3.5.1 Now


This week’s MVC (Most Valuable Contributor)

Open source software can’t thrive without the contributions from the community. Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback.

mk-dhia (Dhia)
Thank you so much for your improvements to the Elasticsearch data source!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

This week’s tweet comes from @geek_dave

Great looking dashboard Dave! And thank you for adding new features and keeping it updated. It’s creators like you who make the dashboard repository so awesome!


Upcoming Events

We love when people talk about Grafana at meetups and conferences.

Monday, July 24, 2017 – 7:30pm | Google Campus Warsaw
Ząbkowska 27/31, Warsaw, Poland

IoT & Home Automation #3 (openHAB, InfluxDB, Grafana):
If you are interested in the Internet of Things and home automation, this is a good occasion to meet like-minded people. We will also show you how we can all work together on common projects.

RSVP


Tell us how we’re doing.

We’d love your feedback on what kind of content you like, length, format, etc – so please keep the comments coming! You can submit a comment on this article below, or post something at our community forum. Help us make this better.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 4

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/14/timeshiftgrafanabuzz-1w-issue-4/

The summer seems to be flying by! This week’s timeShift has a lot of great articles to share, including a Grafana presentation from one of our software engineers, Kubernetes monitoring, dashboard exports and backups via grafcli, scaling Graphite on AWS and a lot more. If you’ve come across a recent article about Grafana, or are writing one yourself, please get in touch, we’d be happy to feature it here. From the Blogosphere Democratizing Metrics with Grafana: Grafana Labs software developer Alexander Zobnin, recently gave a great talk at the Big Monitoring Meetup in St.

Manage Kubernetes Clusters on AWS Using Kops

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/

Any containerized application typically consists of multiple containers. There is a container for the application itself, one for the database, possibly another for the web server, and so on. During development, it’s normal to build and test this multi-container application on a single host. This approach works fine during early dev and test cycles, but in production, where the availability of the application is critical, a single host becomes a single point of failure. In that case, the multi-container application is deployed on multiple hosts, and an external tool is needed to manage such a multi-container, multi-host deployment. Container orchestration frameworks provide cluster management, scheduling of containers on different hosts, service discovery and load balancing, crash recovery, and other related functionality. There are multiple options for container orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.

Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.

Kubernetes and Kops overview

Kubernetes is an open source, container orchestration platform. Applications packaged as Docker images can be easily deployed, scaled, and managed in a Kubernetes cluster. Some of the key features of Kubernetes are:

  • Self-healing
    Failed containers are restarted to ensure that the desired state of the application is maintained. If a node in the cluster dies, the containers are rescheduled on a different node. Containers that do not respond to application-defined health checks are terminated, and thus rescheduled.
  • Horizontal scaling
    The number of containers can be scaled up and down automatically based on CPU utilization, or manually using a command.
  • Service discovery and load balancing
    Multiple containers can be grouped together and made discoverable using a DNS name. The service can be load balanced with integration with the cloud provider’s native load balancer.
  • Application upgrades and rollbacks
    Applications can be upgraded to a newer version without impacting the existing one. If something goes wrong, Kubernetes rolls back the change.

Kops, short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. A rolling upgrade of an older version of Kubernetes to a new version can also be performed. It also manages the cluster add-ons. After the cluster is created, the usual kubectl CLI can be used to manage resources in the cluster.

Download Kops and Kubectl

There is no need to download the Kubernetes binary distribution for creating a cluster using kops. However, you do need to download the kops CLI. It then takes care of downloading the right Kubernetes binary in the cloud, and provisions the cluster.

The different download options for kops are explained at github.com/kubernetes/kops#installing. On MacOS, the easiest way to install kops is using the brew package manager.

brew update && brew install kops

The version of kops can be verified using the kops version command, which shows:

Version 1.6.1

In addition, download kubectl. This is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

Make sure to include the directory where kubectl is downloaded in your PATH.
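On MacOS, that might look like the following sketch; the install location is a common choice, not a requirement:

# make the binary executable and move it onto the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# verify that the client responds
kubectl version --client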

IAM user permission

The IAM user to create the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

Alternatively, a new IAM user may be created and the policies attached as explained at github.com/kubernetes/kops/blob/master/docs/aws.md#setup-iam-user.
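If you prefer to script that step, the following sketch shows one way to do it with the AWS CLI; the kops group and user names here are illustrative choices, not something kops requires:

# create a dedicated group and attach the required policies
aws iam create-group --group-name kops
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess

# create the user, add it to the group, and generate access keys
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops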

Create an Amazon S3 bucket for the Kubernetes state store

Kops needs a “state store” to store configuration information about the cluster: for example, the number of nodes, the instance type of each node, and the Kubernetes version. The state is stored during the initial cluster creation, and any subsequent changes to the cluster are persisted to this store as well. As of publication, Amazon S3 is the only supported storage mechanism. Create an S3 bucket and pass it to the kops CLI during cluster creation.

This post uses the bucket name kubernetes-aws-io. Bucket names must be unique; you have to use a different name. Create an S3 bucket:

aws s3api create-bucket --bucket kubernetes-aws-io

I strongly recommend versioning this bucket in case you ever need to revert or recover a previous version of the cluster. This can be enabled using the AWS CLI as well:

aws s3api put-bucket-versioning --bucket kubernetes-aws-io --versioning-configuration Status=Enabled

For convenience, you can also define KOPS_STATE_STORE environment variable pointing to the S3 bucket. For example:

export KOPS_STATE_STORE=s3://kubernetes-aws-io

This environment variable is then used by the kops CLI.

DNS configuration

As of Kops 1.6.1, a top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. This is also needed for kubectl to be able to talk directly with the master.

This domain may be registered with AWS, in which case a Route 53 hosted zone is created for you. Alternatively, this domain may be at a different registrar. In this case, create a Route 53 hosted zone. Specify the name server (NS) records from the created zone as NS records with the domain registrar.

This post uses a kubernetes-aws.io domain registered at a third-party registrar.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name cluster.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

This shows an output such as the following:

[
"ns-94.awsdns-11.com",
"ns-1962.awsdns-53.co.uk",
"ns-838.awsdns-40.net",
"ns-1107.awsdns-10.org"
]

Create NS records for the domain with your registrar. Different options on how to configure DNS for the cluster are explained at github.com/kubernetes/kops/blob/master/docs/aws.md#configure-dns.

Experimental support to create a gossip-based cluster was added in Kops 1.6.2. This post uses a DNS-based approach, as that is more mature and well tested.
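If you do want to experiment with the gossip-based approach, the only visible difference is that the cluster name must end in .k8s.local, which tells kops to skip DNS entirely. A minimal sketch (the create command itself is covered in the next section):

kops create cluster \
--name cluster.k8s.local \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--yes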

Create the Kubernetes cluster

The Kops CLI can be used to create a highly available cluster, with multiple master nodes spread across multiple Availability Zones. Workers can be spread across multiple zones as well. Some of the tasks that happen behind the scenes during cluster creation are:

  • Provisioning EC2 instances
  • Setting up AWS resources such as networks, Auto Scaling groups, IAM users, and security groups
  • Installing Kubernetes

Start the Kubernetes cluster using the following command:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--yes

In this command:

  • --zones
    Defines the zones in which the cluster is going to be created. Multiple comma-separated zones can be specified to span the cluster across multiple zones.
  • --name
    Defines the cluster’s name.
  • --state
    Points to the S3 bucket that is the state store.
  • --yes
    Immediately creates the cluster. Otherwise, only the cloud resources are created, and the cluster needs to be started explicitly using the command kops update cluster --yes. If the cluster needs to be edited, then the kops edit cluster command can be used.

This starts a single master and two worker node Kubernetes cluster. The master is in an Auto Scaling group and the worker nodes are in a separate group. By default, the master node is m3.medium and the worker node is t2.medium. Master and worker nodes are assigned separate IAM roles as well.

Wait for a few minutes for the cluster to be created. The cluster can be verified using the command kops validate cluster --state=s3://kubernetes-aws-io. It shows the following output:

Using cluster from kubectl context: cluster.kubernetes-aws.io

Validating cluster cluster.kubernetes-aws.io

INSTANCE GROUPS
NAME                 ROLE      MACHINETYPE    MIN    MAX    SUBNETS
master-us-west-2a    Master    m3.medium      1      1      us-west-2a
nodes                Node      t2.medium      2      2      us-west-2a

NODE STATUS
NAME                                           ROLE      READY
ip-172-20-38-133.us-west-2.compute.internal    node      True
ip-172-20-38-177.us-west-2.compute.internal    master    True
ip-172-20-46-33.us-west-2.compute.internal     node      True

Your cluster cluster.kubernetes-aws.io is ready

It shows the different instances started for the cluster, and their roles. If multiple cluster states are stored in the same bucket, then --name <NAME> can be used to specify the exact cluster name.

Check all nodes in the cluster using the command kubectl get nodes:

NAME                                          STATUS         AGE       VERSION
ip-172-20-38-133.us-west-2.compute.internal   Ready,node     14m       v1.6.2
ip-172-20-38-177.us-west-2.compute.internal   Ready,master   15m       v1.6.2
ip-172-20-46-33.us-west-2.compute.internal    Ready,node     14m       v1.6.2

Again, the name of each node (derived from its internal IP address), its role (master or node), and its age are shown. The key information here is the Kubernetes version of each node in the cluster, 1.6.2 in this case.

The kubectl value included in the PATH earlier is configured to manage this cluster. Resources such as pods, replica sets, and services can now be created in the usual way.
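For example, a quick smoke test might deploy nginx and expose it through an AWS load balancer. A minimal sketch using the kubectl idioms of the 1.6 era (the deployment name is arbitrary):

# run two nginx replicas (this creates a deployment)
kubectl run nginx --image=nginx --replicas=2

# expose the deployment through an AWS ELB
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# wait for the EXTERNAL-IP column to be populated
kubectl get service nginx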

Some of the common options that can be used to override the default cluster creation are:

  • --kubernetes-version
    The version of Kubernetes cluster. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.
  • --master-size and --node-size
    Define the instance type of the master and worker nodes.
  • --master-count and --node-count
    Define the number of master and worker nodes. By default, a master is created in each zone specified by --master-zones. Multiple master nodes can be created by specifying a higher number with --master-count, or by listing multiple Availability Zones in --master-zones.

A three-master and five-worker node cluster, with master nodes spread across different Availability Zones, can be created using the following command:

kops create cluster \
--name cluster2.kubernetes-aws.io \
--zones us-west-2a,us-west-2b,us-west-2c \
--master-zones us-west-2a,us-west-2b,us-west-2c \
--node-count 5 \
--state s3://kubernetes-aws-io \
--yes

Both clusters share the same state store but have different names. This also requires you to create an additional Amazon Route 53 hosted zone for the new name.

By default, the resources required for the cluster are directly created in the cloud. The --target option can be used to generate the AWS CloudFormation scripts instead. These scripts can then be used by the AWS CLI to create resources at your convenience.
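A sketch of what that might look like; the cloudformation target name and the --out output directory flag are assumptions based on the kops documentation rather than something exercised in this post:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--target=cloudformation \
--out .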

Get a complete list of options for cluster creation with kops create cluster --help.

More details about the cluster can be seen using the command kubectl cluster-info:

Kubernetes master is running at https://api.cluster.kubernetes-aws.io
KubeDNS is running at https://api.cluster.kubernetes-aws.io/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the client and server version using the command kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Both the client and the server are on version 1.6, as shown by the Major and Minor attribute values.

Upgrade the Kubernetes cluster

Kops can be used to create a Kubernetes 1.4.x, 1.5.x, or an older version of the 1.6.x cluster using the --kubernetes-version option. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.

Or, you may have used kops to create a cluster a while ago, and now want to upgrade to the latest recommended version of Kubernetes. Kops supports rolling cluster upgrades where the master and worker nodes are upgraded one by one.

As of kops 1.6.1, upgrading a cluster is a three-step process.

First, check and apply the latest recommended Kubernetes update.

kops upgrade cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

The --yes option immediately applies the changes. Omitting the --yes option shows only the changes that would be applied.

Second, update the state store to match the cluster state. This can be done using the following command:

kops update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Lastly, perform a rolling update for all cluster nodes using the kops rolling-update command:

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Previewing the changes before updating the cluster can be done using the same command but without specifying the --yes option. This shows the following output:

NAME                 STATUS        NEEDUPDATE    READY    MIN    MAX    NODES
master-us-west-2a    NeedsUpdate   1             0        1      1      1
nodes                NeedsUpdate   2             0        2      2      2

Using --yes updates all nodes in the cluster, first the master and then the worker nodes. There is a 5-minute delay between restarting master nodes, and a 2-minute delay between restarting worker nodes. These values can be altered using the --master-interval and --node-interval options, respectively.

Only the worker nodes may be updated by passing the --instance-group option; in the cluster created above, the worker instance group is named nodes, as shown in the validate output.
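A minimal sketch, reusing the cluster from the upgrade example above:

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--instance-group nodes \
--yes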

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster using the kops command. This ensures that all resources created by the cluster are appropriately cleaned up.

The command to delete the Kubernetes cluster is:

kops delete cluster --state=s3://kubernetes-aws-io --yes

If multiple clusters have been created, then specify the cluster name as in the following command:

kops delete cluster cluster2.kubernetes-aws.io --state=s3://kubernetes-aws-io --yes

Conclusion

This post explained how to manage a Kubernetes cluster on AWS using kops. The Kubernetes on AWS users page provides a self-published list of companies using Kubernetes on AWS.

Try starting a cluster, creating a few Kubernetes resources, and then tearing it down. Kops on AWS provides a more comprehensive tutorial for setting up Kubernetes clusters. The kops docs are also helpful for understanding the details.

In addition, the kops team hosts office hours to help you get started, from answering questions to guiding you through your first pull request. You can always join the #kops channel on the Kubernetes Slack to ask questions. If nothing works, then file an issue at github.com/kubernetes/kops/issues.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

— Arun

Kubernetes 1.7 released

Post Syndicated from corbet original https://lwn.net/Articles/726900/rss

Version 1.7 of the Kubernetes orchestration system is out. “At-a-glance, security enhancements in this release include encrypted secrets, network policy for pod-to-pod communication, node authorizer to limit kubelet access and client / server TLS certificate rotation.

For those of you running scale-out databases on Kubernetes, this release has a major feature that adds automated updates to StatefulSets and enhances updates for DaemonSets. We are also announcing alpha support for local storage and a burst mode for scaling StatefulSets faster.”

DevOps Cafe Episode 72 – Kelsey Hightower

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/6/18/devops-cafe-episode-72-kelsey-hightower.html

You can’t contain(er) Kelsey.

John and Damon chat with Kelsey Hightower (Google) about the future of operations, kubernetes, docker, containers, self-learning, and more!

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Kelsey Hightower on Twitter: @kelseyhightower


Please tweet or leave comments or questions below and we’ll read them on the show!

“Only a year? It’s felt like forever”: a twelve-month retrospective

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/12-months-raspberry-pi/

This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.

Alex, Matt, and Courtney in a punt on the Cam

The day Liz decided to keep me

So here it is!

Joining the crew

Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.

… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething


A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.

Ticking items off the Bucket List

I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.

High altitude ballooning (HAB)

Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch, build a payload with an onboard Raspberry Pi and Camera Module, and gain the skills needed to take their students on an adventure to near space, with photographic evidence to prove it.

All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi


I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.

RAF firing range sign

“Can we have our balloon back, please, mister?”

Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.

Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude


You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.

Dear Raspberry Pi Friends…

My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.

Letter of thanks to Raspberry Pi from a young fan

*heart melts*

By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.

It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.

Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.

You’re all brilliant.

The Queens of Robots, both shoddy and otherwise

Last year we welcomed the Queen of Shoddy Robots, Simone Giertz, to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.

Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website – www.raspberrypi.org. . How was your day? Get up to anything fun?


And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects.

Estefannie on Twitter

Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!

Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.

Those ‘wow’ moments

Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.

Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skill.

Museum in a Box on Twitter

Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech 🎈⛅🛰📚🤖

Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.

Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.

Jillian Ogle on Twitter

@SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… https://t.co/1tqFlMNS31

Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place.

Space

We have Raspberry Pis in space. SPACE. Actually space.

Raspberry Pi on Twitter

New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home 🚀 https://t.co/ycTSDR1h1Q

Twelve months later, this still blows my mind.

And let’s not forget…

  • The chance to visit both the Houses of Parliament and St James’s Palace

Raspberry Pi team at the Houses of Parliament

  • Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe

There’s no need to smile when you’re #DoctorWho.


We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube


  • Making a GIF Cam and other builds, and sharing them with you all via the blog

Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi


The next twelve months

Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.

It’s been a pleasure. Thank you for joining me on the ride!

The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi.

CJEU: Uber and Uberization

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/05/15/uber/

The Opinion of Advocate General Szpunar in Case C‑434/15, Asociación Profesional Elite Taxi v Uber Systems Spain, SL, has been published.

Uber is the name of an electronic platform developed by Uber Technologies Inc., headquartered in San Francisco (United States). In the European Union, the Uber platform is operated by Uber BV, a company incorporated under Dutch law and a subsidiary of Uber Technologies. The platform makes it possible, via a smartphone with the Uber application installed, to order an urban transport service in the cities it covers. The application recognizes the user’s location and finds available drivers nearby. When a driver accepts the trip, the application notifies the user, displaying the driver’s profile and an estimated fare for the journey to the destination specified by the user. After the trip, the fare is automatically charged to the bank card the user is required to register in the application. The application also includes a rating feature: passengers can rate drivers, and drivers can rate passengers. An average rating below a certain threshold may lead to exclusion from the platform.

The subject matter of the main proceedings:

the service known as UberPop, in which private individuals, non-professional drivers, transport passengers in their own vehicles. Fares are set by the platform operator on the basis of the distance and duration of the trip. They vary according to demand at a given moment, so that at times of peak demand the fare may be several times higher than the base rate. The application calculates the fare, which is automatically collected by the platform operator; the operator then retains part of it as commission, usually between 20% and 25%, and pays the remainder to the driver.

The interpretation sought from the Court concerns only Uber’s legal position under EU law, in order to determine whether, and to what extent, that law applies to its activity: whether any regulation of the conditions under which Uber operates must comply with the requirements of EU law, first and foremost the freedom to provide services, or whether the regulation of those conditions falls within the shared competence of the European Union and the Member States in the field of local transport.

The dispute:

since neither Uber Spain nor the owners or drivers of the vehicles concerned hold the licences and authorisations required under Barcelona’s taxi regulations, the professional taxi drivers’ association brought an action against Uber Systems Spain for unfair competition, seeking an order requiring it to cease its unfair conduct, consisting in the provision of on-demand booking services via mobile devices and the internet through the Uber digital platform in Spain, and prohibiting it from engaging in that activity in the future.

Questions referred for a preliminary ruling by the Commercial Court of Barcelona (there are four in total):

Inasmuch as Article 2(2)(d) of [Directive 2006/123] excludes transport activities from the scope of that directive, must the activity carried out for profit by the defendant, consisting in acting as an intermediary between the owner of a vehicle and a person who needs to make a journey within a city, by managing IT resources (an interface and a software application, or “smartphones and technological platform” in the defendant’s words) which enable those persons to connect with one another, be considered merely a transport activity, or must it be regarded as an electronic intermediary service, that is to say, an information society service within the meaning of Article 1(2) of [Directive 98/34]?

In determining the legal nature of that activity, can it be considered partly an information society service, or is it a transport service?

The Opinion:

Uber is usually described as an undertaking (or platform) of the so-called “sharing economy”. It certainly cannot be regarded as a ride-sharing platform, because its drivers offer passengers a transport service to a destination chosen by the passenger and are paid an amount that goes well beyond mere reimbursement of the expenses incurred. It is therefore a classic transport service.

Article 2(a) of Directive 2000/31/EC (the Directive on electronic commerce) should be interpreted as meaning that a service consisting in connecting, via mobile telephone software, potential passengers with drivers offering individual urban transport on demand does not constitute an information society service where the service provider exercises control over the key conditions of the transport supplied in that context, in particular its price.

It is a transport service.

Filed under: Digital, EU Law, Media Law Tagged: CJEU

Analyze your GitHub Project With Elasticsearch And Grafana

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/05/10/analyze-your-github-project-with-elasticsearch-and-grafana/

The Dream

I have, for a long time, wished there was a way to easily export GitHub issues and comments to
Elasticsearch. The standard GitHub graphs for commits and traffic are great but I have
really been missing graphs and analytics on issues and comments.

If we had issues & comments in Elasticsearch, with a well-defined index mapping, we could do some
interesting analytics. For example:

  • Look at project history in terms of issues created
  • Look at project history in terms of comments (can be a measure of community engagement)
  • See how different labels trend over time.
  • Look at distributions (histograms) on the number of issues or comments created per user. Are there a few very active users that represent 70% or 90% of all issues & comments?
  • How long do PRs stay open?
  • How long until issues get their first response?

Why Elasticsearch?

Grafana is most often used with time series databases like Graphite, but for this sort of use case,
it’s about much more than measurements. Part of the power of Grafana is bringing together data from
many different places, and leveraging the strengths of its diverse set of data sources.

Elasticsearch isn’t technically a time series database, but it’s been one of our fastest-growing data sources
because it really shines for use cases like this. Plus, Grafana’s support for Elasticsearch is getting
better and better.

Elasticsearch is not only a document search DB. Its real power is in the kinds of aggregations you can do. It’s not ideal
for the high-volume, high-resolution time series workloads that most time series databases can handle, but for
data with high cardinality (like documents with usernames, issue numbers, etc.) it can really shine. It also allows
you to do ad-hoc filtering in a way that time series databases would not, as that would require a unique time series
for every possible filter condition and value.

The GitHub API Crawler

So a few weekends ago I had some leftover programming energy and spent a few hours hacking together
this node.js app that uses the GitHub API to crawl all issues and comments, which it
then saves as separate documents in Elasticsearch.

It stores them in Elasticsearch with this index mapping:

"mappings": {
  "issue": {
    "properties": {
      "title":            { "type": "text"  },
        "state":            { "type": "keyword"  },
        "repo":             { "type": "keyword"  },
        "labels":           { "type": "keyword"  },
        "number":           { "type": "keyword"  },
        "comments":         { "type": "long"  },
        "assignee":         { "type": "keyword"  },
        "user_login":       { "type": "keyword"  },
        "milestone":        { "type": "keyword"  },
        "created_at":       { "type": "date"  },
        "closed_at":        { "type": "date"  },
        "updated_at":       { "type": "date"  },
        "is_pull_request":  { "type": "boolean"  },
    }
  },
    "comment": {
      "properties": {
        "issue":           { "type": "keyword"  },
        "repo":            { "type": "keyword"  },
        "user_login":      { "type": "keyword"  },
        "created_at":      { "type": "date"     },
      }
    }
}

There are some more numeric fields being saved for reactions that do not need to be defined
in the index mapping.
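
As a small taste of what this mapping enables, here is a sketch of an aggregation query; it assumes Elasticsearch is reachable on localhost and that the index is named github (as configured later in this post). It buckets issues by creation month and splits each bucket by repository:

curl -s -XPOST 'http://localhost:9200/github/issue/_search?size=0' \
  -H 'Content-Type: application/json' -d '
{
  "aggs": {
    "issues_per_month": {
      "date_histogram": { "field": "created_at", "interval": "month" },
      "aggs": {
        "by_repo": { "terms": { "field": "repo" } }
      }
    }
  }
}'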

The Dashboards

With the data finally collected, I built two dashboards; one focused on issues and another one
focused on comments. Both dashboards are templated and allow you to specify which repository
to look at and the granularity (group by time) of the data. You can also add any ad-hoc filter. For example,
only look at issues created by a specific user, or only look at issues with no comments.

Check out the dashboard on our play site. I configured the
github-to-es collector to fetch issues and comments for the main Kubernetes repo, the
main Grafana repo and the Microsoft VS Code editor repository.

The second dashboard shows comment analytics:

Useful How?

I am not exactly sure how useful these dashboards are yet. It was mostly a fun hobby project to see some trends and stats
for issue and comment volume. But the data could also help you track things like issue label stats, stats that could
be used to improve how issues are categorized and to visualize changes in labeling trends. For example, the graphs could answer questions like:
How did a concerted effort to improve docs change the trend of issues labeled question?

Try it and help me improve it

Check out the GitHub repo grafana/github-to-es; it has a basic README with instructions
for how to get started.

Once you have the import working, you need to add an Elasticsearch data source in Grafana. For the index name you specify github,
and for the Timestamp field you specify created_at. Then you can import the two dashboards I published on Grafana.com.

There are some limitations on how many issues and comments can be imported in the initial full import, due to the paging limit
in the GitHub API. The GitHub API returns a maximum of 100 issues or comments per “page” and allows a maximum of 400 pages. This
means that the full import can only handle 40,000 issues and 40,000 comments.

More data & more cool graphs

There are probably many more interesting queries you can build and the collector could also be improved to fetch and store more fields.

For example:

  • Collect stars & fork stats (needs to be recorded as snapshot docs as there is no API to get historical data for this)
  • Calculate time between issue created and first comment during issue fetching to have that as a field on the issue docs
  • PR details; currently the issue API does not include merge status (only a flag indicating whether it’s a PR)
  • Commit docs

There are probably a lot more cool things you can collect & query.

Until next time, keep on graphing!
Torkel Ödegaard
Grafana Creator & Project Lead