Tag Archives: uber

Uber Drivers Hacking the System to Cause Surge Pricing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/uber_drivers_ha.html

Interesting story about Uber drivers who have figured out how to game the company’s algorithms to cause surge pricing:

According to the study, drivers manipulate Uber’s algorithm by logging out of the app at the same time, making it think that there is a shortage of cars.

[…]

The study said drivers have been coordinating forced surge pricing, after interviews with drivers in London and New York, and research on online forums such as Uberpeople.net. In a post on the website for drivers, seen by the researchers, one person said: “Guys, stay logged off until surge. Less supply high demand = surge.”


Passengers, of course, have long had tricks to avoid surge pricing.

I expect to see more of this sort of thing as algorithms become more prominent in our lives.

timeShift(GrafanaBuzz, 1w) Issue 5

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/21/timeshiftgrafanabuzz-1w-issue-5/

We cover a lot of ground in this week’s timeShift. From diving into building your own plugin and finding the right dashboard, to exploring configuration options in the alerting feature and monitoring your local weather, there’s something for everyone. Are you writing an article about Grafana, or have you come across an article you found interesting? Please get in touch and we’ll add it to our roundup.


From the Blogosphere

  • Going open-source in monitoring, part III: 10 most useful Grafana dashboards to monitor Kubernetes and services: We have hundreds of pre-made dashboards ready for you to install into your on-prem or hosted Grafana, but not every one will fit your specific monitoring needs. In part three of the series, Sergey discusses his experiences with finding useful dashboards and shows off ten of the best dashboards you can install for monitoring Kubernetes clusters and the services deployed on them.

  • Using AWS Lambda and API gateway for server-less Grafana adapters: Sometimes you’ll want to visualize metrics from a data source that may not yet be supported in Grafana natively. With the plugin functionality introduced in Grafana 3.0, anyone can create their own data sources. Using the SimpleJson data source, Jonas describes how he used AWS Lambda and AWS API gateway to write data source adapters for Grafana.

  • How to Use Grafana to Monitor JMeter Non-GUI Results – Part 2: A few issues ago we listed an article for using Grafana to monitor JMeter Non-GUI results, which required a number of non-trivial steps to complete. This article shows off an easier way to accomplish this that doesn’t require any additional configuration of InfluxDB.

  • Programming your Personal Weather Chart: It’s always great to see Grafana used outside of the typical DevOps use case. This article runs you through the steps to create your own weather chart and show off your local weather stats in Grafana. BONUS: Rob shows off a magic mirror he created, which can display this data.

  • vSphere Performance data – Part 6 – The Dashboard(s): This 6-part series goes into a ton of detail and walks you through the various methods of retrieving vSphere performance data, storing the data in a TSDB, and creating dashboards for the metrics. Part 6 deals specifically with Grafana, but I highly recommend reading all of the articles, as it chronicles the journey of metrics exploration, storage, and visualization from someone who had no prior experience with time series data.

  • Alerting in Grafana: Alerting in Grafana is a fairly new feature and one that we’re continuing to iterate on. We’re soon adding additional data source support, new notification channels, clustering, silencing rules, and more. This article steps you through all the configuration options to get you to your first alert.


Plugins and Dashboards

It can seem like work slows during July and August, but we’re still seeing a lot of activity in the community. This week we have a new graph panel to show off that gives you some unique looking dashboards, and an update to the Zabbix data source, which adds some really great features. You can install both of the plugins now on your on-prem Grafana via our CLI, or with one click on GrafanaCloud.

NEW PLUGIN

Bubble Chart Panel This super-cool looking panel groups your tag values into clusters of circles. The size of the circle represents the aggregated value of the time series data. There are also multiple color schemes to make those bubbles POP (pun intended)! Currently it works against OpenTSDB and Bosun, so give it a try!

Install Now

UPDATED PLUGIN

Zabbix Alex has been hard at work, making improvements on the Zabbix App for Grafana. This update adds annotations, template variables, alerting and more. Thanks Alex! If you’d like to try out the app, head over to http://play.grafana-zabbix.org/dashboard/db/zabbix-db-mysql?orgId=2

Install 3.5.1 Now


This week’s MVC (Most Valuable Contributor)

Open source software can’t thrive without the contributions from the community. Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback.

mk-dhia (Dhia)
Thank you so much for your improvements to the Elasticsearch data source!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

This week’s tweet comes from @geek_dave

Great looking dashboard Dave! And thank you for adding new features and keeping it updated. It’s creators like you who make the dashboard repository so awesome!


Upcoming Events

We love when people talk about Grafana at meetups and conferences.

Monday, July 24, 2017 – 7:30pm | Google Campus Warsaw


Ząbkowska 27/31, Warsaw, Poland

Iot & HOME AUTOMATION #3 openHAB, InfluxDB, Grafana:
If you are interested in topics of the internet of things and home automation, this might be a good occasion to meet people similar to you. If you are into it, we will also show you how we can all work together on our common projects.

RSVP


Tell Us How We’re Doing

We’d love your feedback on what kind of content you like, length, format, etc – so please keep the comments coming! You can submit a comment on this article below, or post something at our community forum. Help us make this better.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 4

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/14/timeshiftgrafanabuzz-1w-issue-4/

The summer seems to be flying by! This week’s timeShift has a lot of great articles to share, including a Grafana presentation from one of our software engineers, Kubernetes monitoring, dashboard exports and backups via grafcli, scaling Graphite on AWS and a lot more. If you’ve come across a recent article about Grafana, or are writing one yourself, please get in touch, we’d be happy to feature it here. From the Blogosphere Democratizing Metrics with Grafana: Grafana Labs software developer Alexander Zobnin, recently gave a great talk at the Big Monitoring Meetup in St.

Manage Kubernetes Clusters on AWS Using Kops

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/

Any containerized application typically consists of multiple containers. There is a container for the application itself, one for the database, possibly another for the web server, and so on. During development, it’s normal to build and test this multi-container application on a single host. This approach works fine during early dev and test cycles, but becomes a single point of failure in production, where the availability of the application is critical. In such cases, the multi-container application is deployed on multiple hosts, and an external tool is needed to manage such a multi-container, multi-host deployment. Container orchestration frameworks provide capabilities such as cluster management, scheduling containers on different hosts, service discovery and load balancing, and crash recovery. There are multiple options for container orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.

Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.

Kubernetes and Kops overview

Kubernetes is an open source, container orchestration platform. Applications packaged as Docker images can be easily deployed, scaled, and managed in a Kubernetes cluster. Some of the key features of Kubernetes are:

  • Self-healing
    Failed containers are restarted to ensure that the desired state of the application is maintained. If a node in the cluster dies, the containers are rescheduled on a different node. Containers that do not respond to application-defined health checks are terminated, and thus rescheduled.
  • Horizontal scaling
    The number of containers can be easily scaled up and down, automatically based on CPU utilization or manually using a command (see the example after this list).
  • Service discovery and load balancing
    Multiple containers can be grouped together and made discoverable using a DNS name. The service can be load balanced with integration to the native load balancer provided by the cloud provider.
  • Application upgrades and rollbacks
    Applications can be upgraded to a newer version without an impact to the existing one. If something goes wrong, Kubernetes rolls back the change.
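
As a concrete illustration of horizontal scaling, here is a minimal sketch using the standard kubectl commands; my-app is a hypothetical deployment name:

# Scale a deployment to five replicas manually
kubectl scale deployment my-app --replicas=5

# Or let Kubernetes scale between 2 and 10 replicas based on CPU utilization
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80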

Kops, short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. A rolling upgrade of an older version of Kubernetes to a new version can also be performed. It also manages the cluster add-ons. After the cluster is created, the usual kubectl CLI can be used to manage resources in the cluster.

Download Kops and Kubectl

There is no need to download the Kubernetes binary distribution for creating a cluster using kops. However, you do need to download the kops CLI. It then takes care of downloading the right Kubernetes binary in the cloud, and provisions the cluster.

The different download options for kops are explained at github.com/kubernetes/kops#installing. On macOS, the easiest way to install kops is using the brew package manager.

brew update && brew install kops

The version of kops can be verified using the kops version command, which shows:

Version 1.6.1

In addition, download kubectl. This is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

Make sure to include the directory where kubectl is downloaded in your PATH.
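
For example, on macOS the downloaded binary can be made executable and moved onto the PATH as follows (a typical setup; adjust the destination directory to your liking):

# Make the binary executable and move it somewhere on the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl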

IAM user permission

The IAM user to create the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

Alternatively, a new IAM user may be created and the policies attached as explained at github.com/kubernetes/kops/blob/master/docs/aws.md#setup-iam-user.

Create an Amazon S3 bucket for the Kubernetes state store

Kops needs a “state store” to store configuration information of the cluster: for example, the number of nodes, the instance type of each node, and the Kubernetes version. The state is stored during the initial cluster creation, and any subsequent changes to the cluster are persisted to this store as well. As of publication, Amazon S3 is the only supported storage mechanism. Create an S3 bucket and pass it to the kops CLI during cluster creation.

This post uses the bucket name kubernetes-aws-io. Bucket names are globally unique, so you will have to use a different name. Create an S3 bucket:

aws s3api create-bucket --bucket kubernetes-aws-io

I strongly recommend versioning this bucket in case you ever need to revert or recover a previous version of the cluster. This can be enabled using the AWS CLI as well:

aws s3api put-bucket-versioning --bucket kubernetes-aws-io --versioning-configuration Status=Enabled

For convenience, you can also define a KOPS_STATE_STORE environment variable pointing to the S3 bucket. For example:

export KOPS_STATE_STORE=s3://kubernetes-aws-io

This environment variable is then used by the kops CLI.

DNS configuration

As of Kops 1.6.1, a top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. This is also needed for kubectl to be able to talk directly with the master.

This domain may be registered with AWS, in which case a Route 53 hosted zone is created for you. Alternatively, this domain may be at a different registrar. In this case, create a Route 53 hosted zone. Specify the name server (NS) records from the created zone as NS records with the domain registrar.

This post uses a kubernetes-aws.io domain registered at a third-party registrar.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name cluster.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

This shows an output such as the following:

[
"ns-94.awsdns-11.com",
"ns-1962.awsdns-53.co.uk",
"ns-838.awsdns-40.net",
"ns-1107.awsdns-10.org"
]

Create NS records for the domain with your registrar. Different options on how to configure DNS for the cluster are explained at github.com/kubernetes/kops/blob/master/docs/aws.md#configure-dns.

Experimental support to create a gossip-based cluster was added in Kops 1.6.2. This post uses a DNS-based approach, as that is more mature and well tested.

Create the Kubernetes cluster

The Kops CLI can be used to create a highly available cluster, with multiple master nodes spread across multiple Availability Zones. Workers can be spread across multiple zones as well. Some of the tasks that happen behind the scenes during cluster creation are:

  • Provisioning EC2 instances
  • Setting up AWS resources such as networks, Auto Scaling groups, IAM users, and security groups
  • Installing Kubernetes

Start the Kubernetes cluster using the following command:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--yes

In this command:

  • --zones
    Defines the zones in which the cluster is going to be created. Multiple comma-separated zones can be specified to span the cluster across multiple zones.
  • --name
    Defines the cluster’s name.
  • --state
    Points to the S3 bucket that is the state store.
  • --yes
    Immediately creates the cluster. Otherwise, only the cloud resources are created and the cluster needs to be started explicitly using the command kops update cluster --yes. If the cluster needs to be edited, then the kops edit cluster command can be used.

This starts a single master and two worker node Kubernetes cluster. The master is in an Auto Scaling group and the worker nodes are in a separate group. By default, the master node is m3.medium and the worker node is t2.medium. Master and worker nodes are assigned separate IAM roles as well.

Wait for a few minutes for the cluster to be created. The cluster can be verified using the command kops validate cluster --state=s3://kubernetes-aws-io. It shows the following output:

Using cluster from kubectl context: cluster.kubernetes-aws.io

Validating cluster cluster.kubernetes-aws.io

INSTANCE GROUPS
NAME                 ROLE      MACHINETYPE    MIN    MAX    SUBNETS
master-us-west-2a    Master    m3.medium      1      1      us-west-2a
nodes                Node      t2.medium      2      2      us-west-2a

NODE STATUS
NAME                                           ROLE      READY
ip-172-20-38-133.us-west-2.compute.internal    node      True
ip-172-20-38-177.us-west-2.compute.internal    master    True
ip-172-20-46-33.us-west-2.compute.internal     node      True

Your cluster cluster.kubernetes-aws.io is ready

It shows the different instances started for the cluster, and their roles. If multiple cluster states are stored in the same bucket, then --name <NAME> can be used to specify the exact cluster name.

Check all nodes in the cluster using the command kubectl get nodes:

NAME                                          STATUS         AGE       VERSION
ip-172-20-38-133.us-west-2.compute.internal   Ready,node     14m       v1.6.2
ip-172-20-38-177.us-west-2.compute.internal   Ready,master   15m       v1.6.2
ip-172-20-46-33.us-west-2.compute.internal    Ready,node     14m       v1.6.2

Again, the internal IP address of each node, its role (master or node), and its uptime are shown. The key information here is the Kubernetes version for each node in the cluster, 1.6.2 in this case.

The kubectl binary included in the PATH earlier is automatically configured to manage this cluster. Resources such as pods, replica sets, and services can now be created in the usual way.
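
For example, as a quick smoke test of the new cluster, you could create a deployment and expose it behind a load balancer; nginx here is just an illustrative image (in the Kubernetes 1.6 era, kubectl run creates a deployment by default):

# Run two nginx replicas, expose them via an ELB, and check the pods
kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
kubectl get pods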

Some of the common options that can be used to override the default cluster creation are:

  • --kubernetes-version
    The version of Kubernetes cluster. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.
  • --master-size and --node-size
    Define the instance type of the master and worker nodes.
  • --master-count and --node-count
    Define the number of master and worker nodes. By default, a master is created in each zone specified by --master-zones. Multiple master nodes can be created by specifying a higher number with --master-count, or by specifying multiple Availability Zones in --master-zones.

A three-master and five-worker node cluster, with master nodes spread across different Availability Zones, can be created using the following command:

kops create cluster \
--name cluster2.kubernetes-aws.io \
--zones us-west-2a,us-west-2b,us-west-2c \
--node-count 5 \
--state s3://kubernetes-aws-io \
--yes

Both clusters share the same state store but have different names. This also requires you to create an additional Amazon Route 53 hosted zone for the new name.

By default, the resources required for the cluster are directly created in the cloud. The --target option can be used to generate the AWS CloudFormation scripts instead. These scripts can then be used by the AWS CLI to create resources at your convenience.
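
For example, here is a sketch of generating CloudFormation templates instead of creating the resources directly (flag names follow the kops documentation of the time; --out names the output directory for the generated templates):

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--target=cloudformation \
--out=.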

Get a complete list of options for cluster creation with kops create cluster --help.

More details about the cluster can be seen using the command kubectl cluster-info:

Kubernetes master is running at https://api.cluster.kubernetes-aws.io
KubeDNS is running at https://api.cluster.kubernetes-aws.io/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the client and server version using the command kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Both client and server version are 1.6 as shown by the Major and Minor attribute values.

Upgrade the Kubernetes cluster

Kops can be used to create a Kubernetes 1.4.x, 1.5.x, or an older version of the 1.6.x cluster using the --kubernetes-version option. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.

Or, you may have used kops to create a cluster a while ago, and now want to upgrade to the latest recommended version of Kubernetes. Kops supports rolling cluster upgrades where the master and worker nodes are upgraded one by one.

As of kops 1.6.1, upgrading a cluster is a three-step process.

First, check and apply the latest recommended Kubernetes update.

kops upgrade cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

The --yes option immediately applies the changes. Omitting the --yes option shows only the changes that would be applied.

Second, update the state store to match the cluster state. This can be done using the following command:

kops update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Lastly, perform a rolling update for all cluster nodes using the kops rolling-update command:

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Previewing the changes before updating the cluster can be done using the same command but without specifying the --yes option. This shows the following output:

NAME                 STATUS        NEEDUPDATE    READY    MIN    MAX    NODES
master-us-west-2a    NeedsUpdate   1             0        1      1      1
nodes                NeedsUpdate   2             0        2      2      2

Using --yes updates all nodes in the cluster, first master and then worker. There is a 5-minute delay between restarting master nodes, and a 2-minute delay between restarting nodes. These values can be altered using --master-interval and --node-interval options, respectively.

Only the worker nodes may be updated by using the --instance-group option with the name of the worker instance group, as shown below.
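
For example, a rolling update restricted to the worker nodes might look like this (nodes is the instance group name shown in the earlier output):

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--instance-group nodes \
--yes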

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster using the kops command. This ensures that all resources created by the cluster are appropriately cleaned up.

The command to delete the Kubernetes cluster is:

kops delete cluster --state=s3://kubernetes-aws-io --yes

If multiple clusters have been created, then specify the cluster name as in the following command:

kops delete cluster cluster2.kubernetes-aws.io --state=s3://kubernetes-aws-io --yes

Conclusion

This post explained how to manage a Kubernetes cluster on AWS using kops. The Kubernetes on AWS users page provides a self-published list of companies using Kubernetes on AWS.

Try starting a cluster, creating a few Kubernetes resources, and then tearing it down. Kops on AWS provides a more comprehensive tutorial for setting up Kubernetes clusters. The Kops docs are also helpful for understanding the details.

In addition, the Kops team hosts office hours to help you get started, including guidance on your first pull request. You can always join the #kops channel on the Kubernetes Slack to ask questions. If nothing works, then file an issue at github.com/kubernetes/kops/issues.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

— Arun

Kubernetes 1.7 released

Post Syndicated from corbet original https://lwn.net/Articles/726900/rss

Version 1.7 of the Kubernetes orchestration system is out. “At-a-glance, security enhancements in this release include encrypted secrets, network policy for pod-to-pod communication, node authorizer to limit kubelet access and client / server TLS certificate rotation.

For those of you running scale-out databases on Kubernetes, this release has a major feature that adds automated updates to StatefulSets and enhances updates for DaemonSets. We are also announcing alpha support for local storage and a burst mode for scaling StatefulSets faster.”

DevOps Cafe Episode 72 – Kelsey Hightower

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/6/18/devops-cafe-episode-72-kelsey-hightower.html

You can’t contain(er) Kelsey.

John and Damon chat with Kelsey Hightower (Google) about the future of operations, Kubernetes, Docker, containers, self-learning, and more!

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Kelsey Hightower on Twitter: @kelseyhightower


Please tweet or leave comments or questions below and we’ll read them on the show!

“Only a year? It’s felt like forever”: a twelve-month retrospective

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/12-months-raspberry-pi/

This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.

Alex, Matt, and Courtney in a punt on the Cam

The day Liz decided to keep me

So here it is!

Joining the crew

Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.

… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething


A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.

Ticking items off the Bucket List

I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.

High altitude ballooning (HAB)

Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch and build a payload with an onboard Raspberry Pi and Camera Module, providing teachers with the skills needed to take their students on an adventure to near space, with photographic evidence to prove it.

All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi


I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.

RAF firing range sign

“Can we have our balloon back, please, mister?”

Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.

Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude


You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.

Dear Raspberry Pi Friends…

My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.

Letter of thanks to Raspberry Pi from a young fan

*heart melts*

By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.

It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.

Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.

You’re all brilliant.

The Queens of Robots, both shoddy and otherwise

Last year we welcomed the Queen of Shoddy Robots, Simone Giertz, to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.

Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website – www.raspberrypi.org. . How was your day? Get up to anything fun?


And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects.

Estefannie on Twitter

Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!

Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.

Those ‘wow’ moments

Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.

Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skill.

Museum in a Box on Twitter

Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech 🎈⛅🛰📚🤖

Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.

Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.

Jillian Ogle on Twitter

@SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… https://t.co/1tqFlMNS31

Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place.

Space

We have Raspberry Pis in space. SPACE. Actually space.

Raspberry Pi on Twitter

New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home 🚀 https://t.co/ycTSDR1h1Q

Twelve months later, this still blows my mind.

And let’s not forget…

  • The chance to visit both the Houses of Parliament and St James’s Palace

Raspberry Pi team at the Houses of Parliament

  • Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe

There’s no need to smile when you’re #DoctorWho.


We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube


  • Making a GIF Cam and other builds, and sharing them with you all via the blog

Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi


The next twelve months

Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.

It’s been a pleasure. Thank you for joining me on the ride!

The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi.

Court of Justice of the EU: Uber and Uberization

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/05/15/uber/

Advocate General Szpunar’s Opinion in Case C‑434/15, Asociación Profesional Elite Taxi v Uber Systems Spain, SL, has been published.

Uber is the name of an electronic platform developed by Uber Technologies Inc., headquartered in San Francisco (United States). In the European Union the Uber platform is operated by Uber BV, a company incorporated under Dutch law and a subsidiary of Uber Technologies. The platform makes it possible, using a smartphone with the Uber application installed, to order an urban transport service in the cities it covers. The application recognizes the user’s location and finds available drivers nearby. When a driver accepts the ride, the application notifies the user, displaying the driver’s profile along with an estimated price for the trip to the destination the user has specified. After the ride, the fare is automatically charged to the bank card the user is required to register with the application. The application also includes a rating feature: passengers can rate drivers, and drivers can rate passengers. An average rating below a certain threshold may lead to exclusion from the platform.

Subject of the main proceedings:

the service known as UberPop, under which private individuals, non-professional drivers, transport passengers in their own vehicles. Fares are set by the platform operator on the basis of the distance and duration of the trip. They vary according to demand at a given moment, so that during peak hours the fare may be several times the base rate. The application calculates the fare, which is automatically collected by the platform operator; the operator retains part of it as a commission, usually between 20% and 25%, and pays the remainder to the driver.

The interpretation sought from the Court concerns solely Uber’s legal position under EU law, in order to determine whether, and to what extent, that law applies to its activity: whether any regulation of the conditions under which Uber operates must comply with the requirements of EU law, first of all the freedom to provide services, or whether the regulation of those conditions falls within the shared competence of the European Union and the Member States in the field of local transport.

The dispute:

since neither Uber Spain nor the owners nor the drivers of the vehicles concerned hold the licences and authorizations required under the Barcelona taxi regulations, the professional taxi drivers’ association brought an action against Uber Systems Spain for unfair competition, seeking an order that it cease its unfair conduct, consisting of providing on-demand booking services via mobile devices and the internet through the Uber digital platform in Spain, and that it be prohibited from carrying out that activity in the future.

Questions referred for a preliminary ruling by the Commercial Court of Barcelona (four in total):

Inasmuch as Article 2(2)(d) of [Directive 2006/123] excludes transport activities from the scope of that directive, must the activity carried out for profit by the defendant, consisting of acting as an intermediary between the owner of a vehicle and a person who needs to make a journey within a city, by managing IT resources (an interface and a software application, “smartphones and technological platform” in the defendant’s words) which enable those persons to connect with one another, be considered merely a transport activity, or must it be regarded as an electronic intermediary service, that is to say, an information society service within the meaning of Article 1(2) of [Directive 98/34]?

When determining the legal nature of this activity, can it be considered in part an information society service, or is it a transport service?

The Opinion:

Uber is commonly described as an undertaking (or platform) of the so-called “sharing economy”. It certainly cannot be regarded as a ride-sharing platform, because its drivers offer passengers a transport service to a destination chosen by the passenger and receive in return a payment that significantly exceeds the mere reimbursement of expenses incurred. It is therefore a classic transport service.

Article 2(a) of Directive 2000/31/EC (the E-Commerce Directive) must be interpreted as meaning that a service consisting of connecting, by means of mobile phone software, potential passengers with drivers offering individual urban transport on demand does not constitute an information society service where the service provider exercises control over the essential conditions of the transport supplied in that context, in particular its price.

It is a transport service.

Filed under: Digital, EU Law, Media Law Tagged: CJEU

Analyze your GitHub Project With Elasticsearch And Grafana

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/05/10/analyze-your-github-project-with-elasticsearch-and-grafana/

The Dream

I have, for a long time, wished there was a way to easily export GitHub issues and comments to
Elasticsearch. The standard GitHub graphs for commits and traffic are great but I have
really been missing graphs and analytics on issues and comments.

If we had issues & comments in Elasticsearch, with a well-defined index mapping, we could do some
interesting analytics. For example:

  • Look at project history in terms of issues created
  • Look at project history in terms of comments (can be a measure of community engagement)
  • See how different labels trend over time.
  • Look at distributions (histograms) on the number of issues or comments created per user. Are there a few very active users that represent 70% or 90% of all issues & comments?
  • How long do PRs stay open?
  • How long until issues get their first response?

Why Elasticsearch?

Grafana is most often used with time series databases like Graphite, but for this sort of use case,
it’s about much more than measurements. Part of the power of Grafana is bringing together data from
many different places, and leveraging the strengths of its diverse set of data sources.

Elasticsearch isn’t technically a time series database, but it’s been one of our fastest growing data sources
because it really shines for use cases like this. Plus, Grafana’s support for Elasticsearch is getting
better and better.

Elasticsearch is not only a document search DB. Its real power is in the kinds of aggregations you can do. It’s not ideal
for the high volume & high-resolution time series workloads that most time series databases can handle, but for
data with high cardinality (like documents with usernames, issue numbers, etc) it can really shine. It also allows
you to do ad-hoc filtering in a way that time series would not allow, as it would require a unique time series
for every possible filter condition and value.
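
As an illustration of the kind of aggregation this enables, here is a minimal sketch of an Elasticsearch query that buckets issues per user for one repository; the field names match the index mapping shown below, while the repo value is just an example:

{
  "size": 0,
  "query": { "term": { "repo": "grafana/grafana" } },
  "aggs": {
    "issues_per_user": {
      "terms": { "field": "user_login", "size": 20 }
    }
  }
}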

The GitHub API Crawler

So a few weekends ago I had some leftover programming energy and spent a few hours hacking together
this node.js app that uses the GitHub API to crawl all issues and comments, which it
then saves as separate documents in Elasticsearch.

It stores them in Elasticsearch with this index mapping:

"mappings": {
  "issue": {
    "properties": {
      "title":            { "type": "text"  },
        "state":            { "type": "keyword"  },
        "repo":             { "type": "keyword"  },
        "labels":           { "type": "keyword"  },
        "number":           { "type": "keyword"  },
        "comments":         { "type": "long"  },
        "assignee":         { "type": "keyword"  },
        "user_login":       { "type": "keyword"  },
        "milestone":        { "type": "keyword"  },
        "created_at":       { "type": "date"  },
        "closed_at":        { "type": "date"  },
        "updated_at":       { "type": "date"  },
        "is_pull_request":  { "type": "boolean"  },
    }
  },
    "comment": {
      "properties": {
        "issue":           { "type": "keyword"  },
        "repo":            { "type": "keyword"  },
        "user_login":      { "type": "keyword"  },
        "created_at":      { "type": "date"     },
      }
    }
}

There are some more numeric fields being saved for reactions that do not need to be defined
in the index mapping.
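
If you want to create the index by hand rather than letting the collector do it, here is a minimal sketch (this assumes a local Elasticsearch, and the mapping above saved, wrapped in a top-level object, as mapping.json; github is the index name the dashboards expect):

curl -XPUT 'http://localhost:9200/github' -H 'Content-Type: application/json' -d @mapping.json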

The Dashboards

With the data finally collected, I built two dashboards: one focused on issues and another
focused on comments. Both dashboards are templated and allow you to specify which repository
to look at and the granularity (group by time) of the data. You can also add any ad-hoc filter: for example,
only look at issues created by a specific user, or only look at issues with no comments.

Check out the dashboard on our play site. I configured the
github-to-es collector to fetch issues and comments for the main Kubernetes repo, the
main Grafana repo, and the Microsoft VS Code editor repository.

The second dashboard shows comment analytics:

Useful How?

I am not exactly sure how useful this data and these dashboards are yet. It was mostly a fun hobby project to see some trends and stats
for issue and comment volume. But this could also be useful data that helps you track things like issue label stats, stats that could
be used to improve the categorization of issues and to visualize changes in labeling trends. For example, the graphs could answer questions like:
How did a concerted effort to improve docs change the trend of issues labeled question?

Try it and help me improve it

Check out the GitHub repo grafana/github-to-es it has a basic README with instructions
for how to get started.

Once you have the import working you need to add an Elasticsearch data source in Grafana. For the index name you specify github,
and for the Timestamp field you specify created_at. Then you can import the two dashboards I published on Grafana.com:

There are some limitations on how many issues and comments can be imported in the initial full import, due to the paging limit
of the GitHub API: it returns a maximum of 100 issues or comments per “page” and allows at most 400 pages. This
means that the full import can only handle 40,000 issues and 40,000 comments.

More data & more cool graphs

There are probably many more interesting queries you can build and the collector could also be improved to fetch and store more fields.

For example:

  • Collect stars & fork stats (needs to be recorded as snapshot docs as there is no API to get historical data for this)
  • Calculate time between issue created and first comment during issue fetching to have that as a field on the issue docs
  • PR details; currently the issue API does not include merge status (only a flag indicating whether the issue is a PR)
  • Commit docs

There are probably a lot more cool things you can collect & query.

Until next time, keep on graphing!
Torkel Ödegaard
Grafana Creator & Project Lead

Some notes on #MacronLeak

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-macronleak.html

Tonight (Friday, May 5, 2017) hackers dumped emails (and docs) related to French presidential candidate Emmanuel Macron. He’s the anti-Putin candidate running against the pro-Putin Marine Le Pen. I thought I’d write up some notes.

Are they Macron’s emails?

No. They are e-mails from members of his staff/supporters, namely Alain Tourret, Pierre Person, Cedric O??, Anne-Christine Lang, and Quentin Lafay.
There are some documents labeled “Macron” which may have been taken from his computer or cloud drive — his own, or an assistant’s.

Who done it?
Obviously, everyone assumes that Russian hackers did it, but there’s nothing (so far) that points to anybody in particular.
It appears to be the most basic of phishing attacks, which means anyone could’ve done it, including your neighbor’s pimply faced teenager.

Update: Several people [*] have pointed out Trend Micro reporting that Russian/APT28 hackers were targeting Macron back on April 24. Coincidentally, this is also the latest date that emails appear in the dump.

What’s the hacker’s evil plan?
Everyone is proposing theories about the hacker’s plan, but the most likely answer is they don’t have one. Hacking is opportunistic. They likely targeted everyone in the campaign, and these were the only victims they could hack. It’s probably not the outcome they were hoping for.
But since they’ve gone through all the work, it’d be a shame to waste it. Thus, they are likely releasing the dump not because they believe it will do any good, but because it’ll do them no harm.
If there’s any plan, it’s probably a long range one, serving notice that any political candidate that goes against Putin will have to deal with Russian hackers dumping email.
Why now? Why not leak bits over time like with Clinton?

France has a campaign blackout starting tonight at midnight until the election on Sunday. Thus, it’s the perfect time to leak the files. Anything salacious, or even rumors of something bad, will spread virally through Facebook and Twitter, without the candidate or the media having a good chance to rebut the allegations.
The last emails in the logs appear to be from April 24, the day after the first round vote (Sunday’s vote is the second, runoff, round). Thus, the hackers could’ve leaked this dump any time in the last couple weeks. They chose now to do it.
Are the emails verified?
Yes and no.
Yes, we have DKIM signatures between people’s accounts, so we know for certain that hackers successfully breached these accounts. DKIM is an anti-spam method that cryptographically signs emails by the sending domain (e.g. @gmail.com), and thus, can also verify the email hasn’t been altered or forged.
But no, when a salacious email or document is found in the dump, it’ll likely not have such a signature (most emails don’t), and thus, we probably won’t be able to verify the scandal. In other words, the hackers could have altered or forged something that becomes newsworthy.
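
For the curious, one way to check a DKIM signature yourself is the dkimverify tool that ships with the Python dkimpy package (a sketch; message.eml stands in for a raw message saved from the dump):

# Install dkimpy, then verify a raw e-mail read from stdin
pip install dkimpy
dkimverify < message.eml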
What are the most salacious emails/files?

I don’t know. Before this dump, hackers on 4chan were already making allegations that Macron had secret offshore accounts (debunked). Presumably we need to log in to 4chan tomorrow for them to point out salacious emails/files from this dump.

Another email going around seems to indicate that Alain Tourret, a member of the French legislature, had his assistant @FrancoisMachado buy drugs online with Bitcoin and had them sent to his office in the legislature building. The drugs in question, 3-MMC, is a variant of meth that might be legal in France. The emails point to a tracking number which looks legitimate, at least, that a package was indeed shipped to that area of Paris. There is a bitcoin transaction that matches the address, time, and amount specified in the emails. Some claim these drug emails are fake, but so far, I haven’t seen any emails explaining why they should be fake. On the other hand, there’s nothing proving they are true (no DKIM sig), either.

Some salacious emails might be obvious, but some may take people with more expertise to find. For example, one email is a receipt from Uber (with proper DKIM validation) that shows the route that “Quenten” took on the night of the first round election. Somebody clued into the French political scene might be able to figure out he’s visiting his mistress, or something. (This is hypothetical — in reality, he’s probably going from one campaign rally to the next).

What’s the Macron camp’s response?

They have just the sort of response you’d expect.
They claim some of the documents/emails are fake, without getting into specifics. They claim that the information needs to be understood in context. They claim that this was a “massive coordinated attack”, even though it’s something that any pimply faced teenager can do. They claim it’s an attempt to destabilize democracy. They call upon journalists to be “responsible”.

Ubertooth – Open Source Bluetooth Sniffer

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/8fG834VW8HA/

Ubertooth is an open source Bluetooth sniffer and is essentially a development platform for Bluetooth experimentation. It runs best as a native Linux install and should work fine from within a VM. Ubertooth ships with a capable BLE (Bluetooth Smart) sniffer and can sniff some data from Basic Rate (BR) Bluetooth Classic connections. Features The…

Read the full post at darknet.org.uk

HiveMQ 3.2.4 released

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/hivemq-3-2-4-released/

The HiveMQ team is pleased to announce the availability of HiveMQ 3.2.4. This is a maintenance release for the 3.2 series and brings the following improvements:

  • Fixed an issue with duplicate delivery for QoS=2 messages
  • Fixed a cluster issue that could cause the loss of queued messages with shared subscriptions
  • Fixed an authorization issue with retained messages
  • Fixed an issue that could cause the loss of queued messages in rare edge cases
  • Enabled the use of publishToClient() for shared subscriptions in the Publish Service of the plugin SPI
  • Increased cluster stability in network split scenarios
  • Fixed wrongly calculated metrics for dropped messages
  • Fixed an issue preventing the OnPublishSendCallback from being called for queued and retained messages
  • Deprecated client-bind-port for tcp transport*
  • Improved logging
  • Improved HiveMQ bootstrap behavior for Kubernetes and Openshift Environments
  • Performance improvements

You can download the new HiveMQ version here.
* See the upgrade guide if this affects your configuration.

We strongly recommend upgrading if you are a HiveMQ 3.2.x user.

Have a great day,
The HiveMQ Team

[$] Kubernetes & security

Post Syndicated from jake original https://lwn.net/Articles/720215/rss

Every conference venue has problems with the mix of room sizes, but
I don’t recall ever going to a talk that so badly needed to be in a
bigger room as Jessie Frazelle and Alex Mohr’s talk
at CloudNativeCon/KubeCon Europe 2017 on securing Kubernetes.
The cause of the enthusiasm
was the opportunity to get “best practice” information on securing
Kubernetes, and how Kubernetes might be evolving to assist with this,
directly from the source.

[$] Connecting Kubernetes services with linkerd

Post Syndicated from corbet original https://lwn.net/Articles/719282/rss

When a monolithic application is divided up into microservices, one new
problem that must be solved is how to connect all those microservices
to provide the old application’s functionality.

Linkerd, which is now officially a Cloud-Native Computing Foundation project, is a transparent proxy which
solves this problem by
sitting between those microservices and routing their requests.
Two separate
CNC/KubeCon
events — a talk by Oliver Gould briefly joined by
Oliver Beattie, and a salon hosted by Gould — provided a view of linkerd
and what it can offer.

Kodi Wants to Beat Piracy With Legal Content and DRM

Post Syndicated from Ernesto original https://torrentfreak.com/kodi-wants-to-beat-piracy-with-legal-content-and-drm-170409/

Millions of people use Kodi as their main source of entertainment, often with help from add-ons that allow them to access pirated movies and TV shows.

As Kodi’s popularity has increased drastically over the past two years, so have complaints from copyright holders.

While Kodi itself is a neutral platform, unauthorized add-ons give it a bad name. This is one of the reasons why the Kodi team is actively going after vendors who sell “fully loaded” pirate boxes and YouTubers who misuse their name to promote copyright infringement.

Interestingly, the Kodi team itself didn’t help its case by putting up an FBI seizure notice last week, as an April Fools gag.

The banner suggested that the site had been taken down by the US Department of Justice for copyright infringement. Downloads of the latest builds of the software were also blocked.

Kodi’s April Fools gag

This week TorrentFreak spoke with several members of the Kodi team, operating under the XBMC Foundation, who made it clear that they want to cooperate with rightsholders instead of being accused of facilitating piracy.

The team told us that copyright holders regularly approach them. Some are well informed and know that Kodi itself isn’t actively involved in anything piracy related. However, according to XBMC Foundation President Nathan Betzen, there are also those who are fooled by misleading media reports or YouTube videos.

“There are rightsholders that know who we are and realize we are distinct from the 3rd party add-on crowd,” Betzen says.

“And then there are the rights holders who have been successfully taken in by the propaganda, who write us very legal sounding letters because some random YouTuber or ‘news’ website described the author of a piracy add-on as a ‘Kodi developer’.”

The Kodi team doesn’t mind being approached by people who are misinformed, as it gives them an opportunity to set the record straight. It has proven to be more challenging to find a way forward with movie studios and other content creators that are aware of Kodi’s position.

These movie industry representatives sometimes ask Kodi to remove third-party repo installs and block certain pirate add-ons. However, according to XBMC Foundation’s Project lead Martijn Kaijser, this isn’t the direction Kodi wants to go in.

“Our view on this is that [removing code] would not help a bit, because the code is open-source and others can easily revert it. Blocking add-ons won’t help since they would instantly change the addon and the block would be in vain,” Kaijser tells us.

The Kodi team feels that pirates are leeching off their infrastructure and putting the entire community at risk. But instead of taking a repressive approach, they would like to see more legal content providers join their platform. With an audience of millions of users, there is a lot of untapped potential on a platform that’s growing rapidly.

To facilitate this process, the media player is currently considering whether to add support for DRM so that content providers can offer their videos in a protected environment. While some users may cringe at the thought, Kodi believes it’ll help to get rightsholders on board.

“Our platform has a lot of potential and we are looking into attracting more legal and official content providers. Additionally, we’re looking into adding low-level DRM that would at least make it more feasible to gain trust from certain providers,” Kaijser tells TorrentFreak.

Kodi addons

Although Kodi does go after sellers of pirate boxes, Betzen personally doesn’t believe that this is the answer. In his view, the best way to deal with the piracy issue is to offer more legal content through official add-ons.

“We’d like to actually work with content providers to have official add-ons in our network. That’s much easier to do when we are proactively attempting to help them to fight copyright infringement,” Betzen says.

There are already plenty of legal uses for Kodi, including the DVR system, support for legal sports streaming, and a variety of add-ons such as Crunchyroll, HDHomeRun, Plex and Twitch. However, getting some major content providers on board has proven to be quite a challenge thus far.

Kaijser notes that rightsholders have so far been very reserved. He has tried to convince content providers to offer official add-ons, or even to turn some community-made ones into official ones, but hasn’t had much success.

In a way, the repeated piracy discussions and news items are both a blessing and a curse for Kodi. They help to grow the platform at a rate most competitors could only dream of, while at the same time scaring rightsholders away. Time will tell if Kodi can turn this around.


Shuttleworth: Growing Ubuntu for Cloud and IoT, rather than Phone and convergence

Post Syndicated from ris original https://lwn.net/Articles/719037/rss

Mark Shuttleworth reports that Canonical is ending its investment in Unity8, the phone and convergence shell. GNOME will be the default desktop for Ubuntu 18.04 LTS. “The choice, ultimately, is to invest in the areas which are contributing to the growth of the company. Those are Ubuntu itself, for desktops, servers and VMs, our cloud infrastructure products (OpenStack and Kubernetes), our cloud operations capabilities (MAAS, LXD, Juju, BootStack), and our IoT story in snaps and Ubuntu Core. All of those have communities, customers, revenue and growth, the ingredients for a great and independent company, with scale and momentum. This is the time for us to ensure, across the board, that we have the fitness and rigour for that path.”

(Thanks to Unnikrishnan Alathady Maloor)

Congress Removes FCC Privacy Protections on Your Internet Usage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/congress_remove.html

Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T, and Verizon collected all of your browsing history and sold it on to the highest bidder. That’s what will probably happen if Congress has its way.

This week, lawmakers voted to allow Internet service providers to violate your privacy for their own profit. Not only have they voted to repeal a rule that protects your privacy, they are also trying to make it illegal for the Federal Communications Commission to enact other rules to protect your privacy online.

That this is not provoking greater outcry illustrates how much we’ve given up on shaping our technological future ourselves, leaving for-profit companies to do it for us.

There are a lot of reasons to be worried about this. Because your Internet service provider controls your connection to the Internet, it is in a position to see everything you do on the Internet. Unlike a search engine or social networking platform or news site, you can’t easily switch to a competitor. And there’s not a lot of competition in the market, either. If you have a choice between two high-speed providers in the US, consider yourself lucky.

What can telecom companies do with this newly granted power to spy on everything you’re doing? Of course they can sell your data to marketers — and the inevitable criminals and foreign governments who also line up to buy it. But they can do more creepy things as well.

They can snoop through your traffic and insert their own ads. They can deploy systems that remove encryption so they can better eavesdrop. They can redirect your searches to other sites. They can install surveillance software on your computers and phones. None of these are hypothetical.

They’re all things Internet service providers have done before, and they are some of the reasons the FCC tried to protect your privacy in the first place. And now they’ll be able to do all of these things in secret, without your knowledge or consent. And, of course, governments worldwide will have access to these powers. And all of that data will be at risk of hacking, whether by criminals or by other governments.

Telecom companies have argued that other Internet players already have these creepy powers — although they didn’t use the word “creepy” — so why should they not have them as well? It’s a valid point.

Surveillance is already the business model of the Internet, and literally hundreds of companies spy on your Internet activity against your interests and for their own profit.

Your e-mail provider already knows everything you write to your family, friends, and colleagues. Google already knows our hopes, fears, and interests, because that’s what we search for.

Your cellular provider already tracks your physical location at all times: it knows where you live, where you work, when you go to sleep at night, when you wake up in the morning, and — because everyone has a smartphone — who you spend time with and who you sleep with.

And some of the things these companies do with that power are no less creepy. Facebook has run experiments in manipulating your mood by changing what you see on your news feed. Uber used its ride data to identify one-night stands. Even Sony once installed spyware on customers’ computers to try to detect whether they copied music files.

Aside from spying for profit, companies can spy for other purposes. Uber has already considered using data it collects to intimidate a journalist. Imagine what an Internet service provider can do with the data it collects: against politicians, against the media, against rivals.

Of course the telecom companies want a piece of the surveillance capitalism pie. Despite dwindling revenues, the increasing use of ad blockers, and rising click fraud, violating our privacy is still a profitable business — especially if it’s done in secret.

The bigger question is: why do we allow for-profit corporations to create our technological future in ways that are optimized for their profits and anathema to our own interests?

When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.

Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful.

Today, technology is changing the fabric of our society faster than at any other time in history. We have big questions that we need to tackle: not just privacy, but questions of freedom, fairness, and liberty. Algorithms are making decisions about policing and healthcare. Driverless vehicles are making decisions about traffic and safety. Warfare is increasingly being fought remotely and autonomously. Censorship is on the rise globally. Propaganda is being promulgated more efficiently than ever. These problems won’t go away. If anything, the Internet of Things and the computerization of every aspect of our lives will make them worse.

In today’s political climate, it seems impossible that Congress would legislate these things to our benefit. Right now, regulatory agencies such as the FTC and FCC are our best hope to protect our privacy and security against rampant corporate power. That Congress has decided to reduce that power leaves us at enormous risk.

It’s too late to do anything about this bill — Trump will certainly sign it — but we need to be alert to future bills that reduce our privacy and security.

This post previously appeared on the Guardian.

EDITED TO ADD: Former FCC Commissioner Tom Wheeler wrote a good op-ed on the subject. And here’s an essay laying out what this all means to the average Internet user.

Kubernetes 1.6 released

Post Syndicated from corbet original https://lwn.net/Articles/718283/rss

Version 1.6 of the Kubernetes orchestration system is available. “In this release the community’s focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000 node clusters are supported. We moved dynamic storage provisioning to stable. Role-based access control (RBAC), kubefed, kubeadm, and several scheduling features are moving to beta. We have also added intelligent defaults throughout to enable greater automation out of the box.”
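
As a concrete picture of what the RBAC feature involves, here is a minimal sketch of a Role and RoleBinding manifest granting read-only access to pods in a single namespace. The names (pod-reader, read-pods, jane) are hypothetical, chosen for illustration rather than taken from the release announcement.

```yaml
# Hypothetical RBAC example: let the user "jane" read pods in the
# "default" namespace. All names here are illustrative.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1  # RBAC is beta as of 1.6
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]                 # "" denotes the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applied with kubectl, a binding like this limits one user to getting, listing, and watching pods in a single namespace — the kind of per-user scoping the release’s “multiple workloads to multiple users” goal points at.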

AWS Week in Review – March 6, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-6-2017/

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for!

Monday, March 6
Tuesday, March 7
Wednesday, March 8
Thursday, March 9
Friday, March 10
Saturday, March 11
Sunday, March 12

Jeff;