
AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/


Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and the second episode of our “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases, and more! Register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real time, including IoT device data, VPC flow logs, and clickstream data.

June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage-based pricing.

AWS re:Invent

June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.

Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Learn the benefits, best practices, and tools for running your Microsoft workloads on AWS with a well-architected approach.

Containers

June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.

DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

Enterprise & Hybrid

June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the setup of best-practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Computing services can help you deliver access to desktops and applications anywhere, anytime, using any device.

IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS services within an enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

Mobile

June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.

June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues in your instances.

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to scalable and reliable AWS storage services.

June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.

June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Kata Containers 1.0

Post Syndicated from ris original https://lwn.net/Articles/755230/rss

Kata Containers 1.0 has been released. “This first release of Kata Containers completes the merger of Intel’s Clear Containers and Hyper’s runV technologies, and delivers an OCI compatible runtime with seamless integration for container ecosystem technologies like Docker and Kubernetes.”

[$] Securing the container image supply chain

Post Syndicated from corbet original https://lwn.net/Articles/754443/rss

“Security is hard” is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes and security, but those are mostly concerned with container isolation; they do not address the question of trusting a container’s contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.

[$] Updates in container isolation

Post Syndicated from corbet original https://lwn.net/Articles/754433/rss

At KubeCon + CloudNativeCon Europe 2018, several talks explored the topic of container isolation and security. The last year saw the release of Kata Containers which, combined with the CRI-O project, provided strong isolation guarantees for containers using a hypervisor. During the conference, Google released its own hypervisor called gVisor, adding yet another possible solution for this problem. Those new developments prompted the community to work on integrating the concept of “secure containers” (or “sandboxed containers”) deeper into Kubernetes. This work is now coming to fruition; it prompts us to look again at how Kubernetes tries to keep the bad guys from wreaking havoc once they break into a container.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/754653/rss

Security updates have been issued by CentOS (dhcp), Debian (xen), Fedora (dhcp, flac, kubernetes, leptonica, libgxps, LibRaw, matrix-synapse, mingw-LibRaw, mysql-mmm, patch, seamonkey, webkitgtk4, and xen), Mageia (389-ds-base, exempi, golang, graphite2, libpam4j, libraw, libsndfile, libtiff, perl, quassel, spring-ldap, util-linux, and wget), Oracle (dhcp and kernel), Red Hat (389-ds-base, chromium-browser, dhcp, docker-latest, firefox, kernel-alt, libvirt, qemu-kvm, redhat-virtualization-host, rh-haproxy18-haproxy, and rhvm-appliance), Scientific Linux (389-ds-base, dhcp, firefox, libvirt, and qemu-kvm), and Ubuntu (poppler).

[$] Autoscaling for Kubernetes workloads

Post Syndicated from corbet original https://lwn.net/Articles/754153/rss

Technologies like containers, clusters, and Kubernetes offer the prospect of rapidly scaling the available computing resources to match variable demands placed on the system. Actually implementing that scaling can be a challenge, though. During KubeCon + CloudNativeCon Europe 2018, Frederic Branczyk from CoreOS (now part of Red Hat) held a packed session to introduce a standard and officially recommended way to scale workloads automatically in Kubernetes clusters.

Security updates for Friday

Post Syndicated from ris original https://lwn.net/Articles/754257/rss

Security updates have been issued by Arch Linux (libmupdf, mupdf, mupdf-gl, and mupdf-tools), Debian (firebird2.5, firefox-esr, and wget), Fedora (ckeditor, drupal7, firefox, kubernetes, papi, perl-Dancer2, and quassel), openSUSE (cairo, firefox, ImageMagick, libapr1, nodejs6, php7, and tiff), Red Hat (qemu-kvm-rhev), Slackware (mariadb), SUSE (xen), and Ubuntu (openjdk-8).

YouTube Won’t Put Up With Blatant Piracy Tutorials Forever

Post Syndicated from Andy original https://torrentfreak.com/youtube-wont-put-up-with-blatant-piracy-tutorials-forever-180506/

Once upon a time, Internet users’ voices would only be heard in limited circles, on niche platforms such as Usenet.

Then, with the rise of forum platforms such as phpBB in 2000 and Invision Power Board in 2002, thriving communities could gather in public to discuss endless specialist topics, including file-sharing of course.

When dedicated piracy forums began to gain traction, it was pretty much a free-for-all. People discussed obtaining free content absolutely openly. Nothing was taboo and no one considered that there would be any repercussions. As such, moderation was limited to keeping troublemakers in check.

As the years progressed and lawsuits against both sites and services became more commonplace, most sites that weren’t actually serving illegal content began to consider their positions. Run by hobbyists, most didn’t want the hassle of a multi-million dollar lawsuit, so links to pirate content began to diminish and the more overt piracy tutorials began to disappear underground.

Those that remained in plain sight became much more considered. Tutorials on how to pirate specific Hollywood blockbusters were no longer needed; a plain, general tutorial would suffice. And, as communities matured and took time to understand the implications of their actions, those without political motivations realized that drawing attention to potential criminality was simply not necessary.

Then YouTube and social media happened and almost overnight, no one was in charge and anyone could say whatever they liked.

In this new reality, there were no irritating moderator-type figures removing links to this and that, and nobody warning people against breaking rules that suddenly didn’t exist anymore. In essence, previously tight-knit and street-wise file-sharing and piracy communities not only became fragmented, but also chaotic.

This meant that anyone could become a leader and in some cases, this was the utopia that many had hoped for. Not only couldn’t the record labels or Hollywood tell people what to do anymore, discussion site operators couldn’t either. For those who didn’t abuse the power and for those who knew no better, this was a much-needed breath of fresh air. But, like all good things, it was unlikely to last forever.

Where most file-sharing of yesterday was carried out by hobbyist enthusiasts, many of today’s pirates are far more casual. They’re just as thirsty for content, but they don’t want to spend hours hunting for it. They want it all on a plate, at the flick of a switch, delivered to their TV with a minimum of hassle.

With online discussions increasingly seen as laborious and old-fashioned, many mainstream pirates have turned to easy-to-consume videos. In support of their Kodi media player habits, YouTube has become the educational platform of choice for millions.

As a result, there is now a long line of self-declared Kodi piracy specialists scooping up millions of views on YouTube. Their videos – which in many cases are thinly veiled advertisements for third party addons, Kodi ‘builds’, illegal IPTV services, and obscure Android APKs – are now the main way for a new generation to obtain direct advice on pirating.

Many of the videos are incredibly blatant, like the past 15 years of litigation never happened. All the lessons learned by the phpBB board operators of yesteryear, of how to achieve their goals of sharing information without getting shut down, have been long forgotten. In their place, a barrage of daily videos designed to generate clicks and affiliate revenue, no matter what the cost, no matter what the risk.

It’s pretty clear that these videos are at least partly responsible for the phenomenal uptick in Kodi and Android-based piracy over the past few years. In that respect, many lovers of free content will be eternally grateful for the service they’ve provided. But like many piracy movements over the years, people shouldn’t get too attached to them, at least in their current form.

Thanks to the devil-may-care approach of many influential YouTubers, it won’t be long before a whole new set of moderators begin flexing their muscles. While your average phpBB moderator could be reasoned with in order to get a second chance, a determined and largely faceless YouTube will eject offenders without so much as a clear explanation.

When this happens (and it’s only a question of time given the growing blatancy of many tutorials) YouTubers will not only lose their voices but their revenue streams too. While YouTube’s partner programs bring in some welcome cash, the profitable affiliate schemes touted on these channels for external products will also be under threat.

Perhaps the most surprising thing in this drama-waiting-to-happen is that many of the most popular YouTubers can hardly be considered young and naive. While some are of more tender years, most – with their undoubted skill, knowledge, and work ethic, plus 30 or 40 years on this planet – should know better. Yet not only do they make their names public, they feature their faces heavily in their videos too.

Still, it’s likely that it will take some big YouTube accounts to fall before YouTubers respond by shaving the sharp edges off their blatant promotion of illegal activity. And there’s little doubt that those advertising products (which is most of them) will have to do so sooner rather than later.

Just this week, YouTube made it clear that it won’t tolerate people making money from the promotion of illegal activities.

“YouTube creators may include paid endorsements as part of their content only if the product or service they are endorsing complies with our advertising policies,” YouTube told the BBC.

“We will be working with creators going forward so they better understand that in video promotions [they] must not promote dishonest activity.”

That being said, like many other players in the piracy and file-sharing space over the past 18 years, YouTubers will eventually begin to learn that not only can the smart survive, they can flourish too.

Sure, there will be people out there who’ll protest that free speech allows citizens to express themselves in a manner of their choosing. But try PM’ing that to YouTube in response to a strike, and see how that fares.

When they say you’re done, the road back is a long one.


CI/CD with Data: Enabling Data Portability in a Software Delivery Pipeline with AWS Developer Tools, Kubernetes, and Portworx

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/cicd-with-data-enabling-data-portability-in-a-software-delivery-pipeline-with-aws-developer-tools-kubernetes-and-portworx/

This post was written by Eric Han, Vice President of Product Management at Portworx, and Asif Khan, Solutions Architect.

Data is the soul of an application. As containers make it easier to package and deploy applications faster, testing plays an even more important role in the reliable delivery of software. Given that all applications have data, development teams want a way to reliably control, move, and test using real application data or, at times, obfuscated data.

For many teams, moving application data through a CI/CD pipeline, while honoring compliance and maintaining separation of concerns, has been a manual task that doesn’t scale. At best, it is limited to a few applications and is not portable across environments. The goal should be to make running and testing stateful containers (think databases and message buses, where operations are tracked) as easy as stateless ones (such as web front ends, where they usually are not).

Why is state important in testing scenarios? One reason is that many bugs manifest only when code is tested against real data. For example, we might simply want to test a database schema upgrade but a small synthetic dataset does not exercise the critical, finer corner cases in complex business logic. If we want true end-to-end testing, we need to be able to easily manage our data or state.

In this blog post, we define a CI/CD pipeline reference architecture that can automate data movement between applications. We also provide the steps to follow to configure the CI/CD pipeline.

Stateful Pipelines: Need for Portable Volumes

As part of continuous integration, testing, and deployment, a team may need to reproduce a bug found in production against a staging setup. Here, the hosting environment comprises a cluster with Kubernetes as the scheduler and Portworx for persistent volumes. The testing workflow is then automated by AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

Portworx offers Kubernetes storage that can be used to make persistent volumes portable between AWS environments and pipelines. The addition of Portworx to the AWS Developer Tools continuous deployment for Kubernetes reference architecture adds persistent storage and storage orchestration to a Kubernetes cluster. The example uses MongoDB as the demonstration of a stateful application. In practice, the workflow applies to any containerized application such as Cassandra, MySQL, Kafka, and Elasticsearch.

Using the reference architecture, a developer calls CodePipeline to trigger a snapshot of the running production MongoDB database. Portworx then creates a block-based, writable snapshot of the MongoDB volume. Meanwhile, the production MongoDB database continues serving end users and is uninterrupted.

Without the Portworx integrations, a manual process would require an application-level backup of the database instance that is outside of the CI/CD process. For larger databases, this could take hours and impact production. The use of block-based snapshots follows best practices for resilient and non-disruptive backups.

As part of the workflow, CodePipeline deploys a new MongoDB instance for staging onto the Kubernetes cluster and mounts the second Portworx volume that has the data from production. CodePipeline triggers the snapshot of a Portworx volume through an AWS Lambda function.
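The function’s full source is linked at the end of this post. As a rough sketch of the pattern (not the actual implementation), a CodePipeline-invoked Lambda that asks Portworx for a writable snapshot and reports back to the pipeline might look like this. The Portworx endpoint and snapshot path below are illustrative placeholders rather than the documented Portworx API; the CodePipeline callbacks are real boto3 calls.

# Minimal sketch only -- the real implementation is kube-lambda.py in the
# reference repo. PX_ENDPOINT and the /v1/snapshots path are illustrative
# placeholders, not the documented Portworx API.
import json
import os
import urllib.request

import boto3

codepipeline = boto3.client("codepipeline")
PX_ENDPOINT = os.environ.get("PX_ENDPOINT", "http://px-api.internal:9001")

def handler(event, context):
    job_id = event["CodePipeline.job"]["id"]  # job ID passed by CodePipeline
    try:
        payload = json.dumps({
            "volume": os.environ["PX_VOLUME_ID"],  # production MongoDB volume
            "writable": True,                      # block-based, writable snapshot
        }).encode()
        req = urllib.request.Request(
            PX_ENDPOINT + "/v1/snapshots",
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            snap = json.loads(resp.read())
        # Tell CodePipeline this stage succeeded so the staging deploy can proceed.
        codepipeline.put_job_success_result(jobId=job_id)
        return {"snapshot_id": snap.get("id")}
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
        raise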

AWS Developer Tools with Kubernetes: Integrated Workflow with Portworx

In the following workflow, a developer is testing changes to a containerized application that calls on MongoDB. The tests are performed against a staging instance of MongoDB. The same workflow applies if changes were on the server side. The original production deployment is scheduled as a Kubernetes deployment object and uses Portworx as the storage for the persistent volume.

The continuous deployment pipeline runs as follows:

  • Developers integrate bug-fix changes into a main development branch that gets merged into a CodeCommit master branch.
  • Amazon CloudWatch triggers the pipeline when code is merged into the master branch of an AWS CodeCommit repository.
  • AWS CodePipeline sends the new revision to AWS CodeBuild, which builds a Docker container image with the build ID.
  • AWS CodeBuild pushes the new Docker container image, tagged with the build ID, to an Amazon ECR registry.
  • Kubernetes downloads the new container (for the database client) from Amazon ECR and deploys the application (as a pod) and the staging MongoDB instance (as a deployment object).
  • AWS CodePipeline, through a Lambda function, calls Portworx to snapshot the production MongoDB and deploy a staging instance of MongoDB.
  • Portworx provides a snapshot of the production instance as the persistent storage for the staging MongoDB.
  • The MongoDB instance mounts the snapshot; a sketch of this deployment step follows the list.
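To make that last step concrete, here is a rough sketch, using the official Kubernetes Python client, of deploying the staging MongoDB against the snapshot-backed volume. The namespace, claim name, and labels are hypothetical placeholders; in the reference architecture this step is driven by CodePipeline rather than run by hand.

# Illustrative sketch: deploy a staging MongoDB that mounts a PVC assumed to be
# bound to the Portworx snapshot volume. Names such as "mongo-staging" and
# "px-snap-pvc" are made up for this example.
from kubernetes import client, config

def deploy_staging_mongo(namespace="staging"):
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    container = client.V1Container(
        name="mongo",
        image="mongo:3.6",
        ports=[client.V1ContainerPort(container_port=27017)],
        volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data/db")],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "mongo-staging"}),
        spec=client.V1PodSpec(
            containers=[container],
            volumes=[client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="px-snap-pvc",  # PVC bound to the snapshot volume
                ),
            )],
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="mongo-staging"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "mongo-staging"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace, deployment)

if __name__ == "__main__":
    deploy_staging_mongo()

In the repository the equivalent spec ships as part of the pipeline’s deployment manifests; the client form above just makes the volume wiring explicit.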

At this point, the staging setup mimics a production environment. Teams can run integration and full end-to-end tests, using partner tooling, without impacting production workloads.

Summary

This reference architecture showcases how development teams can easily move data between production and staging for the purposes of testing. Instead of taking application-specific manual steps, all operations in this CodePipeline architecture are automated and tracked as part of the CI/CD process.

This integrated experience is part of making stateful containers as easy as stateless ones. With AWS CodePipeline handling CI/CD, developers can easily deploy stateful containers onto a Kubernetes cluster with Portworx storage and automate data movement within their process.

The reference architecture and code are available on GitHub:

● Reference architecture: https://github.com/portworx/aws-kube-codesuite
● Lambda function source code for Portworx additions: https://github.com/portworx/aws-kube-codesuite/blob/master/src/kube-lambda.py

For more information about persistent storage for containers, visit the Portworx website. For more information about CodePipeline, see the AWS CodePipeline User Guide.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.

Inside Hackspace magazine 6

Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.


Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!
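Andrew’s exact build isn’t reproduced here, but the capture-recognize-speak loop he describes might look roughly like this in Python, assuming a Pi camera module, Tesseract via pytesseract, and the pyttsx3 speech engine:

# A minimal sketch of the capture -> OCR -> speech loop. Library choices
# (picamera, pytesseract, pyttsx3) are assumptions, not Andrew's actual stack.
import time

import pytesseract
import pyttsx3
from picamera import PiCamera
from PIL import Image

IMAGE_PATH = "/tmp/page.jpg"  # scratch file for each capture

def read_aloud():
    camera = PiCamera()
    try:
        time.sleep(2)                # let auto-exposure settle
        camera.capture(IMAGE_PATH)   # grab a frame of the page
    finally:
        camera.close()
    text = pytesseract.image_to_string(Image.open(IMAGE_PATH))
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()          # block until speech finishes

if __name__ == "__main__":
    read_aloud()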

Radio

60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.

Tutorials

If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.
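As a starting point, step detection can be as simple as watching the accelerometer magnitude for jolts. This CircuitPython sketch for the Circuit Playground Express flashes the onboard NeoPixels on each step; the threshold and colour are guesses to tune on the bag:

# Rough sketch: flash the NeoPixels when the accelerometer sees a
# walking-like jolt. THRESHOLD is a guess; tune it for your gait.
import math
import time

from adafruit_circuitplayground.express import cpx

THRESHOLD = 11.8  # m/s^2, a little above 1 g

while True:
    x, y, z = cpx.acceleration
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude > THRESHOLD:
        cpx.pixels.fill((0, 40, 0))   # green flash on each detected step
        time.sleep(0.1)
        cpx.pixels.fill((0, 0, 0))    # lights off again
    time.sleep(0.05)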


We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.


All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

The post Hackspace magazine 6: Paper Engineering appeared first on Raspberry Pi.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/751454/rss

Security updates have been issued by CentOS (libvorbis and thunderbird), Debian (pjproject), Fedora (compat-openssl10, java-1.8.0-openjdk-aarch32, libid3tag, python-pip, python3, and python3-docs), Gentoo (ZendFramework), Oracle (thunderbird), Red Hat (ansible, gcc, glibc, golang, kernel, kernel-alt, kernel-rt, krb5, kubernetes, libvncserver, libvorbis, ntp, openssh, openssl, pcs, policycoreutils, qemu-kvm, and xdg-user-dirs), SUSE (openssl and openssl1), and Ubuntu (python-crypto, ubuntu-release-upgrader, and wayland).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/751346/rss

Security updates have been issued by Arch Linux (openssl and zziplib), Debian (ldap-account-manager, ming, python-crypto, sam2p, sdl-image1.2, and squirrelmail), Fedora (bchunk, koji, libidn, librelp, nodejs, and php), Gentoo (curl, dhcp, libvirt, mailx, poppler, qemu, and spice-vdagent), Mageia (389-ds-base, aubio, cfitsio, libvncserver, nmap, and ntp), openSUSE (GraphicsMagick, ImageMagick, spice-gtk, and wireshark), Oracle (kubernetes), Slackware (patch), and SUSE (apache2 and openssl).

timeShift(GrafanaBuzz, 1w) Issue 38

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/03/30/timeshiftgrafanabuzz-1w-issue-38/

Welcome to TimeShift

We have an abridged version of timeShift this week due to the long holiday weekend. Earlier this week we released Grafana v5.0.4, which included fixes for alerting, snapshots, starting Grafana on K8s, and more. See the section below for specific bug fixes. Enjoy this issue and we’ll see you next week!

Latest Stable Release

This week we rolled out Grafana 5.0.4. Bug fixes in the latest release include: Docker: Can’t start Grafana on Kubernetes 1.

Facebook and Cambridge Analytica

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/facebook_and_ca.html

In the wake of the Cambridge Analytica scandal, news articles and commentators have focused on what Facebook knows about us. A lot, it turns out. It collects data from our posts, our likes, our photos, things we type and delete without posting, and things we do while not on Facebook and even when we’re offline. It buys data about us from others. And it can infer even more: our sexual orientation, political beliefs, relationship status, drug use, and other personality traits — even if we didn’t take the personality test that Cambridge Analytica developed.

But for every article about Facebook’s creepy stalker behavior, thousands of other companies are breathing a collective sigh of relief that it’s Facebook and not them in the spotlight. Because while Facebook is one of the biggest players in this space, there are thousands of other companies that spy on and manipulate us for profit.

Harvard Business School professor Shoshana Zuboff calls it “surveillance capitalism.” And as creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it’s up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and — if not — what to do about it.

There are 2,500 to 4,000 data brokers in the United States whose business is buying and selling our personal data. Last year, Equifax was in the news when hackers stole personal information on 150 million people, including Social Security numbers, birth dates, addresses, and driver’s license numbers.

You certainly didn’t give it permission to collect any of that information. Equifax is one of those thousands of data brokers, most of which you’ve never heard of, that sell your personal information without your knowledge or consent to pretty much anyone who will pay for it.

Surveillance capitalism takes this one step further. Companies like Facebook and Google offer you free services in exchange for your data. Google’s surveillance isn’t in the news, but it’s startlingly intimate. We never lie to our search engines. Our interests and curiosities, hopes and fears, desires and sexual proclivities, are all collected and saved. Add to that the websites we visit that Google tracks through its advertising network, our Gmail accounts, our movements via Google Maps, and what it can collect from our smartphones.

That phone is probably the most intimate surveillance device ever invented. It tracks our location continuously, so it knows where we live, where we work, and where we spend our time. It’s the first and last thing we check in a day, so it knows when we wake up and when we go to sleep. We all have one, so it knows who we sleep with. Uber used just some of that information to detect one-night stands; your smartphone provider and any app you allow to collect location data knows a lot more.

Surveillance capitalism drives much of the internet. It’s behind most of the “free” services, and many of the paid ones as well. Its goal is psychological manipulation, in the form of personalized advertising to persuade you to buy something or do something, like vote for a candidate. And while the individualized profile-driven manipulation exposed by Cambridge Analytica feels abhorrent, it’s really no different from what every company wants in the end. This is why all your personal information is collected, and this is why it is so valuable. Companies that can understand it can use it against you.

None of this is new. The media has been reporting on surveillance capitalism for years. In 2015, I wrote a book about it. Back in 2010, the Wall Street Journal published an award-winning two-year series about how people are tracked both online and offline, titled “What They Know.”

Surveillance capitalism is deeply embedded in our increasingly computerized society, and if the extent of it came to light there would be broad demands for limits and regulation. But because this industry can largely operate in secret, only occasionally exposed after a data breach or investigative report, we remain mostly ignorant of its reach.

This might change soon. In 2016, the European Union passed the comprehensive General Data Protection Regulation, or GDPR. The details of the law are far too complex to explain here, but some of the things it mandates are that personal data of EU citizens can only be collected and saved for “specific, explicit, and legitimate purposes,” and only with explicit consent of the user. Consent can’t be buried in the terms and conditions, nor can it be assumed unless the user opts in. This law will take effect in May, and companies worldwide are bracing for its enforcement.

Because pretty much all surveillance capitalism companies collect data on Europeans, this will expose the industry like nothing else. Here’s just one example. In preparation for this law, PayPal quietly published a list of over 600 companies it might share your personal data with. What will it be like when every company has to publish this sort of information, and explicitly explain how it’s using your personal data? We’re about to find out.

In the wake of this scandal, even Mark Zuckerberg said that his industry probably should be regulated, although he’s certainly not wishing for the sorts of comprehensive regulation the GDPR is bringing to Europe.

He’s right. Surveillance capitalism has operated without constraints for far too long. And advances in both big data analysis and artificial intelligence will make tomorrow’s applications far creepier than today’s. Regulation is the only answer.

The first step to any regulation is transparency. Who has our data? Is it accurate? What are they doing with it? Who are they selling it to? How are they securing it? Can we delete it? I don’t see any hope of Congress passing a GDPR-like data protection law anytime soon, but it’s not too far-fetched to demand laws requiring these companies to be more transparent in what they’re doing.

One of the responses to the Cambridge Analytica scandal is that people are deleting their Facebook accounts. It’s hard to do right, and doesn’t do anything about the data that Facebook collects about people who don’t use Facebook. But it’s a start. The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows.

This essay previously appeared on CNN.com.

EDITED TO ADD (4/2): Slashdot thread.

Kubernetes 1.10 released

Post Syndicated from ris original https://lwn.net/Articles/750236/rss

Kubernetes 1.10 has been released. “This newest version stabilizes features in 3 key areas, including storage, security, and networking. Notable additions in this release include the introduction of external kubectl credential providers (alpha), the ability to switch DNS service to CoreDNS at install time (beta), and the move of Container Storage Interface (CSI) and persistent local volumes to beta.”

Are technologies outpacing policies?

Post Syndicated from Bozho original https://blog.bozho.net/blog/3069

(this article was originally published in „Мениджър“ magazine)

Technological development today is so fast that legislation and national policy on a whole range of issues appear to be lagging seriously behind.

This lag has three aspects. The first concerns existing business models and practices that are being transformed by technology. Taxi and hotel services are an example: Uber and Airbnb were not possible 10-15 years ago, and so the legislation in those areas does not allow for them; taxis must have a taximeter and be yellow, and hotels must be categorized and meet fixed requirements. New technologies make it possible to build reputation systems (for taxis or hotels, say) based on user ratings rather than on what the state considers proper. Legislation therefore lags behind reality and has to catch up.

The second aspect of the lag concerns technologies that change social relations entirely. Artificial intelligence is one such area, and in a certain sense so are blockchain and cryptocurrencies. They have the potential to create entirely new social relations, and when disputes arise, the resolutions are very hard to map onto existing practice.

The third aspect of lagging policy is e-government, i.e. the application of new technologies within the public administration. Because projects take several years between idea and implementation, the newest good technological practices can rarely be applied. That, however, is not the essential problem, since even good practices that are a decade old are not being applied.

In fact, fundamental technological shifts do not happen all that quickly. The internet and the web took more than a decade to gather momentum, and social networks and smartphones have also been around for over ten years. So the excuse that technology changes too quickly is not entirely acceptable. The perceived lag of public policy is rather the result of several factors: the inertia of the administration, the lack of technological adequacy in the political leadership, and the inability to write technology-neutral regulation.

The inertia and sluggishness of the administration are a natural phenomenon which, until recently, could even be seen as an advantage, lending stability and continuity to institutions. To this we can add a lack of readiness to use new technologies. Nor is political leadership a source of innovation. Even in Western democracies there are few technologically minded leaders, and (with the exception of Estonia) technologies reach the political elite only once they have gained mass popularity.

Technology-neutral regulation is a skill that neither the inert administration nor the “non-technological” leadership has managed to master. It is the skill of writing laws in such a way that they do not have to be amended as technology advances. For example, it is unwise to fix the technology “taximeter” in law, because once other ways of reliably measuring distance appear, the law will have to change. From an e-government perspective, likewise, it is unwise for the electronic identification law to stipulate that identification lives on a card, because the same level of security will soon be provided by smartphones. With technology-neutral regulation, new technologies will not cause such a lag, because adopting them will not have to wait for a decision of the National Assembly.

Proactively building e-government and writing technology-neutral regulation will reduce the sense that policy lags behind technology. For entirely new technologies that create fundamentally different social relations, however, the lag is inevitable. The law follows the real and the virtual world. Legal norms always arrive with a delay and cannot precede new developments.

Will the lack of regulation of artificial intelligence or cryptocurrencies lead to serious problems? Hardly, since their development is also relatively slow. Both have existed for about a decade, which leaves public policy enough time to take advantage of them, and to regulate them.

After all, innovation happens largely in the private sector; the public sector should make use of it and, at most, regulate it only once it has become established. In that sense, technologies that outpace policies are not something we should be worried about.

N-O-D-E’s always-on networked Pi Plug

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/node-pi-plug/

N-O-D-E’s Pi Plug is a simple approach to using a Raspberry Pi Zero W as an always-on networked device without a tangle of wires.

Pi Plug 2: Turn The Pi Zero Into A Mini Server

Today I’m back with an update on the Pi Plug I made a while back. This prototype is still in the works, and is much more modular than the previous version. https://N-O-D-E.net/piplug2.html https://github.com/N-O-D-E/piplug

The Pi Zero Power Case

In a video early last year, YouTuber N-O-D-E revealed his Pi Zero Power Case, an all-in-one always-on networked computer that fits snugly against a wall power socket.


The project uses an official Raspberry Pi power supply, a Zero4U USB hub, and a Raspberry Pi Zero W, and it allows completely wireless connection to a network. N-O-D-E cut the power cord and soldered its wires directly to the power input of the USB hub. The hub powers the Zero via pogo pins that connect directly to the test pads beneath.

The Power Case is a neat project, but it may be a little daunting for anyone not keen on cutting and soldering the power supply wires.

Pi Plug 2

In his overhaul of the design, N-O-D-E has created a modular reimagining of the previous always-on networked computer that fits more neatly against the wall socket and requires absolutely no soldering or hacking of physical hardware.


The Pi Plug 2 uses a USB power supply alongside two custom PCBs and a Zero W. While one PCB houses a USB connector that slots directly into the power supply, two blobs of solder on the second PCB press against the test pads beneath the Zero W. When connected, the PCBs run power directly from the wall socket to the Raspberry Pi Zero W. Neat!


While N-O-D-E isn’t currently selling these PCBs in his online store, all files are available on GitHub, so have a look if you want to recreate the Pi Plug.

Uses

In another video — and seriously, if you haven’t checked out N-O-D-E’s YouTube channel yet, you really should — he demonstrates a few changes that can turn your Zero into a USB dongle computer. This is a great hack if you don’t want to carry a power supply around in your pocket. As N-O-D-E explains:

Besides simply SSH’ing into the Pi, you could also easily install a remote desktop client and use the GUI. You can share your computer’s internet connection with the Pi and use it just like you would normally, but now without the need for a monitor, chargers, adapters, cables, or peripherals.

We’re keen to see how our community is hacking their Zeros and Zero Ws in order to take full advantage of the small footprint of the computer, so be sure to share your projects and ideas with us, either in the comments below or via social media.

The post N-O-D-E’s always-on networked Pi Plug appeared first on Raspberry Pi.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and more accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company raised a $100 million Series E last November, bringing its total funding to $235 million, and currently powers the digital product design process at more than 80 percent of the Fortune 100 and at brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina