Tag Archives: uber

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:


Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real time, including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.


AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices, and tools for running your Microsoft workloads on AWS with a well-architected approach.

June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.


Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set-up of best-practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.


Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.


Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.


June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.


Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.



June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.



June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Kata Containers 1.0

Post Syndicated from ris original https://lwn.net/Articles/755230/rss

Kata Containers 1.0 has been released. “This first release of Kata Containers completes the merger of Intel’s Clear Containers and Hyper’s runV technologies, and delivers an OCI compatible runtime with seamless integration for container ecosystem technologies like Docker and Kubernetes.”

[$] Securing the container image supply chain

Post Syndicated from corbet original https://lwn.net/Articles/754443/rss

“Security is hard” is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes security, but those are mostly concerned with container isolation; they do not address the question of trusting a container’s contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.

[$] Updates in container isolation

Post Syndicated from corbet original https://lwn.net/Articles/754433/rss

At KubeCon + CloudNativeCon Europe 2018, several talks explored the topic of container isolation and security. The last year saw the release of Kata Containers which, combined with the CRI-O project, provided strong isolation guarantees for containers using a hypervisor. During the conference, Google released its own hypervisor called gVisor, adding yet another possible solution for this problem. Those new developments prompted the community to work on integrating the concept of “secure containers” (or “sandboxed containers”) deeper into Kubernetes. This work is now coming to fruition; it prompts us to look again at how Kubernetes tries to keep the bad guys from wreaking havoc once they break into a container.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/754653/rss

Security updates have been issued by CentOS (dhcp), Debian (xen), Fedora (dhcp, flac, kubernetes, leptonica, libgxps, LibRaw, matrix-synapse, mingw-LibRaw, mysql-mmm, patch, seamonkey, webkitgtk4, and xen), Mageia (389-ds-base, exempi, golang, graphite2, libpam4j, libraw, libsndfile, libtiff, perl, quassel, spring-ldap, util-linux, and wget), Oracle (dhcp and kernel), Red Hat (389-ds-base, chromium-browser, dhcp, docker-latest, firefox, kernel-alt, libvirt, qemu-kvm, redhat-virtualization-host, rh-haproxy18-haproxy, and rhvm-appliance), Scientific Linux (389-ds-base, dhcp, firefox, libvirt, and qemu-kvm), and Ubuntu (poppler).

[$] Autoscaling for Kubernetes workloads

Post Syndicated from corbet original https://lwn.net/Articles/754153/rss

Technologies like containers, clusters, and Kubernetes offer the prospect of rapidly scaling the available computing resources to match variable demands placed on the system. Actually implementing that scaling can be a challenge, though. During KubeCon + CloudNativeCon Europe 2018, Frederic Branczyk from CoreOS (now part of Red Hat) held a packed session to introduce a standard and officially recommended way to scale workloads automatically in Kubernetes.

Security updates for Friday

Post Syndicated from ris original https://lwn.net/Articles/754257/rss

Security updates have been issued by Arch Linux (libmupdf, mupdf, mupdf-gl, and mupdf-tools), Debian (firebird2.5, firefox-esr, and wget), Fedora (ckeditor, drupal7, firefox, kubernetes, papi, perl-Dancer2, and quassel), openSUSE (cairo, firefox, ImageMagick, libapr1, nodejs6, php7, and tiff), Red Hat (qemu-kvm-rhev), Slackware (mariadb), SUSE (xen), and Ubuntu (openjdk-8).

CI/CD with Data: Enabling Data Portability in a Software Delivery Pipeline with AWS Developer Tools, Kubernetes, and Portworx

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/cicd-with-data-enabling-data-portability-in-a-software-delivery-pipeline-with-aws-developer-tools-kubernetes-and-portworx/

This post is written by Eric Han, Vice President of Product Management at Portworx, and Asif Khan, Solutions Architect.

Data is the soul of an application. As containers make it easier to package and deploy applications faster, testing plays an even more important role in the reliable delivery of software. Given that all applications have data, development teams want a way to reliably control, move, and test using real application data or, at times, obfuscated data.

For many teams, moving application data through a CI/CD pipeline, while honoring compliance and maintaining separation of concerns, has been a manual task that doesn’t scale. At best, it is limited to a few applications, and is not portable across environments. The goal should be to make running and testing stateful containers (think databases and message buses, where operations are tracked) as easy as stateless ones (such as web front ends, where they often are not).

Why is state important in testing scenarios? One reason is that many bugs manifest only when code is tested against real data. For example, we might simply want to test a database schema upgrade but a small synthetic dataset does not exercise the critical, finer corner cases in complex business logic. If we want true end-to-end testing, we need to be able to easily manage our data or state.

In this blog post, we define a CI/CD pipeline reference architecture that can automate data movement between applications. We also provide the steps to follow to configure the CI/CD pipeline.


Stateful Pipelines: Need for Portable Volumes

As part of continuous integration, testing, and deployment, a team may need to reproduce a bug found in production against a staging setup. Here, the hosting environment consists of a cluster with Kubernetes as the scheduler and Portworx for persistent volumes. The testing workflow is then automated by AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

Portworx offers Kubernetes storage that can be used to make persistent volumes portable between AWS environments and pipelines. The addition of Portworx to the AWS Developer Tools continuous deployment for Kubernetes reference architecture adds persistent storage and storage orchestration to a Kubernetes cluster. The example uses MongoDB as the demonstration of a stateful application. In practice, the workflow applies to any containerized application such as Cassandra, MySQL, Kafka, and Elasticsearch.
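Portworx volumes are consumed in Kubernetes through ordinary persistent volume claims. As a rough, hypothetical sketch only (the storage class name and size below are assumptions, not values taken from the reference architecture), a claim for the MongoDB volume could be created with the Kubernetes Python client like this:

    from kubernetes import client, config

    def create_portworx_pvc(name="px-mongo-pvc", namespace="default"):
        """Request a persistent volume from a (hypothetical) Portworx storage class."""
        config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
        pvc = client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                storage_class_name="portworx-sc",  # assumed storage class name
                resources=client.V1ResourceRequirements(requests={"storage": "10Gi"})))
        client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace=namespace, body=pvc)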

Using the reference architecture, a developer calls CodePipeline to trigger a snapshot of the running production MongoDB database. Portworx then creates a block-based, writable snapshot of the MongoDB volume. Meanwhile, the production MongoDB database continues serving end users and is uninterrupted.

Without the Portworx integrations, a manual process would require an application-level backup of the database instance that is outside of the CI/CD process. For larger databases, this could take hours and impact production. The use of block-based snapshots follows best practices for resilient and non-disruptive backups.

As part of the workflow, CodePipeline deploys a new MongoDB instance for staging onto the Kubernetes cluster and mounts the second Portworx volume that has the data from production. CodePipeline triggers the snapshot of a Portworx volume through an AWS Lambda function.
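The actual Lambda source is linked at the end of this post. As a rough sketch only (the Portworx endpoint, snapshot path, and environment variable names below are hypothetical, not taken from that code), such a function might look like this:

    import json
    import os
    import urllib.request

    import boto3

    codepipeline = boto3.client('codepipeline')

    def handler(event, context):
        """Invoked by CodePipeline: ask Portworx to snapshot the production volume."""
        job_id = event['CodePipeline.job']['id']
        try:
            endpoint = os.environ['PORTWORX_API']   # e.g. an in-cluster Portworx REST endpoint (assumed)
            volume = os.environ['PROD_VOLUME']      # name of the production MongoDB volume (assumed)
            body = json.dumps({'volume': volume, 'name': volume + '-staging'}).encode()
            req = urllib.request.Request(
                endpoint + '/v1/snapshots',         # hypothetical snapshot path; see the linked repo
                data=body,
                headers={'Content-Type': 'application/json'},
                method='POST')
            urllib.request.urlopen(req)
            codepipeline.put_job_success_result(jobId=job_id)   # let the pipeline continue
        except Exception as exc:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={'type': 'JobFailed', 'message': str(exc)})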




AWS Developer Tools with Kubernetes: Integrated Workflow with Portworx

In the following workflow, a developer is testing changes to a containerized application that calls on MongoDB. The tests are performed against a staging instance of MongoDB. The same workflow applies if changes were on the server side. The original production deployment is scheduled as a Kubernetes deployment object and uses Portworx as the storage for the persistent volume.

The continuous deployment pipeline runs as follows:

  • Developers integrate bug fix changes into a main development branch that gets merged into a CodeCommit master branch.
  • Amazon CloudWatch triggers the pipeline when code is merged into a master branch of an AWS CodeCommit repository.
  • AWS CodePipeline sends the new revision to AWS CodeBuild, which builds a Docker container image with the build ID.
  • AWS CodeBuild pushes the new Docker container image tagged with the build ID to an Amazon ECR registry.
  • Kubernetes downloads the new container (for the database client) from Amazon ECR and deploys the application (as a pod) and staging MongoDB instance (as a deployment object).
  • AWS CodePipeline, through a Lambda function, calls Portworx to snapshot the production MongoDB and deploy a staging instance of MongoDB.
  • Portworx provides a snapshot of the production instance as the persistent storage of the staging MongoDB.
    • The MongoDB instance mounts the snapshot.

At this point, the staging setup mimics a production environment. Teams can run integration and full end-to-end tests, using partner tooling, without impacting production workloads. The full pipeline is shown here.



This reference architecture showcases how development teams can easily move data between production and staging for the purposes of testing. Instead of taking application-specific manual steps, all operations in this CodePipeline architecture are automated and tracked as part of the CI/CD process.

This integrated experience is part of making stateful containers as easy as stateless. With AWS CodePipeline for CI/CD process, developers can easily deploy stateful containers onto a Kubernetes cluster with Portworx storage and automate data movement within their process.

The reference architecture and code are available on GitHub:

● Reference architecture: https://github.com/portworx/aws-kube-codesuite
● Lambda function source code for Portworx additions: https://github.com/portworx/aws-kube-codesuite/blob/master/src/kube-lambda.py

For more information about persistent storage for containers, visit the Portworx website. For more information about CodePipeline, see the AWS CodePipeline User Guide.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.

Inside Hackspace magazine 6

Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.

HackSpace magazine 6 Alec Steele

Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!


60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.


If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.

We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.

HackSpace magazine 6

All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

The post Hackspace magazine 6: Paper Engineering appeared first on Raspberry Pi.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/751454/rss

Security updates have been issued by CentOS (libvorbis and thunderbird), Debian (pjproject), Fedora (compat-openssl10, java-1.8.0-openjdk-aarch32, libid3tag, python-pip, python3, and python3-docs), Gentoo (ZendFramework), Oracle (thunderbird), Red Hat (ansible, gcc, glibc, golang, kernel, kernel-alt, kernel-rt, krb5, kubernetes, libvncserver, libvorbis, ntp, openssh, openssl, pcs, policycoreutils, qemu-kvm, and xdg-user-dirs), SUSE (openssl and openssl1), and Ubuntu (python-crypto, ubuntu-release-upgrader, and wayland).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/751346/rss

Security updates have been issued by Arch Linux (openssl and zziplib), Debian (ldap-account-manager, ming, python-crypto, sam2p, sdl-image1.2, and squirrelmail), Fedora (bchunk, koji, libidn, librelp, nodejs, and php), Gentoo (curl, dhcp, libvirt, mailx, poppler, qemu, and spice-vdagent), Mageia (389-ds-base, aubio, cfitsio, libvncserver, nmap, and ntp), openSUSE (GraphicsMagick, ImageMagick, spice-gtk, and wireshark), Oracle (kubernetes), Slackware (patch), and SUSE (apache2 and openssl).

Facebook and Cambridge Analytica

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/facebook_and_ca.html

In the wake of the Cambridge Analytica scandal, news articles and commentators have focused on what Facebook knows about us. A lot, it turns out. It collects data from our posts, our likes, our photos, things we type and delete without posting, and things we do while not on Facebook and even when we’re offline. It buys data about us from others. And it can infer even more: our sexual orientation, political beliefs, relationship status, drug use, and other personality traits — even if we didn’t take the personality test that Cambridge Analytica developed.

But for every article about Facebook’s creepy stalker behavior, thousands of other companies are breathing a collective sigh of relief that it’s Facebook and not them in the spotlight. Because while Facebook is one of the biggest players in this space, there are thousands of other companies that spy on and manipulate us for profit.

Harvard Business School professor Shoshana Zuboff calls it “surveillance capitalism.” And as creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it’s up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and — if not — what to do about it.

There are 2,500 to 4,000 data brokers in the United States whose business is buying and selling our personal data. Last year, Equifax was in the news when hackers stole personal information on 150 million people, including Social Security numbers, birth dates, addresses, and driver’s license numbers.

You certainly didn’t give it permission to collect any of that information. Equifax is one of those thousands of data brokers, most of them you’ve never heard of, selling your personal information without your knowledge or consent to pretty much anyone who will pay for it.

Surveillance capitalism takes this one step further. Companies like Facebook and Google offer you free services in exchange for your data. Google’s surveillance isn’t in the news, but it’s startlingly intimate. We never lie to our search engines. Our interests and curiosities, hopes and fears, desires and sexual proclivities, are all collected and saved. Add to that the websites we visit that Google tracks through its advertising network, our Gmail accounts, our movements via Google Maps, and what it can collect from our smartphones.

That phone is probably the most intimate surveillance device ever invented. It tracks our location continuously, so it knows where we live, where we work, and where we spend our time. It’s the first and last thing we check in a day, so it knows when we wake up and when we go to sleep. We all have one, so it knows who we sleep with. Uber used just some of that information to detect one-night stands; your smartphone provider and any app you allow to collect location data knows a lot more.

Surveillance capitalism drives much of the internet. It’s behind most of the “free” services, and many of the paid ones as well. Its goal is psychological manipulation, in the form of personalized advertising to persuade you to buy something or do something, like vote for a candidate. And while the individualized profile-driven manipulation exposed by Cambridge Analytica feels abhorrent, it’s really no different from what every company wants in the end. This is why all your personal information is collected, and this is why it is so valuable. Companies that can understand it can use it against you.

None of this is new. The media has been reporting on surveillance capitalism for years. In 2015, I wrote a book about it. Back in 2010, the Wall Street Journal published an award-winning two-year series about how people are tracked both online and offline, titled “What They Know.”

Surveillance capitalism is deeply embedded in our increasingly computerized society, and if the extent of it came to light there would be broad demands for limits and regulation. But because this industry can largely operate in secret, only occasionally exposed after a data breach or investigative report, we remain mostly ignorant of its reach.

This might change soon. In 2016, the European Union passed the comprehensive General Data Protection Regulation, or GDPR. The details of the law are far too complex to explain here, but some of the things it mandates are that personal data of EU citizens can only be collected and saved for “specific, explicit, and legitimate purposes,” and only with explicit consent of the user. Consent can’t be buried in the terms and conditions, nor can it be assumed unless the user opts in. This law will take effect in May, and companies worldwide are bracing for its enforcement.

Because pretty much all surveillance capitalism companies collect data on Europeans, this will expose the industry like nothing else. Here’s just one example. In preparation for this law, PayPal quietly published a list of over 600 companies it might share your personal data with. What will it be like when every company has to publish this sort of information, and explicitly explain how it’s using your personal data? We’re about to find out.

In the wake of this scandal, even Mark Zuckerberg said that his industry probably should be regulated, although he’s certainly not wishing for the sorts of comprehensive regulation the GDPR is bringing to Europe.

He’s right. Surveillance capitalism has operated without constraints for far too long. And advances in both big data analysis and artificial intelligence will make tomorrow’s applications far creepier than today’s. Regulation is the only answer.

The first step to any regulation is transparency. Who has our data? Is it accurate? What are they doing with it? Who are they selling it to? How are they securing it? Can we delete it? I don’t see any hope of Congress passing a GDPR-like data protection law anytime soon, but it’s not too far-fetched to demand laws requiring these companies to be more transparent in what they’re doing.

One of the responses to the Cambridge Analytica scandal is that people are deleting their Facebook accounts. It’s hard to do right, and doesn’t do anything about the data that Facebook collects about people who don’t use Facebook. But it’s a start. The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows.

This essay previously appeared on CNN.com.

EDITED TO ADD (4/2): Slashdot thread.

Kubernetes 1.10 released

Post Syndicated from ris original https://lwn.net/Articles/750236/rss

Kubernetes 1.10 has been released. “This newest version stabilizes features in 3 key areas, including storage, security, and networking. Notable additions in this release include the introduction of external kubectl credential providers (alpha), the ability to switch DNS service to CoreDNS at install time (beta), and the move of Container Storage Interface (CSI) and persistent local volumes to beta.”

Are technologies outpacing policies?

Post Syndicated from Bozho original https://blog.bozho.net/blog/3069

(This article was originally published in Manager magazine.)

Technological development today is so fast that legislation and national policies on a range of issues appear to be lagging seriously behind.

This lag has three aspects. The first concerns existing business models and practices that are being changed by the development of technology. Taxi and hotel services are an example. Uber and Airbnb were not possible 10-15 years ago, and accordingly the legislation in these areas does not allow for them: taxis must have a taximeter and be yellow, and hotels must be categorized and meet certain requirements. New technologies make it possible to build reputation systems (for taxis or hotels, for example) based on user ratings rather than on what the state considers appropriate. Legislation therefore lags behind reality and has to catch up.

The second aspect of the lag concerns technologies that change social relations altogether. Artificial intelligence is one such area, and in a sense so are blockchain and cryptocurrencies. They have the potential to create entirely new social relations, and the resolution of disputes that arise can hardly be mapped onto existing practice.

The third aspect of lagging policies is in e-government, i.e. the application of new technologies in public administration. Since projects take several years between idea and delivery, the newest good technological practices can rarely be applied. That, however, is not the essential problem, since even ten-year-old good practices are not being applied.

In fact, fundamental technological change does not happen all that fast. The internet and the web took more than a decade to gain momentum, and social networks and smartphones have also been around for over ten years. So the excuse that technology changes too quickly is not entirely acceptable. Rather, the sense that public policies are falling behind is the result of several factors: the inertia of the administration, the lack of technological adequacy in the political leadership, and the inability to create technology-neutral regulation.

The inertia and sluggishness of the administration are a natural phenomenon which, until recently, could even be seen as an advantage, giving institutions stability and continuity. To this we can add the lack of readiness to use new technologies. Nor is the political leadership a source of innovation. Even in Western democracies there are not many technologically minded leaders, and (with the exception of Estonia) technologies reach the political elite only once they have gained mass popularity.

Technology-neutral regulation is a skill that neither the inert administration nor the "non-technological" leadership has managed to master. It is the skill of writing laws in such a way that they do not have to be changed as technology advances. For example, it is not sensible to fix the technology "taximeter" in law, because when other ways of reliably measuring distance appear, the law will have to be amended. From an e-government perspective, for example, it is not sensible for the electronic identification act to specify that identification lives on a card, because the same level of security will soon be provided by smartphones. With technology-neutral regulation, new technologies would not cause such a lag, because introducing them would not require waiting for a decision of the National Assembly.

Proactive development of e-government and technology-neutral regulation will reduce the sense that policies lag behind technologies. For entirely new technologies, however, which create fundamentally different social relations, this lag is unavoidable. The law follows the real and the virtual world. The creation of legal norms always comes with a delay and cannot precede new developments.

Will the lack of regulation of artificial intelligence or cryptocurrencies lead to serious problems? Hardly, since their development is also comparatively slow. They have existed for about a decade, which leaves public policy enough time both to take advantage of them and to regulate them.

After all, innovation happens largely in the private sector; the public sector should make use of it and, as a last resort, regulate it, but only after it has already become established. In that sense, technologies that outpace policies are not something we should be worried about.

N-O-D-E’s always-on networked Pi Plug

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/node-pi-plug/

N-O-D-E’s Pi Plug is a simple approach to using a Raspberry Pi Zero W as an always-on networked device without a tangle of wires.

Pi Plug 2: Turn The Pi Zero Into A Mini Server

Today I’m back with an update on the Pi Plug I made a while back. This prototype is still in the works, and is much more modular than the previous version. https://N-O-D-E.net/piplug2.html https://github.com/N-O-D-E/piplug

The Pi Zero Power Case

In a video early last year, YouTuber N-O-D-E revealed his Pi Zero Power Case, an all-in-one always-on networked computer that fits snugly against a wall power socket.

NODE Plug Raspberry Pi Plug

The project uses an official Raspberry Pi power supply, a Zero4U USB hub, and a Raspberry Pi Zero W, and it allows completely wireless connection to a network. N-O-D-E cut the power cord and soldered its wires directly to the power input of the USB hub. The hub powers the Zero via pogo pins that connect directly to the test pads beneath.

The Power Case is a neat project, but it may be a little daunting for anyone not keen on cutting and soldering the power supply wires.

Pi Plug 2

In his overhaul of the design, N-O-D-E has created a modular reimagining of the previous always-on networked computer that sits more neatly against the wall socket and requires absolutely no soldering or hacking of physical hardware.

Pi Plug

The Pi Plug 2 uses a USB power supply alongside two custom PCBs and a Zero W. While one PCB houses a USB connector that slots directly into the power supply, two blobs of solder on the second PCB press against the test pads beneath the Zero W. When connected, the PCBs run power directly from the wall socket to the Raspberry Pi Zero W. Neat!


While N-O-D-E isn’t currently selling these PCBs in his online store, all files are available on GitHub, so have a look if you want to recreate the Pi Plug.


In another video — and seriously, if you haven’t checked out N-O-D-E’s YouTube channel yet, you really should — he demonstrates a few changes that can turn your Zero into a USB dongle computer. This is a great hack if you don’t want to carry a power supply around in your pocket. As N-O-D-E explains:

Besides simply SSH’ing into the Pi, you could also easily install a remote desktop client and use the GUI. You can share your computer’s internet connection with the Pi and use it just like you would normally, but now without the need for a monitor, chargers, adapters, cables, or peripherals.

We’re keen to see how our community is hacking their Zeros and Zero Ws in order to take full advantage of the small footprint of the computer, so be sure to share your projects and ideas with us, either in the comments below or via social media.

The post N-O-D-E’s always-on networked Pi Plug appeared first on Raspberry Pi.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million series E last November, bringing the company’s total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!


A vision for an electronic future

Post Syndicated from Bozho original https://blog.bozho.net/blog/3047

I do not consider myself a visionary, mostly because I feel the need to be able to picture almost all the steps required to realize anything I propose. And then it is not exactly a "vision" but rather a "plan". Be that as it may, I recently started thinking about what I would like us to have once we implement the e-government roadmap (which is a fairly concrete plan). Something like... a vision for 2025. And I came up with the following list:

  • Electronic citizenship – the Estonians already have this, and in the specification for the electronic identification system we built in flexibility of identifiers. At the moment these are only the national identity number (EGN) and the personal number of a foreigner (LNCh), but if legislation allowed electronic identity to be issued to arbitrary foreign citizens, this would be possible with the existing system. It would let foreigners open companies, pay taxes and run a digital business without ever having set foot in the country.
  • Flexible electronic identification – at the moment electronic identification is planned on a physical carrier (a smart card). That is not the most convenient solution, but for the desired security levels it is more or less the only option. In 5-6 years, though, mobile phones and other portable devices will, I hope, offer the same level of security (hybrid schemes with a key split between a phone and an HSM are possible today, but let us not go into the details).
  • Full citizen control over their data – everyone should be able to decide who has access to their data and to see when it has been read, not only in the public sector but in the private one as well. Technologically this looks hard, but for the public sector it is entirely achievable, and for the private sector I hope that advances in cryptography will let us manage our data without resorting to "crypto-shamanism" that is difficult even for advanced users.
  • All systems should have programming interfaces and "talk" to each other. This is already set as a requirement, but it will become reality in 4-5 years at the earliest. It will turn the state's systems (and not only them) into Lego blocks from which both the state and business will be able to assemble new applications.
  • A single portal for citizens – this too is planned as a "first version" alongside the electronic identification system, but in its full form it would look like this: you log in to the portal (with your electronic identity) and see all the data held about you, all the taxes and fees you owe (and the history of their payment), deadlines for obligations or recommended actions (vehicle inspections, third-party liability insurance, renewing your identity card, enrolling a child in a kindergarten or school, preventive medical check-ups), and all of it can be done with a single button: "pay taxes", "enrol child in kindergarten X", "renew liability insurance", "book an appointment with your GP", and so on. Some things could even happen automatically, with us only receiving notifications that they have happened. This would of course require optimizing, or rebuilding from scratch, hundreds of processes, but it is achievable.
  • Paper in the administration and in the relations between business and citizens – only in the toilet. That is a joke the Estonians make, but the less paper we shuffle back and forth, the better. And not only because of the saved forests, but because it would mean we can do everything on the move, remotely, easily and quickly.

These are things directly related to e-government. Here are a few more ideas for better governance in a broader sense. They rely on a large data repository, which may sound somewhat dystopian (as in this short story, for example), but all of it would be data the state already has and collects, and for the most part it is not data about individual citizens, i.e. it would not infringe on personal freedom:

  • Automatic impact assessment of legislation – every law or regulation is currently supposed to be adopted only after its impact (on business, for example) has been assessed. That, however, is far from always done. In my view it can be automated to a large extent: if all the state's data is available in a large repository, diligently maintained and classified, and the text of bills is not written on the back of a napkin but follows an adequate, digitized and transparent process, then bills could be analysed by machine and cross-referenced with the data, so that it is clear already at drafting time which aspects they affect. Of course this cannot be complete (unless artificial intelligence advances dramatically), but at least we would have a partial picture.
  • Artificial intelligence for identifying problem areas – by following the data mentioned above over long periods, an AI could raise "red flags" for problem areas. If the birth rate has been falling for 7-8 years in a row, perhaps a policy addressing the problem is needed; if foreign investment is declining, a policy for attracting it is needed; if the number of children left without kindergarten places is growing, a policy for providing them is urgently needed (incentives for private ones, accelerated construction of municipal ones, etc.); if the air has been polluted for a prolonged period... and so on. And every person in charge (a minister or a mayor) should have a dashboard showing the problem areas ranked by risk and priority.
  • A system for identifying corruption – the project for corruption-risk analysis proposed in the roadmap includes automated analysis, but it is not proactive; based on the data repository, corruption could be identified far more effectively.

There are, however, many other aspects of digitization:

  • Direct civic participation – not only remote electronic voting and electronic referendums, but the possibility of taking an active part in decision-making. I do not think representative democracy is bad, or that direct democracy will necessarily solve all problems, but more opportunities for electronic civic participation (for example in votes in parliamentary committees, or in the drafting of bills) would certainly mean a more democratically conscious society.
  • Digital literacy (e-literacy) – Bulgaria is currently in last place in Europe for digital literacy. We do not use the opportunities the new technologies provide, we do not find our way around the internet labyrinth, we believe fake news, we do not know how to communicate online, and so on. All of this can and must improve, so that we do not fall behind and do not become poorer because of that lag. A policy from the ministry of education is necessary but not sufficient. Digital literacy has to be built at all levels and at all ages. And not just "how to use a computer to call the kids abroad" – even being able to write simple programs, if you like, because that would raise our effectiveness many times over, regardless of profession.
  • The IT industry should move beyond outsourcing – beyond performing trivial (but time-consuming) tasks for large companies. We have the potential to solve global and local problems, and for at least one of the e-giants of the future to be based here. Yes, that requires not only technological expertise and entrepreneurship but also an investor ecosystem, but we are making progress in that respect.
  • A real digital single market in Europe (and why not Europe plus the US plus other countries). At the moment the regulations in the various European countries are so different that if a business wants to sell everywhere, it has to hire lawyers in every country (figuratively speaking). The attempts of the current Commission were not enough, and many areas remained either unharmonized or harmonized only pro forma (the copyright directive, about which I will write soon, shows no sign of achieving the desired effect). Whether European regulations and directives remove local peculiarities or harmonize them between nations, the result must be the same: the only difference between Bulgaria, France, Estonia and Spain should be the language of the user interface. And that should be machine-translated within the next 5-6 years.
  • Enabling a real sharing economy through smart and flexible regulation and deregulation – I am not claiming that Uber and AirBNB "are the future", but such models, and even more decentralized models of providing services, should be permissible rather than operating "on the edge of the law". Reputation schemes for drivers, hotels, restaurants and whatever else should not be a state monopoly; the state should delegate them to those who are more technologically adequate.

"Well, that thing of yours is not exactly a vision." Perhaps it is not, but at least it is part of my notion of what is possible and achievable in ten years. And merely having a list of ideas that occurred to me on the tram is not how any of them will happen. But perhaps it is a first step which, combined with enough activity, a tailwind and some luck, might actually come true.

But why should everything be digital? Why this push toward digitization, toward moving into the virtual world? Is that not bad, risky, cutting us off from our roots, dystopian? I do not think so. Technology is, and will remain, only a means rather than an end in itself, but as a means it can be very effective: at making people individually, and societies as a whole, happier, richer (both materially and immaterially) and even better versions of themselves. We "only" have to learn how to use it.

Task Networking in AWS Fargate

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/

AWS Fargate is a technology that allows you to focus on running your application without needing to provision, monitor, or manage the underlying compute infrastructure. You package your application into a Docker container that you can then launch using your container orchestration tool of choice.

Fargate allows you to use containers without being responsible for Amazon EC2 instances, similar to how EC2 allows you to run VMs without managing physical infrastructure. Currently, Fargate provides support for Amazon Elastic Container Service (Amazon ECS). Support for Amazon Elastic Container Service for Kubernetes (Amazon EKS) will be made available in the near future.

Despite offloading the responsibility for the underlying instances, Fargate still gives you deep control over configuration of network placement and policies. This includes the ability to use many networking fundamentals such as Amazon VPC and security groups.

This post covers how to take advantage of the different ways of networking your containers in Fargate when using ECS as your orchestration platform, with a focus on how to do networking securely.

The first step to running any application in Fargate is defining an ECS task for Fargate to launch. A task is a logical group of one or more Docker containers that are deployed with specified settings. When running a task in Fargate, there are two different forms of networking to consider:

  • Container (local) networking
  • External networking

Container Networking

Container networking is often used for tightly coupled application components. Perhaps your application has a web tier that is responsible for serving static content as well as generating some dynamic HTML pages. To generate these dynamic pages, it has to fetch information from another application component that has an HTTP API.

One potential architecture for such an application is to deploy the web tier and the API tier together as a pair and use local networking so the web tier can fetch information from the API tier.

If you are running these two components as two processes on a single EC2 instance, the web tier application process could communicate with the API process on the same machine by using the local loopback interface. The local loopback interface has a special IP address of 127.0.0.1 and hostname of localhost.

A networking request to this local interface bypasses the network interface hardware; the operating system simply routes network calls from one process to the other directly. This gives the web tier a fast and efficient way to fetch information from the API tier with almost no networking latency.

In Fargate, when you launch multiple containers as part of a single task, they can also communicate with each other over the local loopback interface. Fargate uses a special container networking mode called awsvpc, which gives all the containers in a task a shared elastic network interface to use for communication.

If you specify a port mapping for each container in the task, then the containers can communicate with each other on that port. For example the following task definition could be used to deploy the web tier and the API tier:

  "family": "myapp"
  "containerDefinitions": [
      "name": "web",
      "image": "my web image url",
      "portMappings": [
          "containerPort": 80
      "memory": 500,
      "cpu": 10,
      "esssential": true
      "name": "api",
      "image": "my api image url",
      "portMappings": [
          "containerPort": 8080
      "cpu": 10,
      "memory": 500,
      "essential": true

ECS, with Fargate, is able to take this definition and launch two containers, each of which is bound to a specific static port on the elastic network interface for the task.
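For illustration, a task definition of this shape could be registered for Fargate with the AWS SDK for Python. This is only a sketch: the task-level cpu/memory values, the execution role ARN, and the image URLs are placeholders, and Fargate additionally requires the awsvpc network mode and task-level sizing.

    import boto3

    ecs = boto3.client('ecs')

    # Register the web + api task definition for Fargate (values are placeholders).
    response = ecs.register_task_definition(
        family='myapp',
        requiresCompatibilities=['FARGATE'],
        networkMode='awsvpc',                 # required for Fargate
        cpu='256',                            # task-level CPU units, required on Fargate
        memory='1024',                        # task-level memory in MiB, required on Fargate
        executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',  # placeholder
        containerDefinitions=[
            {'name': 'web', 'image': 'my web image url', 'essential': True,
             'portMappings': [{'containerPort': 80}]},
            {'name': 'api', 'image': 'my api image url', 'essential': True,
             'portMappings': [{'containerPort': 8080}]},
        ])
    print(response['taskDefinition']['taskDefinitionArn'])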

Because each Fargate task has its own isolated networking stack, there is no need for dynamic ports to avoid port conflicts between different tasks as in other networking modes. The static ports make it easy for containers to communicate with each other. For example, the web container makes a request to the API container using its well-known static port:
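As a minimal sketch (assuming the API tier exposes an HTTP endpoint such as /data on its static port 8080), the web container's request is just an ordinary call to localhost:

    import urllib.request

    # From inside the web container: reach the api container over the task's loopback interface.
    # The /data path is a hypothetical endpoint on the API tier.
    with urllib.request.urlopen("http://localhost:8080/data") as response:
        print(response.read().decode())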


This sends a local network request, which goes directly from one container to the other over the local loopback interface without traversing the network. This deployment strategy allows for fast and efficient communication between two tightly coupled containers. But most application architectures require more than just internal local networking.

External Networking

External networking is used for network communications that go outside the task to other servers that are not part of the task, or network communications that originate from other hosts on the internet and are directed to the task.

Configuring external networking for a task is done by modifying the settings of the VPC in which you launch your tasks. A VPC is a fundamental tool in AWS for controlling the networking capabilities of resources that you launch on your account.

When setting up a VPC, you create one or more subnets, which are logical groups that your resources can be placed into. Each subnet has an Availability Zone and its own route table, which defines rules about how network traffic operates for that subnet. There are two main types of subnets: public and private.

Public subnets

A public subnet is a subnet that has an associated internet gateway. Fargate tasks in that subnet are assigned both private and public IP addresses:

A browser or other client on the internet can send network traffic to the task via the internet gateway using its public IP address. The tasks can also send network traffic to other servers on the internet because the route table can route traffic out via the internet gateway.

If tasks want to communicate directly with each other, they can use each other’s private IP address to send traffic directly from one to the other so that it stays inside the subnet without going out to the internet gateway and back in.

Private subnets

A private subnet does not have direct internet access. The Fargate tasks inside the subnet don’t have public IP addresses, only private IP addresses. Instead of an internet gateway, a network address translation (NAT) gateway is attached to the subnet:


There is no way for another server or client on the internet to reach your tasks directly, because they don’t even have an address or a direct route to reach them. This is a great way to add another layer of protection for internal tasks that handle sensitive data. Those tasks are protected and can’t receive any inbound traffic at all.

In this configuration, the tasks can still communicate to other servers on the internet via the NAT gateway. They would appear to have the IP address of the NAT gateway to the recipient of the communication. If you run a Fargate task in a private subnet, you must add this NAT gateway. Otherwise, Fargate can’t make a network request to Amazon ECR to download the container image, or communicate with Amazon CloudWatch to store container metrics.
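To make the subnet choice concrete, here is a rough sketch of launching the task with the AWS SDK for Python. The cluster name, subnet, and security group IDs are placeholders; assignPublicIp is what toggles between the public-subnet and private-subnet setups described above.

    import boto3

    ecs = boto3.client('ecs')

    # Run the task in a specific subnet. In a public subnet, ENABLED gives the task a public IP;
    # in a private subnet behind a NAT gateway, use DISABLED.
    ecs.run_task(
        cluster='my-cluster',                                   # placeholder
        launchType='FARGATE',
        taskDefinition='myapp',
        count=1,
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-0123456789abcdef0'],        # placeholder subnet ID
                'securityGroups': ['sg-0123456789abcdef0'],     # placeholder security group ID
                'assignPublicIp': 'ENABLED'                     # 'DISABLED' in a private subnet
            }
        })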

Load balancers

If you are running a container that is hosting internet content in a private subnet, you need a way for traffic from the public to reach the container. This is generally accomplished by using a load balancer such as an Application Load Balancer or a Network Load Balancer.

ECS integrates tightly with AWS load balancers by automatically configuring a service-linked load balancer to send network traffic to containers that are part of the service. When each task starts, the IP address of its elastic network interface is added to the load balancer’s configuration. When the task is being shut down, network traffic is safely drained from the task before removal from the load balancer.
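As a rough sketch of that wiring with the AWS SDK for Python (the cluster, subnet, security group, and target group ARN are placeholders; the target group would need the ip target type so that task ENI addresses can be registered):

    import boto3

    ecs = boto3.client('ecs')

    # Create a Fargate service whose task ENI addresses are registered with a load balancer target group.
    ecs.create_service(
        cluster='my-cluster',                                   # placeholder
        serviceName='web',
        taskDefinition='myapp',
        desiredCount=2,
        launchType='FARGATE',
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-0123456789abcdef0'],        # private subnet(s) for the tasks
                'securityGroups': ['sg-0123456789abcdef0'],
                'assignPublicIp': 'DISABLED'
            }
        },
        loadBalancers=[{
            'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef',  # placeholder
            'containerName': 'web',
            'containerPort': 80
        }])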

To get internet traffic to containers using a load balancer, the load balancer is placed into a public subnet. ECS configures the load balancer to forward traffic to the container tasks in the private subnet:

This configuration allows your tasks in Fargate to be safely isolated from the rest of the internet. They can still initiate network communication with external resources via the NAT gateway, and still receive traffic from the public via the Application Load Balancer that is in the public subnet.

Another potential use case for a load balancer is for internal communication from one service to another service within the private subnet. This is typically used for a microservice deployment, in which one service such as an internet user account service needs to communicate with an internal service such as a password service. Obviously, it is undesirable for the password service to be directly accessible on the internet, so using an internet load balancer would be a major security vulnerability. Instead, this can be accomplished by hosting an internal load balancer within the private subnet:

With this approach, one container can distribute requests across an Auto Scaling group of other private containers via the internal load balancer, ensuring that the network traffic stays safely protected within the private subnet.

Best Practices for Fargate Networking

Determine whether you should use local task networking

Local task networking is ideal for communicating between containers that are tightly coupled and require maximum networking performance between them. However, when you deploy one or more containers as part of the same task, they are always deployed together, which removes the ability to scale different types of workload up and down independently.

In the example of the application with a web tier and an API tier, it may be the case that powering the application requires only two web tier containers but 10 API tier containers. If local container networking is used between these two container types, then an extra eight unnecessary web tier containers would end up being run instead of allowing the two different services to scale independently.

A better approach would be to deploy the two containers as two different services, each with its own load balancer. This allows clients to communicate with the two web containers via the web service’s load balancer. The web service could distribute requests across the ten backend API containers via the API service’s load balancer.

Run internet tasks that require internet access in a public subnet

If you have tasks that require internet access and a lot of bandwidth for communication with other services, it is best to run them in a public subnet. Give them public IP addresses so that each task can communicate with other services directly.

If you run these tasks in a private subnet, then all their outbound traffic has to go through an NAT gateway. AWS NAT gateways support up to 10 Gbps of burst bandwidth. If your bandwidth requirements go over this, then all task networking starts to get throttled. To avoid this, you could distribute the tasks across multiple private subnets, each with their own NAT gateway. It can be easier to just place the tasks into a public subnet, if possible.

Avoid using a public subnet or public IP addresses for private, internal tasks

If you are running a service that handles private, internal information, you should not put it into a public subnet or use a public IP address. For example, imagine that you have one task, which is an API gateway for authentication and access control. You have another background worker task that handles sensitive information.

The intended access pattern is that requests from the public go to the API gateway, which then proxies request to the background task only if the request is from an authenticated user. If the background task is in a public subnet and has a public IP address, then it could be possible for an attacker to bypass the API gateway entirely. They could communicate directly to the background task using its public IP address, without being authenticated.


Fargate gives you a way to run containerized tasks directly without managing any EC2 instances, but you still have full control over how you want networking to work. You can set up containers to talk to each other over the local network interface for maximum speed and efficiency. For running workloads that require privacy and security, use a private subnet with public internet access locked down. Or, for simplicity with an internet workload, you can just use a public subnet and give your containers a public IP address.

To deploy one of these Fargate task networking approaches, check out some sample CloudFormation templates showing how to configure the VPC, subnets, and load balancers.

If you have questions or suggestions, please comment below.