In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA member and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).
Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:
Life in space: making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
Life on Earth: making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.
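To make the themes concrete, here is a minimal sketch of the kind of "Life in space" logging code teams write for the Sense HAT on Astro Pi Vis (Ed), assuming the standard sense_hat Python library; the file name, sample count, and ten-second interval are illustrative choices, not challenge requirements:

    from sense_hat import SenseHat
    from datetime import datetime
    import csv
    import time

    sense = SenseHat()

    # Log the cabin environment to a CSV file, one reading every ten seconds.
    with open("life_in_space.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["utc_time", "temp_C", "humidity_pct", "pressure_mbar"])
        for _ in range(60):
            writer.writerow([
                datetime.utcnow().isoformat(),
                round(sense.get_temperature(), 2),
                round(sense.get_humidity(), 2),
                round(sense.get_pressure(), 2),
            ])
            time.sleep(10)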
ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Centre in Cologne, has a message for all Mission Space Lab participants.
Flight status
We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!
But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive back their experimental data. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final reports, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.
Belgium
Flight status achieved:
Team De Vesten, Campus De Vesten, Antwerpen
Ursa Major, CoderDojo Belgium, West-Vlaanderen
Special operations STEM, Sint-Claracollege, Antwerpen
Canada
Flight status achieved:
Let It Grow, Branksome Hall, Toronto
The Dark Side of Light, Branksome Hall, Toronto
Genie On The ISS, Branksome Hall, Toronto
Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
The Broadviewnauts, Broadview, Ottawa
Czech Republic
Flight status achieved:
BLEK, Střední Odborná Škola Blatná, Strakonice
Denmark
Flight status achieved:
2y Infotek, Nærum Gymnasium, Nærum
Equation Quotation, Allerød Gymnasium, Lillerød
Team Weather Watchers, Allerød Gymnasium, Allerød
Space Gardners, Nærum Gymnasium, Nærum
Finland
Flight status achieved:
Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää
France
Flight status achieved:
INC2, Lycée Raoul Follereau, Bourgogne
Space Project SP4, Lycée Saint-Paul IV, Reunion Island
Dresseurs2Python, Collège Albert Camus, Essonne
Lazos, Lycée Aux Lazaristes, Rhône
The space nerds, Lycée Saint André Colmar, Alsace
Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie
The widow of a well-known Hungarian poet is seeking damages through the courts. She claims that a publication harmed her reputation: according to its headline, the deceased’s name is being trampled on because the widow is seeking publicity.
The judgment is the latest in a series of decisions addressing the balancing of rights (Article 8 v Article 10) and the principles of responsible journalism.
From the judgment:
28. The Court has already had occasion to lay down the relevant principles which must guide its assessment in this area. It has established criteria for balancing the competing rights (see Von Hannover (no. 2) and Axel Springer AG, §§ 90-95). The relevant criteria are: the contribution to a debate of public interest, the degree of notoriety of the person affected, the subject of the news report, the prior conduct of the person concerned, the content, form and consequences of the publication and, where appropriate, the circumstances in which the information or the photograph was obtained.
29. Where the balancing exercise has been undertaken by the national authorities in conformity with the criteria laid down in the Court’s case-law, the Court would require strong reasons to substitute its own view for that of the domestic courts.
33. […] although journalists are entitled to a degree of exaggeration or even provocation, they nevertheless have “duties and responsibilities” and must act in good faith and in accordance with the ethics of journalism (see Satakunnan Markkinapörssi Oy and Satamedia Oy v. Finland and Pentikäinen v. Finland, § 90).
The Court finds that the information in the publication had been provided voluntarily, that neither the article nor its headline contained unfounded allegations, and that the headline fell within the bounds of editorial choice. In the light of the extensive media coverage of the applicant and her former husband, which the two of them had themselves generated, the Court agrees with the domestic courts’ finding that the contested publication was not damaging to the applicant’s honour and reputation.
We’re growing at a pretty rapid clip, and as we add more customers, we need people to help keep all of our hard drives spinning. Along with support, the other department that grows linearly with the number of customers that join us is the operations team, and they’ve just added a new member, Rich! He joins us as a Network Systems Administrator! Let’s take a moment to learn more about Rich, shall we?
What is your Backblaze Title? Network Systems Administrator
Where are you originally from? The Upper Peninsula of Michigan. Da UP, eh!
What attracted you to Backblaze? The fact that it is a small tech company packed with highly intelligent people and a place where I can also be friends with my peers. I am also huge on cloud storage and backing up your past!
What do you expect to learn while being at Backblaze? I look forward to expanding my Networking skills and System Administration skills while helping build the best Cloud Storage and Backup Company there is!
Where else have you worked? I first started working in Data Centers at Viawest. I was previously an Infrastructure Engineer at Twitter and a Production Engineer at Groupon.
Where did you go to school? I started at Finlandia University in Northern Michigan, carried on to Northwest Florida State, and graduated with my A.S. from North Lake College in Dallas, TX. I then completed my B.S. degree online at WGU.
What’s your dream job? Sr. Network Engineer
Favorite place you’ve traveled? I have traveled around a bit in my life. I really liked Dublin, Ireland but I have to say favorite has to be Puerto Vallarta, Mexico! Which is actually where I am getting married in 2019!
Favorite hobby? Water is my life. I like to wakeboard and wakesurf. I also enjoy biking, hunting, fishing, camping, and anything that has to do with the great outdoors!
Of what achievement are you most proud? I’m proud of moving up in my career as quickly as I have been. I am also very proud of being able to wakesurf behind a boat without a rope! Lol!
Star Trek or Star Wars? Star Trek! I grew up on it!
Coke or Pepsi? H2O 😀
Favorite food? Mexican Food and Pizza!
Why do you like certain things? Hmm…. because certain things make other certain things particularly certain!
Anything else you’d like to tell us? Nope 😀
Who can say no to high-quality H2O? Welcome to the team, Rich!
Contributed by Tiffany Jernigan, Developer Advocate for Amazon ECS
Get ready for takeoff!
We made sure that this year’s re:Invent is chock-full of containers: there are over 40 sessions! New to containers? No problem: we have several introductory sessions to help you dip your toes in. Been using containers for years and know the ins and outs? Don’t miss our technical deep dives and interactive chalk talks led by container experts.
If you can’t make it to Las Vegas, you can catch the keynotes and session recaps from our livestream and on Twitch.
Session types
Not everyone learns the same way, so we have multiple types of breakout content:
Birds of a Feather: An interactive discussion with industry leaders about containers on AWS.
Breakout sessions: 60-minute presentations about building on AWS. Sessions are delivered by both AWS experts and customers and span all content levels.
Workshops: 2.5-hour, hands-on sessions that teach how to build on AWS. AWS credits are provided. Bring a laptop, and have an active AWS account.
Chalk Talks: 1-hour, highly interactive sessions with a smaller audience. They begin with a short lecture delivered by an AWS expert, followed by a discussion with the audience.
Session levels
Whether you’re new to containers or you’ve been using them for years, you’ll find useful information at every level.
Introductory Sessions are focused on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.
Advanced Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.
Expert Sessions are for attendees who are deeply familiar with the topic, have implemented a solution on their own already, and are comfortable with how the technology works across multiple services, architectures, and implementations.
Session locations
All container sessions are located in the Aria Resort.
MONDAY 11/27
Breakout sessions
Level 200 (Introductory)
CON202 – Getting Started with Docker and Amazon ECS By packaging software into standardized units, Docker gives code everything it needs to run, ensuring consistency from your laptop all the way into production. But once you have your code ready to ship, how do you run and scale it in the cloud? In this session, you become comfortable running containerized services in production using Amazon ECS. We cover container deployment, cluster management, service auto-scaling, service discovery, secrets management, logging, monitoring, security, and other core concepts. We also cover integrated AWS services and supplementary services that you can take advantage of to run and scale container-based services in the cloud.
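To give a flavour of those core concepts, here is a minimal, hypothetical boto3 sketch of a container deployment on Amazon ECS; the cluster name, task family, and image are illustrative assumptions, not part of the session materials:

    import boto3

    ecs = boto3.client("ecs")

    # Describe the container to run: image, memory reservation, port mapping.
    ecs.register_task_definition(
        family="hello-web",  # hypothetical task family
        containerDefinitions=[{
            "name": "web",
            "image": "nginx:latest",
            "memory": 128,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        }],
    )

    # Run two copies of the task as a long-lived, self-healing service.
    ecs.create_service(
        cluster="default",
        serviceName="hello-web",
        taskDefinition="hello-web",
        desiredCount=2,
    )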
Chalk talks
Level 200 (Introductory)
CON211 – Reducing your Compute Footprint with Containers and Amazon ECS Tomas Riha, platform architect for Volvo, shows how Volvo transitioned its WirelessCar platform from using Amazon EC2 virtual machines to containers running on Amazon ECS, significantly reducing cost. Tomas dives deep into the architecture that Volvo used to achieve the migration in under four months, including Amazon ECS, Amazon ECR, Elastic Load Balancing, and AWS CloudFormation.
CON212 – Anomaly Detection Using Amazon ECS, AWS Lambda, and Amazon EMR Learn about the architecture that Cisco CloudLock uses to enable automated security and compliance checks throughout the entire development lifecycle, from the first line of code through runtime. It includes integration with IAM roles, Amazon VPC, and AWS KMS.
Level 400 (Expert)
CON410 – Advanced CICD with Amazon ECS Control Plane Mohit Gupta, product and engineering lead for Clever, demonstrates how to extend the Amazon ECS control plane to optimize management of container deployments and how the control plane can be broadly applied to take advantage of new AWS services. This includes ark, an AWS CLI-based deployment tool for Amazon ECS; Dapple, a Slack-based automation system for deployments and notifications; and Kayvee, log and event routing libraries based on Amazon Kinesis.
Workshops
Level 200 (Introductory)
CON209 – Interstella 8888: Learn How to Use Docker on AWS Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. Join this workshop to get hands-on experience with Docker as you containerize Interstella 8888’s aging monolithic application and deploy it using Amazon ECS.
CON213 – Hands-on Deployment of Kubernetes on AWS In this workshop, attendees get hands-on experience using Kubernetes and Kops (Kubernetes Operations), as described in our recent blog post. Attendees learn how to provision a cluster, assign role-based permissions and security, and launch a container. If you’re interested in learning best practices for running Kubernetes on AWS, don’t miss this workshop.
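As a preview of the "launch a container" step, here is a minimal sketch using the official Kubernetes Python client against a cluster you have already provisioned (for example with kops); this is not workshop material, and the image and names are illustrative:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig written during cluster setup.
    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="nginx:latest",
        ports=[client.V1ContainerPort(container_port=80)],
    )

    # A Deployment keeps two replicas of the container running.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)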
TUESDAY 11/28
Breakout Sessions
Level 200 (Introductory)
CON206 – Docker on AWS In this session, Docker Technical Staff Member Patrick Chanezon discusses how Finnish Rail, the national train system for Finland, is using Docker on Amazon Web Services to modernize its customer-facing applications, from ticket sales to reservations. Patrick also shares the state of Docker development and adoption on AWS, including the opportunities and implications of efforts such as Project Moby and Docker EE, and how developers can use and contribute to Docker projects.
CON208 – Building Microservices on AWS Increasingly, organizations are turning to microservices to help them empower autonomous teams, letting them innovate and ship software faster than ever before. But implementing a microservices architecture comes with a number of new challenges that need to be dealt with. Chief among these is finding an appropriate platform to help manage a growing number of independently deployable services. In this session, Sam Newman, author of Building Microservices and a renowned expert in microservices strategy, discusses strategies for building scalable and robust microservices architectures. He also tells you how to choose the right platform for building microservices, and about common challenges and mistakes organizations make when they move to microservices architectures.
Level 300 (Advanced)
CON302 – Building a CICD Pipeline for Containers on AWS Containers can make it easier to scale applications in the cloud, but how do you set up your CICD workflow to automatically test and deploy code to containerized apps? In this session, we explore how developers can build effective CICD workflows to manage their containerized code deployments on AWS.
Ajit Zadgaonkar, Director of Engineering and Operations at Edmunds, walks through best practices for CICD architectures used by his team to deploy containers. We also dive deep into topics such as how to create an accessible CICD platform and how to architect for safe blue/green deployments.
CON307 – Building Effective Container Images Sick of getting paged at 2am and wondering “where did all my disk space go?” New Docker users often start with a stock image in order to get up and running quickly, but this can cause problems as your application matures and scales. Creating efficient container images is important to maximize resources and deliver critical security benefits.
In this session, AWS Sr. Technical Evangelist Abby Fuller covers how to create effective images to run containers in production. This includes an in-depth discussion of how Docker image layers work, things you should think about when creating your images, working with Amazon ECR, and mise en place for installing dependencies. Prakash Janakiraman, Co-Founder and Chief Architect at Nextdoor, discusses high-level and language-specific best practices for building images and how Nextdoor uses these practices to successfully scale its containerized services with a small team.
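To make the layers discussion concrete, here is a minimal sketch using the Docker SDK for Python to build an image and inspect how much each layer adds, often the quickest way to spot bloat; the tag and build path are illustrative:

    import docker

    client = docker.from_env()

    # Build an image from the Dockerfile in the current directory.
    image, _build_logs = client.images.build(path=".", tag="myapp:latest")

    # Walk the layer history: each entry shows the instruction that created
    # the layer and how many bytes it added to the final image.
    for layer in image.history():
        size_mb = layer.get("Size", 0) / (1024 * 1024)
        print(f"{size_mb:8.1f} MB  {layer.get('CreatedBy', '')[:80]}")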
CON309 – Containerized Machine Learning on AWS Image recognition is a field of deep learning that uses neural networks to recognize the subject and traits of a given image. In Japan, Cookpad uses Amazon ECS to run an image recognition platform on clusters of GPU-enabled EC2 instances. In this session, hear from Cookpad about the challenges they faced building and scaling this advanced, user-friendly service to ensure high availability and low latency for tens of millions of users.
CON320 – Monitoring, Logging, and Debugging for Containerized Services As containers become more embedded in the platform, debugging tools, traces, and logs become increasingly important. Nare Hayrapetyan, Senior Software Engineer, and Calvin French-Owen, Senior Technical Officer at Segment, discuss the principles of monitoring and debugging containers and the tools Segment has implemented and built for logging, alerting, metric collection, and debugging of containerized services running on Amazon ECS.
Chalk Talks
Level 300 (Advanced)
CON314 – Automating Zero-Downtime Production Cluster Upgrades for Amazon ECS Containers make it easy to deploy new code into production to update the functionality of a service, but what happens when you need to update the Amazon EC2 compute instances that your containers are running on? In this talk, we deep dive into how to upgrade the Amazon EC2 infrastructure underlying a live production Amazon ECS cluster without affecting service availability. Matt Callanan, Engineering Manager at Expedia, walks through Expedia’s “PRISM” project, which safely relocates hundreds of tasks onto new Amazon EC2 instances with zero downtime to applications.
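The ECS primitive behind this kind of upgrade is container instance draining, which stops new task placement on an instance and relocates its service tasks before the instance is retired. A minimal, hypothetical boto3 sketch (cluster name and instance ARN are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Mark the old instance as DRAINING: ECS stops placing new tasks on it
    # and migrates service tasks onto the remaining ACTIVE instances.
    ecs.update_container_instances_state(
        cluster="production",  # hypothetical cluster name
        containerInstances=["arn:aws:ecs:us-east-1:123456789012:container-instance/example"],
        status="DRAINING",
    )

    # Once describe_container_instances reports runningTasksCount == 0 for
    # the drained instance, the underlying EC2 instance can be terminated.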
CON322 – Maximizing Amazon ECS for Large-Scale Workloads Head of Mobfox DevOps, David Spitzer, shows how Mobfox used Docker and Amazon ECS to scale the Mobfox services and development teams to achieve low-latency networking and automatic scaling. This session covers Mobfox’s ecosystem architecture, comparing 2015 with today, the challenges Mobfox faced in growing its platform, and how it overcame them.
CON323 – Microservices Architectures for the Enterprise Salva Jung, Principal Engineer for Samsung Mobile, shares how Samsung Connect is architected as microservices running on Amazon ECS to securely, stably, and efficiently handle requests from millions of mobile and IoT devices around the world.
CON324 – Windows Containers on Amazon ECS Docker containers are commonly regarded as powerful and portable runtime environments for Linux code, but Docker also offers API and toolchain support for running Windows Server in containers. In this talk, we discuss the various options for running Windows-based applications in containers on AWS.
CON326 – Remote Sensing and Image Processing on AWS Learn how Encirca services by DuPont Pioneer uses Amazon ECS powered by GPU instances and Amazon EC2 Spot Instances to run proprietary image-processing algorithms against satellite imagery. Mark Lanning and Ethan Harstad, engineers at DuPont Pioneer, show how this architecture has allowed them to process satellite imagery multiple times a day for each agricultural field in the United States in order to identify crop health changes.
Workshops
Level 300 (Advanced)
CON317 – Advanced Container Management at Catsndogs.lol Catsndogs.lol is a (fictional) company that needs help deploying and scaling its container-based application. During this workshop, attendees join the new DevOps team at CatsnDogs.lol and help the company manage its applications using Amazon ECS and release new features to make customers happier than ever. Attendees get hands-on with service and container-instance auto-scaling, Spot Fleet integration, container placement strategies, service discovery, secrets management with AWS Systems Manager Parameter Store (see the sketch after the requirements list below), time-based and event-based scheduling, and automated deployment pipelines. If you are a developer interested in learning more about how Amazon ECS can accelerate your application development and deployment workflows, or if you are a systems administrator or DevOps person interested in understanding how Amazon ECS can simplify the operational model associated with running containers at scale, then this workshop is for you. You should have basic familiarity with Amazon ECS, Amazon EC2, and IAM.
Additional requirements:
The AWS CLI or AWS Tools for PowerShell installed
An AWS account with administrative permissions (including the ability to create IAM roles and policies) created at least 24 hours in advance.
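As a preview of the secrets-management topic, here is a minimal, hypothetical sketch of storing and reading a secret with the boto3 SSM Parameter Store client; the parameter name and value are illustrative:

    import boto3

    ssm = boto3.client("ssm")

    # Store a secret, encrypted with the account's default KMS key.
    ssm.put_parameter(
        Name="/catsndogs/db_password",  # hypothetical parameter name
        Value="correct-horse-battery-staple",
        Type="SecureString",
        Overwrite=True,
    )

    # A task reads (and decrypts) the secret at startup.
    password = ssm.get_parameter(
        Name="/catsndogs/db_password",
        WithDecryption=True,
    )["Parameter"]["Value"]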
WEDNESDAY 11/29
Birds of a Feather (BoF)
CON01 – Birds of a Feather: Containers and Open Source at AWS Cloud native architectures take advantage of on-demand delivery, global deployment, elasticity, and higher-level services to enable developer productivity and business agility. Open source is a core part of making cloud native possible for everyone. In this session, we welcome thought leaders from the CNCF, Docker, and AWS to discuss the cloud’s direction for growth and enablement of the open source community. We also discuss how AWS is integrating open source code into its container services and its contributions to open source projects.
Breakout Sessions
Level 300 (Advanced)
CON308 – Mastering Kubernetes on AWS Much progress has been made on bootstrapping a cluster since Kubernetes’ first commit; it now takes only a matter of minutes to go from zero to a running cluster on Amazon Web Services. However, evolving a simple Kubernetes architecture to be ready for production in a large enterprise can quickly become overwhelming, given the options for configuration and customization.
In this session, Arun Gupta, Open Source Strategist for AWS, and Raffaele Di Fazio, software engineer at leading European fashion platform Zalando, show common practices for running Kubernetes on AWS and share insights from experience operating tens of Kubernetes clusters in production on AWS. We cover options and recommendations on how to install and manage clusters, configure high availability, perform rolling upgrades, and handle disaster recovery, as well as continuous integration and deployment of applications, logging, and security.
CON310 – Moving to Containers: Building with Docker and Amazon ECS If you’ve ever considered moving part of your application stack to containers, don’t miss this session. We cover best practices for containerizing your code, implementing automated service scaling and monitoring, and setting up automated CI/CD pipelines with fail-safe deployments. Manjeeva Silva and Thilina Gunasinghe show how McDonald’s implemented its home delivery platform in four months using Docker containers and Amazon ECS to serve tens of thousands of customers.
Level 400 (Expert)
CON402 – Advanced Patterns in Microservices Implementation with Amazon ECS Scaling a microservice-based infrastructure can be challenging in terms of both technical implementation and developer workflow. In this talk, AWS Solutions Architect Pierre Steckmeyer is joined by Will McCutchen, Architect at BuzzFeed, to discuss Amazon ECS as a platform for building a robust infrastructure for microservices. We look at the key attributes of microservice architectures and how Amazon ECS supports these requirements in production, from configuration to sophisticated workload scheduling to networking capabilities to resource optimization. We also examine what it takes to build an end-to-end platform on top of the wider AWS ecosystem, and what it’s like to migrate a large engineering organization from a monolithic approach to microservices.
CON404 – Deep Dive into Container Scheduling with Amazon ECS As your application’s infrastructure grows and scales, well-managed container scheduling is critical to ensuring high availability and resource optimization. In this session, we deep dive into the challenges and opportunities around container scheduling, as well as the different tools available within Amazon ECS and AWS to carry out efficient container scheduling. We discuss patterns for container scheduling available with Amazon ECS, the Blox scheduling framework, and how you can customize and integrate third-party scheduler frameworks to manage container scheduling on Amazon ECS.
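For context, Amazon ECS exposes several of these scheduling patterns directly in its task placement API. A hypothetical boto3 snippet that spreads tasks across Availability Zones and then binpacks on memory (cluster and task definition names are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    ecs.run_task(
        cluster="production",       # hypothetical cluster
        taskDefinition="web:3",     # hypothetical task definition revision
        count=4,
        # Spread tasks across AZs for availability, then binpack on memory
        # within each AZ to minimize the number of instances used.
        placementStrategy=[
            {"type": "spread", "field": "attribute:ecs.availability-zone"},
            {"type": "binpack", "field": "memory"},
        ],
    )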
Chalk Talks
Level 300 (Advanced)
CON312 – Building a Selenium Fleet on the Cheap with Amazon ECS with Spot Fleet Roberto Rivera and Matthew Wedgwood, engineers at RetailMeNot, give a practical overview of setting up a fleet of Selenium nodes running on Amazon ECS with Spot Fleet. They discuss the challenges of running Selenium with high availability at minimum cost, using Amazon ECS container introspection to connect the Selenium Hub with its nodes.
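The container introspection mentioned here refers to the ECS agent’s local metadata endpoint on each container instance. A minimal sketch of querying it; the port and paths are the agent’s documented defaults, and the printed fields are for illustration:

    import requests

    # The ECS agent serves instance and task metadata on localhost:51678.
    metadata = requests.get("http://localhost:51678/v1/metadata").json()
    tasks = requests.get("http://localhost:51678/v1/tasks").json()

    print("Cluster:", metadata.get("Cluster"))

    # For example, discover the containers running on this instance so that
    # Selenium nodes can be registered with the Hub.
    for task in tasks.get("Tasks", []):
        for container in task.get("Containers", []):
            print(task.get("Family"), container.get("Name"))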
CON315 – Virtually There: Building a Render Farm with Amazon ECS Learn how 8i Corp scales its multi-tenanted, volumetric render farm up to thousands of instances using AWS, Docker, and an API-driven infrastructure. This render farm enables them to turn the video footage from an array of synchronized cameras into a photo-realistic hologram capable of playback on a range of devices, from mobile phones to high-end head mounted displays. Join Owen Evans, VP of Engineering for 8i, as they dive deep into how 8i’s rendering infrastructure is built and maintained by just a handful of people and powered by Amazon ECS.
CON325 – Developing Microservices – from Your Laptop to the Cloud Wesley Chow, Staff Engineer at AdRoll, shows how his team extends Amazon ECS by enabling local development capabilities. Hologram, AdRoll’s local development program, brings the capabilities of the Amazon EC2 instance metadata service to non-EC2 hosts, so that developers can run the same software on local machines with the same credentials source as in production.
CON327 – Patterns and Considerations for Service Discovery Roven Drabo, head of cloud operations at Kaplan Test Prep, illustrates Kaplan’s complete container automation solution using Amazon ECS along with how his team uses NGINX and HashiCorp Consul to provide an automated approach to service discovery and container provisioning.
CON328 – Building a Development Platform on Amazon ECS Quinton Anderson, Head of Engineering for Commonwealth Bank of Australia, walks through how they migrated their internal development and deployment platform from Mesos/Marathon to Amazon ECS. The platform uses a custom DSL to abstract a layered application architecture, in a way that makes it easy to plug in new implementations or replace existing ones at each layer of the stack.
Workshops
Level 300 (Advanced)
CON318 – Interstella 8888: Monolith to Microservices with Amazon ECS Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. Join this workshop to get hands-on experience deploying Docker containers as you break Interstella 8888’s aging monolithic application into containerized microservices. Using Amazon ECS and an Application Load Balancer, you create API-based microservices and deploy them leveraging integrations with other AWS services.
CON332 – Build a Java Spring Application on Amazon ECS This workshop teaches you how to lift and shift existing Spring and Spring Cloud applications onto the AWS platform. Learn how to build a Spring application container, understand bootstrap secrets, push container images to Amazon ECR, and deploy the application to Amazon ECS. Then, learn how to configure the deployment for production.
THURSDAY 11/30
Breakout Sessions
Level 200 (Introductory)
CON201 – Containers on AWS – State of the Union Just over four years after the first public release of Docker, and three years to the day after the launch of Amazon ECS, the use of containers has surged to run a significant percentage of production workloads at startups and enterprise organizations. Join Deepak Singh, General Manager of Amazon Container Services, as he covers the state of containerized application development and deployment trends, new container capabilities on AWS that are available now, options for running containerized applications on AWS, and how AWS customers successfully run container workloads in production.
Level 300 (Advanced)
CON304 – Batch Processing with Containers on AWS Batch processing is useful for analyzing large amounts of data, but configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. In this talk, we show how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We also discuss AWS Batch, our fully managed batch-processing service. You also hear from GoPro and HERE about how they use AWS to run batch processing jobs at scale, including best practices for ensuring efficient scheduling, fine-grained monitoring, compute resource automatic scaling, and security for your batch jobs.
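For a sense of how the managed service fits in, here is a minimal, hypothetical boto3 sketch of submitting a containerized job to AWS Batch; the queue, job definition, and command are placeholders:

    import boto3

    batch = boto3.client("batch")

    # Submit a containerized job to an existing queue; the job definition
    # points at a Docker image that carries out the batch workload.
    batch.submit_job(
        jobName="nightly-aggregation",       # hypothetical job name
        jobQueue="analytics-queue",          # hypothetical queue
        jobDefinition="aggregation-job:1",   # hypothetical job definition
        containerOverrides={
            "command": ["python", "aggregate.py", "--date", "2017-11-30"],
        },
    )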
Level 400 (Expert)
CON406 – Architecting Container Infrastructure for Security and Compliance While organizations gain agility and scalability when they migrate to containers and microservices, they also benefit from compliance and security, advantages that are often overlooked. In this session, Kelvin Zhu, lead software engineer at Okta, joins Mitch Beaumont, enterprise solutions architect at AWS, to discuss security best practices for containerized infrastructure. Learn how Okta built their development workflow with an emphasis on security through testing and automation. Dive deep into how containers enable automated security and compliance checks throughout the development lifecycle. Also understand best practices for implementing AWS security and secrets management services for any containerized service architecture.
Chalk Talks
Level 300 (Advanced)
CON329 – Full Software Lifecycle Management for Containers Running on Amazon ECS Learn how The Washington Post uses Amazon ECS to run Arc Publishing, a digital journalism platform that powers The Washington Post and a growing number of major media websites. Amazon ECS enabled The Washington Post to containerize their existing microservices architecture, avoiding a complete rewrite that would have delayed the platform’s launch by several years. In this session, Jason Bartz, Technical Architect at The Washington Post, discusses the platform’s architecture. He addresses the challenges of optimizing Arc Publishing’s workload, and managing the application lifecycle to support 2,000 containers running on more than 50 Amazon ECS clusters.
CON330 – Running Containerized HIPAA Workloads on AWS Nihar Pasala, Engineer at Aetion, discusses the Aetion Evidence Platform, a system for generating the real-world evidence used by healthcare decision makers to implement value-based care. This session discusses the architecture Aetion uses to run HIPAA workloads using containers on Amazon ECS, best practices, and learnings.
Level 400 (Expert)
CON408 – Building a Machine Learning Platform Using Containers on AWS DeepLearni.ng develops and implements machine learning models for complex enterprise applications. In this session, Thomas Rogers, Engineer for DeepLearni.ng, discusses how they worked with Scotiabank to leverage Amazon ECS, Amazon ECR, Docker, GPU-accelerated Amazon EC2 instances, and TensorFlow to develop a retail risk model that helps manage payment collections for millions of Canadian credit card customers.
Workshops
Level 300 (Advanced)
CON319 – Interstella 8888: CICD for Containers on AWS Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. Join this workshop to learn how to set up a CI/CD pipeline for containerized microservices. You get hands-on experience deploying Docker container images using Amazon ECS, AWS CloudFormation, AWS CodeBuild, and AWS CodePipeline, automating everything from code check-in to production.
FRIDAY 12/1
Breakout Sessions
Level 400 (Expert)
CON405 – Moving to Amazon ECS – the Not-So-Obvious Benefits If you ask 10 teams why they migrated to containers, you will likely get answers like ‘developer productivity’, ‘cost reduction’, and ‘faster scaling’. But teams often find there are several other ‘hidden’ benefits to using containers for their services. In this talk, Franziska Schmidt, Platform Engineer at Mapbox, and Yaniv Donenfeld from AWS discuss the obvious and not-so-obvious benefits of moving to a containerized architecture. These include using Docker and Amazon ECS to achieve shared libraries for dev teams, separating private infrastructure from shareable code, and making it easier for non-ops engineers to run services.
Chalk Talks
Level 300 (Advanced)
CON331 – Deploying a Regulated Payments Application on Amazon ECS Travelex discusses how they built an FCA-compliant international payments service using a microservices architecture on AWS. This chalk talk covers the challenges of designing and operating an Amazon ECS-based PaaS in a regulated environment using a DevOps model.
Workshops
Level 400 (Expert)
CON407 – Interstella 8888: Advanced Microservice Operations Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. In this workshop, you help Interstella 8888 build a modern microservices-based logistics system to save the company from financial ruin. We give you the hands-on experience you need to run microservices in the real world. This includes implementing advanced container scheduling and scaling to deal with variable service requests, implementing a service mesh, issue tracing with AWS X-Ray, container and instance-level logging with Amazon CloudWatch, and load testing.
Know before you go
Want to brush up on your container knowledge before re:Invent? Here are some helpful resources to get started.
Programa de revendedor aprovado agora no Brasil — our Approved Reseller programme is live in Brazil, with Anatel-approved Raspberry Pis in a rather delicious shade of blue on sale from today.
Blue Raspberry is more than just the best Jolly Rancher flavour
The challenge
The difficulty in buying our products — and the lack of Anatel certification — have been consistent points of feedback from our many Brazilian customers and followers. In much the same way that electrical products in the USA must be FCC-approved in order to be produced or sold there, products sold in Brazil must be approved by Anatel. And so we’re pleased to tell you that the Raspberry Pi finally has this approval.
Blue Raspberry
Today we’re also announcing the appointment of our first Approved Reseller in Brazil: FilipeFlop will be able to sell Raspberry Pi 3 units across the country.
A big shout-out to the team at FilipeFlop that has worked so hard with us to ensure that we’re getting the product on sale in Brazil at the right price. (They also helped us understand the various local duties and taxes which need to be paid!)
Please note: the blue colouring of the Raspberry Pi 3 sold in Brazil is the only difference between it and the standard green model. People outside Brazil will not be able to purchase the blue variant from FilipeFlop.
More Raspberry Pi Approved Resellers
Since first announcing it back in August, we have further expanded our Approved Reseller programme by adding resellers for Austria, Canada, Cyprus, Czech Republic, Denmark, Estonia, Finland, Germany, Latvia, Lithuania, Norway, Poland, Slovakia, Sweden, Switzerland, and the US. All Approved Resellers are listed on our products page, and more will follow over the next few weeks!
Make and share
If you’re based in Brazil and you’re ordering the new, blue Raspberry Pi, make sure to share your projects with us on social media. We can’t wait to see what you get up to with them!
As Backblaze continues to grow, we need to keep our web experience on point, so we put out a call for creative folks who can help us make the Backblaze experience all that it can be. We found Carlo! He’s a frontend web developer who used to work at Sea World. Let’s learn a bit more about Carlo, shall we?
What is your Backblaze Title?
Senior Frontend Developer
Where are you originally from?
I grew up in San Diego, California.
What attracted you to Backblaze?
I am excited that frontend architecture is approaching parity with the rest of the web services software development ecosystem. Most of my experience has been full stack development, but I have recently started focusing on the front end. Backblaze shares my goal of having a first class user experience using frameworks like React.
What do you expect to learn while being at Backblaze?
I’m interested in building solutions that help customers visualize and work with their data intuitively and efficiently.
Where else have you worked?
GoPro, Sungevity, and Sea World.
What’s your dream job?
Hip Hop dressage choreographer.
Favorite place you’ve traveled?
The Arctic in Northern Finland, in a train in a boat sailing the gap between Germany and Denmark, and Vieques PR.
Favorite hobby?
Sketching, writing, and dressing up my hairless dogs.
Of what achievement are you most proud?
It’s either helping release a large SOA site, or orchestrating a Morrissey cover band flash mob #squadgoals. OK, maybe one of those things didn’t happen…
Star Trek or Star Wars?
Interstellar!
Favorite food?
Mexican food.
Coke or Pepsi?
Ginger beer.
Why do you like certain things?
Things that I like bring me joy a la Marie Kondo.
Anything else you’d like to tell us?
¯\_(ツ)_/¯
Wow, hip hop dressage choreographer — that is amazing. Welcome aboard, Carlo!
Amazon Redshift makes analyzing exabyte-scale data fast, simple, and cost-effective. It delivers advanced data warehousing capabilities, including parallel execution, compressed columnar storage, and end-to-end encryption as a fully managed service, for less than $1,000/TB/year. With Amazon Redshift Spectrum, you can run SQL queries directly against exabytes of unstructured data in Amazon S3 for $5/TB scanned.
Today, we are making our Dense Compute (DC) family faster and more cost-effective with new second-generation Dense Compute (DC2) nodes at the same price as our previous generation DC1. DC2 is designed for demanding data warehousing workloads that require low latency and high throughput. DC2 features powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks.
We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.
Customer successes
Several flagship customers, ranging from fast-growing startups to large Fortune 100 companies, previewed the new DC2 node type. In their tests, DC2 provided up to twice the performance of DC1. Our preview customers saw faster ETL (extract, transform, and load) jobs, higher query throughput, better concurrency, faster reports, and a shorter path from data to insights—all at the same cost as DC1. DC2.8xlarge customers also noted that their databases used up to 30 percent less disk space due to our optimized storage format, reducing their costs.
4Cite Marketing, one of America’s fastest growing private companies, uses Amazon Redshift to analyze customer data and determine personalized product recommendations for retailers. “Amazon Redshift’s new DC2 node is giving us a 100 percent performance increase, allowing us to provide faster insights for our retailers, more cost-effectively, to drive incremental revenue,” said Jim Finnerty, 4Cite’s senior vice president of product.
BrandVerity, a Seattle-based brand protection and compliance company, provides solutions to monitor, detect, and mitigate online brand, trademark, and compliance abuse. “We saw a 70 percent performance boost with the DC2 nodes for running Redshift Spectrum queries. As a result, we can analyze far more data for our customers and deliver results much faster,” said Hyung-Joon Kim, principal software engineer at BrandVerity.
“Amazon Redshift is at the core of our operations and our marketing automation tools,” said Jarno Kartela, head of analytics and chief data scientist at DNA Plc, one of the leading Finnish telecommunications groups and Finland’s largest cable operator and pay TV provider. “We saw a 52 percent performance gain in moving to Amazon Redshift’s DC2 nodes. We can now run queries in half the time, allowing us to provide more analytics power and reduce time-to-insight for our analytics and marketing automation users.”
You can try the new node type using our getting started guide. Just choose dc2.large or dc2.8xlarge in the Amazon Redshift console:
If you have a DC1.large Amazon Redshift cluster, you can restore to a new DC2.large cluster using an existing snapshot. To migrate from DS2.xlarge, DS2.8xlarge, or DC1.8xlarge Amazon Redshift clusters, you can use the resize operation to move data to your new DC2 cluster. For more information, see Clusters and Nodes in Amazon Redshift.
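For example, the snapshot-restore path onto DC2 might look like the following boto3 sketch; the identifiers are placeholders, and the NodeType parameter on restore_from_cluster_snapshot is what lands the data on the new node type:

    import boto3

    redshift = boto3.client("redshift")

    # Restore an existing DC1 snapshot into a brand-new DC2 cluster.
    redshift.restore_from_cluster_snapshot(
        ClusterIdentifier="analytics-dc2",         # hypothetical new cluster
        SnapshotIdentifier="analytics-dc1-final",  # hypothetical snapshot
        NodeType="dc2.large",
        NumberOfNodes=2,
    )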
To get the latest Amazon Redshift feature announcements, check out our What’s New page, and subscribe to the RSS feed.
In its judgment in Halldorsson v. Iceland, the European Court of Human Rights (ECtHR) holds that a journalist responsible for a television news item that harms the reputation of an identifiable public figure must be able to show that he acted in good faith as regards the accuracy of the allegations in the report. A journalist cannot rely on the confidentiality of journalistic sources when he is unable to produce evidence for serious accusations. In earlier judgments the ECtHR had already indicated that the rights of journalists protect those who act in good faith and in accordance with the standards of responsible journalism (see Pentikäinen v. Finland).
The applicant is a journalist working in the newsroom of the Icelandic national broadcaster (RÚV). The broadcaster aired a series of news reports about a transaction worth around EUR 20 million between an Icelandic company and a company in Panama. Three Icelandic businessmen (A, B and C) were reportedly involved. Their photographs were shown together with the caption “under investigation”, accompanied by the statement that the authorities were investigating the case. In another report, photographs of A, B and C were displayed over a map of the world, with a pile of money visually transferred onto the men’s pictures and a statement that the money was in “the trio’s pockets”. A summary of the broadcast news items was also published on RÚV’s website. After the broadcasts, A, one of the men concerned, denied any connection with the allegedly suspicious transaction. A later brought defamation proceedings against Svavar Halldórsson, the author of the reports. Halldórsson was ordered to pay A approximately EUR 2,600 in compensation for non-pecuniary damage.
Before the European Court of Human Rights, Halldórsson maintained that the statements in the news items had not referred to A, were not defamatory, and did not allege that A was guilty of a financial crime or of any other act punishable by law.
The standards
In line with the findings of the domestic courts, the ECtHR confirms that the news items did indeed contain a serious allegation of illegal and criminal acts; the ECtHR therefore takes the view that the dispute requires an examination of the fair balance between the right to respect for private life and the right to freedom of expression.
The principles concerning the question whether an interference with freedom of expression is “necessary in a democratic society” are well established in the Court’s case-law (see Delfi AS v. Estonia). [37]
The Court has held that a person’s reputation, even where that person is criticised in the context of a public debate, forms part of his or her personal identity and psychological integrity and therefore falls within the scope of his or her “private life”. For Article 8 to come into play, an attack on personal honour and reputation must attain a certain level of seriousness. [38]
Since it has repeatedly had to examine disputes requiring an assessment of the fair balance between the right to respect for private life and the right to freedom of expression, the Court has developed general principles stemming from an abundant body of case-law in this area. [39]
The criteria relevant to balancing the right to freedom of expression against the right to respect for private life are, inter alia: the contribution to a debate of general interest; how well known the person concerned is and what the subject of the publication is; his or her prior conduct; the method of obtaining the information and its veracity; the content, form and consequences of the publication; and the severity of the sanction imposed (see, for example, Axel Springer AG v. Germany and Von Hannover v. Germany (no. 2)).
Finally, the Court reiterates that, within the national authorities’ margin of appreciation, strong reasons are required for it to depart from the view taken by the domestic courts. [40]
The judgment
The ECtHR agrees that A must be regarded as a public figure and that the subject of the contested news items was a matter of public interest.
The Court upholds the conclusions of the Supreme Court of Iceland that Halldórsson did not act in good faith: he did not seek information from A while preparing the report. The ECtHR reiterates that the protection afforded by Article 10 of the ECHR to journalists reporting on matters of general interest is subject to the condition that they act in good faith and on an accurate factual basis, and that they provide reliable and precise information in accordance with the ethics of journalism.
The Court states that it sees no grounds on which the journalist could depart from his obligation to verify factual statements that harm a person’s reputation.
Halldórsson’s arguments concerning the right to keep his sources and the documentation used to prepare the reports confidential are rejected. The ECtHR confirms that the protection of journalistic sources is one of the basic conditions of media freedom; without such protection, sources are deterred from assisting the press in informing the public on matters of public importance. The ECtHR clarifies, however, that a mere reference to the protection of sources cannot release a journalist from the obligation to substantiate the veracity of allegations, or to have a sufficient basis for serious accusations of a factual nature, an obligation that can be discharged without necessarily having to disclose the sources. [51]
Finally, the ECtHR does not consider the financial compensation and the award of costs in the domestic proceedings to be excessive or to have a chilling effect on the exercise of media freedom. According to the Court, the potential impact of the medium concerned is an important factor in assessing the proportionality of an interference. In this regard, the ECtHR recalls its view that audiovisual media have a more immediate and powerful effect than the print media.
The Supreme Court of Iceland balanced the right to freedom of expression against the right to respect for private life, took into account the criteria set out in the ECtHR’s case-law, acted within its margin of appreciation, and struck a reasonable balance in the measures imposed restricting the right to freedom of expression.
The ECtHR therefore concludes unanimously that there has been no violation of Article 10 of the ECHR.
The reaction of the internet companies against some of the proposals is clear and understandable, as is, symmetrically, the rightholders’ desire for still more control over the platforms.
A post from this week notes the position of a number of EU member states against the draft, in particular on the question of monitoring at the point of upload: does upload monitoring violate human rights? It cites a document in which Belgium, the Czech Republic, Finland, the Netherlands and others ask the Council Legal Service to assess whether Article 13, the upload-monitoring provision, is compatible with the Charter of Fundamental Rights and the E-Commerce Directive in the light of the judgments of the Court of Justice of the EU.
Article 13 of the Commission’s proposal for a directive on copyright in the digital single market imposes an obligation on certain platforms to actively prevent users from uploading content that contains protected works or other subject matter that rightholders want blocked. This can be achieved only through the use of identification and filtering technology. At the same time, the Commission states that it will maintain the existing e-commerce principles of Directive 2001/29/EC.
Prior identification and filtering before the upload stage would take place automatically whenever the identification technology finds a match with a given work or protected subject matter. This process would apply to a wide variety of online services and platforms used by European citizens to upload content to the internet. In practice, it would operate regardless of the fact that the user might benefit from a copyright exception.
Moreover, the settled case-law of the Court of Justice of the EU highlights the conflict between monitoring and fundamental rights such as the protection of personal data and the freedom to conduct a business. In SABAM v. Netlog, the Court of Justice refused to impose an obligation of systematic monitoring of content, relying on Articles 8, 11 and 16 of the EU Charter of Fundamental Rights.
The question: would a measure/obligation such as that proposed under Article 13 be compatible with the Charter (in particular with Article 11, freedom of expression and information; Article 8, protection of personal data; and Article 16, freedom to conduct a business) in the light of the case-law of the Court of Justice of the EU, which seeks to secure a fair balance where fundamental rights compete? Are the proposed measures justified and proportionate?
*
Meanwhile Estonia, as the country holding the Presidency of the Council of the EU, has sent the member states a compromise text that preserves the ideas of monitoring at the point of upload. Notwithstanding the regime of the E-Commerce Directive, it envisages that platforms and users would be liable for copyright infringements.
Another judgment of the Court of Human Rights discussing the critical function of the media with regard to members of the judiciary. Once again, this subject area is singled out as one of significant public interest.
*
In its judgment in Tavares de Almeida Fernandes and Almeida Fernandes v. Portugal, the ECtHR finds a violation of Article 10, freedom of expression.
The judgment begins by recalling the general principles the Court applies in Article 10 cases, stating [53-59] that:
The general principles for assessing whether an interference with the exercise of the right to freedom of expression is “necessary in a democratic society” within the meaning of Article 10 § 2 of the Convention are well established in the Court’s case-law. They were recently summarised in Bédat v. Switzerland (2016) and Pentikäinen v. Finland [GC] (2015).
Journalistic freedom also covers possible recourse to a degree of exaggeration, or even provocation (see Prager and Oberschlick).
Article 10 leaves little scope for restrictions on political speech or on debate on questions of public interest (see Morice v. France, 2015, with further references). A high level of protection of freedom of expression is normally accorded where a matter of public interest is at stake, as is the case, in particular, with the functioning of the justice system (ibid.).
The Court has always distinguished between statements of fact, on the one hand, and value judgments on the other. The existence of facts can be demonstrated, whereas the truth of value judgments is not susceptible of proof. Where a statement amounts to a value judgment, however, the proportionality of an interference depends on whether there is a sufficient factual basis for the impugned statement: if there is not, the value judgment may prove excessive (see Lindon, Otchakovsky-Laurens and July v. France).
The protection afforded by Article 10 to journalists in relation to matters of public interest is subject to the proviso that they act in good faith and provide accurate and reliable information in accordance with the ethics of journalism (see Bozhkov v. Bulgaria, 2011). In situations where a factual allegation is made without sufficient evidence, but the journalist is discussing a matter of genuine public interest, the Court verifies whether the journalist has acted professionally and in good faith (Kasabova v. Bulgaria).
The Court examines whether a fair balance has been struck between the protection of freedom of expression and the protection of the reputation of those affected. In two quite recent cases the ECtHR went on to set out criteria to be taken into account when the right to freedom of expression is being balanced against the right to respect for private life (Axel Springer AG v. Germany and Von Hannover v. Germany (no. 2)).
Lastly, the nature and severity of the sanctions imposed are also factors to be taken into account when assessing the proportionality of the interference. As the Court has pointed out before, interference with freedom of expression may have a chilling effect on the exercise of that freedom (see Morice).
Finally, the Court reiterates that it takes account of the circumstances and the overall context in which the statements in question were made (see Morice, § 162).
The case:
A Portuguese journalist wrote an editorial entitled “The Spider’s Strategy”, in which he gave his opinion on the election of a judge to the post of President of the Supreme Court. He was ordered to pay non-pecuniary damages for harming the judge’s reputation, conduct “with a negative impact on his personal sphere, including the plaintiff’s family and professional circle”.
The issue was at the centre of a lively debate in Portugal, which the national courts failed to take into account. There is no doubt that the matter attracted significant public interest. The Court notes expressly that the functioning of the justice system, which is essential for any democratic society, is a matter of public interest (ibid., § 128). Those who are chosen to represent the various institutions of the justice system are also of significant interest. Restrictions on freedom of expression in this sphere must therefore be interpreted strictly.
According to the judgment, it is well established in the Court’s case-law that members of the judiciary acting in their official capacity may be subjected to wider limits of acceptable criticism than ordinary citizens (see SARL Libération, § 74, ECHR 2008). At the same time, the Court has repeatedly emphasised the special role of the judiciary, which, as the guarantor of justice, is a fundamental value in a State governed by the rule of law. It may prove necessary to protect the judiciary against destructive attacks where they are unfounded.
The Portuguese courts held that the plaintiff’s personal interest in the protection of his reputation outweighed the right to freedom of expression. They found, inter alia, that some of the statements in the article were excessive, went beyond the limits of acceptable criticism and of the right to impart information, and constituted an attack on the personality rights of the new President of the Supreme Court.
According to the ECtHR:
First, the Court notes that the statements in question were value judgments, and ones with a sufficient factual basis.
Second, the Court finds that the national courts did not address the metaphorical tone of the impugned statements and did not discuss their content and meaning. They appear to have examined the statements in isolation from the rest of the article. For the ECtHR, the statements remained within the bounds of admissible criticism and exaggeration. The Portuguese courts did not sufficiently explain how the journalist had exceeded his right of criticism and why his right to express his opinion should have been restricted.
Lastly, as regards the sanction imposed, the Court emphasises that under the Convention an award of damages for insult or defamation must bear a reasonable relationship of proportionality to the injury suffered.
In conclusion, the Court does not find that the interference was “necessary in a democratic society”. According to the ECtHR, the Portuguese courts exceeded the margin of appreciation afforded to them with regard to possible limitations on debates of public interest.
There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.
This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:
SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
Valossa – an AI video recognition platform for the media and entertainment industry.
Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.
SignalBox (UK)
In 2016, SignalBox founder Alain Richardt was hearing the same comments being made by developers, data scientists, and business leaders. They wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting with an easy-to-use web interface, blueprints for common tasks, and just a single click to productize the models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.
SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.
SignalBox uses a variety of AWS services, including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success, both in keeping their operational expenses low and in their speed to market. Amazon API Gateway has enabled operational automation, giving SignalBox the ability to control its infrastructure.
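SignalBox’s pattern of spinning GPU capacity up and down on demand is straightforward to reproduce with the AWS SDK. Below is a minimal boto3 sketch of launching a single training instance; the AMI ID, instance type, and tag values are illustrative assumptions, not details of SignalBox’s actual setup.

import boto3

# Launch one GPU instance for a training run, tagged so it can be
# found and terminated when the job completes.
ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical deep learning AMI
    InstanceType="p2.xlarge",         # an EC2 GPU instance class
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "model-training"}],
    }],
)
print("Launched:", instances[0].id)

Terminating the instance as soon as a training run finishes is what keeps this pay-as-you-go cost model attractive.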
Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.
Valossa’s AI is capable of analyzing any video stream. The AI studies a vast array of data within videos and automatically converts that information into descriptive tags, categories, and overviews. Basically, it sees, hears, and understands videos like a human does. The Valossa AI can detect people, visual and auditory concepts, and key speech elements, and it labels explicit content to make moderating and filtering content simpler. Valossa’s solutions are designed to provide value across the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or to search for specific topics and actors within a video.
Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.
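The S3 half of such an ingestion pipeline is easy to sketch with boto3. The bucket and key names below are hypothetical rather than Valossa’s real naming scheme; the sketch only shows the upload-then-share pattern the paragraph describes.

import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Ingest: upload a source video for analysis.
s3.upload_file("episode-042.mp4", "video-ingest-bucket", "raw/episode-042.mp4")

# Serve: hand an analytics front end a time-limited download link,
# so media data stays private while remaining easy to access.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "video-ingest-bucket", "Key": "raw/episode-042.mp4"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)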
Kaliber

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses: stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.
Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. From a client’s network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights with its machine learning (ML) algorithms. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.
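To make the “key data points to actionable insights” step concrete, here is a purely hypothetical Python sketch. Kaliber’s models and APIs are not public, so every name below is invented; it only illustrates the general shape of turning detection events into real-time alerts.

from dataclasses import dataclass

@dataclass
class DetectionEvent:
    camera_id: str
    label: str        # e.g. "customer_waiting" (hypothetical label set)
    confidence: float

ALERT_THRESHOLD = 0.9  # assumed cut-off for acting on a detection

def to_alerts(events):
    # Keep only high-confidence detections that warrant immediate action.
    return [e for e in events
            if e.label == "customer_waiting" and e.confidence >= ALERT_THRESHOLD]

events = [
    DetectionEvent("cam-lobby-1", "customer_waiting", 0.95),
    DetectionEvent("cam-lobby-1", "staff_present", 0.80),
]
for alert in to_alerts(events):
    print(f"ALERT {alert.camera_id}: {alert.label} ({alert.confidence:.0%})")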
The AWS Community Heroes program seeks to recognize and honor the most engaged Amazon Web Services developers who have had a positive impact in the global community. If you are interested in learning more about the AWS Community Heroes program or curious about ways to get involved with your local AWS community, please click the graphic below to see the AWS Heroes talk directly about the program.
Now that you know more about the AWS Community Heroes program, I am elated to introduce you to the latest AWS Heroes to join the fold:
These guys and gals share their passion for AWS and cloud technologies with the technical community, contributing their time and knowledge across social media and at in-person events.
Ben Kehoe
Ben Kehoe works in the field of Cloud Robotics, using the internet to enable robots to do more and better things, an area of IoT involving computation in the cloud and at the edge, Big Data, and machine learning. Approaching cloud computing from this angle, Ben focuses on developing business value rapidly through serverless (and service-full) applications.
At iRobot, Ben guided the transition to a serverless architecture on AWS based on AWS Lambda and AWS IoT to support iRobot’s connected robot fleet. This architecture enables iRobot to focus on its core mission of building amazing robots with a minimum of development and operations effort.
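As a rough illustration of that Lambda-plus-IoT pattern, here is a minimal Python handler of the kind an AWS IoT rule can invoke with device telemetry. The payload fields are assumptions made for the example; iRobot’s actual message schema is not public.

import json

def handler(event, context):
    # An AWS IoT rule forwards a device message as the event payload.
    # "robot_id" and "battery_pct" are hypothetical field names.
    robot_id = event.get("robot_id", "unknown")
    battery = event.get("battery_pct")

    if battery is not None and battery < 15:
        # A real system might publish a command back over AWS IoT or
        # write state to a database; here we only log the decision.
        print(json.dumps({"robot_id": robot_id, "action": "return_to_dock"}))

    return {"status": "ok"}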
Ben seeks to amplify voices from dev, operations, and security to help the community shape the evolution of serverless and event-driven designs for IoT and cloud computing more broadly.
Marcia Villalba
Marcia is a Senior Full-stack Developer at Rovio, the creators of Angry Birds. She is originally from Uruguay but has been living in Finland for almost a decade.
She has been designing and developing software professionally for over 10 years. For more than four years she has been working with AWS, including the past year, during which she has worked mostly with serverless technologies.
Marcia runs her own YouTube channel, publishing at least one new video every week. On her channel, she focuses on teaching how to use AWS serverless technologies and managed services. In addition to her professional work, she is the Tech Lead of “Girls in Tech” Helsinki, helping to inspire more women to enter technology and programming.
Joshua Levy
Joshua Levy is an entrepreneur, engineer, writer, and serial startup technologist and advisor in cloud, AI, search, and startup scaling.
He co-founded the Open Guide to AWS, which is one of the most popular AWS resources and communities on the web. The collaborative project welcomes new contributors or editors, and anyone who wishes to ask or answer questions.
Josh has years of experience in hands-on software engineering and leadership at fast-growing consumer and enterprise startups, including Viv Labs (acquired by Samsung) and BloomReach (where he led engineering and AWS infrastructure), and a background in AI and systems research at SRI and mathematics at Berkeley. He has a passion for improving how we share knowledge on complex engineering, product, or business topics. If you share any of these interests, reach out on Twitter or find his contact details on GitHub.
Michael Ezzell
Michael Ezzell is a frequent contributor of detailed, in-depth solutions to questions spanning a wide variety of AWS services on Stack Overflow and other sites on the Stack Exchange Network.
Michael is the resident DBA and systems administrator for Online Rewards, a leading provider of web-based employee recognition, channel incentive, and customer loyalty programs, where he was a key player in the company’s full transition to the AWS platform.
Based in Cincinnati, and known to coworkers and associates as “sqlbot,” he also provides design, development, and support services to freelance consulting clients for AWS services and MySQL, as well as for broadcast and cable television and telecommunications technologies.
Thanos Baskous
Thanos Baskous is a San Francisco-based software engineer and entrepreneur who is passionate about designing and building scalable and robust systems.
He co-founded the Open Guide to AWS, which is one of the most popular AWS resources and communities on the web.
At Twitter, he built infrastructure that allows engineers to seamlessly deploy and run their applications across private data centers and public cloud environments. He previously led a team at TellApart (acquired by Twitter) that built an internal platform-as-a-service (Docker, Apache Aurora, Mesos on AWS) in support of a migration from a monolithic application architecture to a microservice-based architecture. Before TellApart, he co-founded AWS-hosted AdStack (acquired by TellApart) in order to automatically personalize and improve the quality of content in marketing emails and email newsletters.
Rob Gruhl
Rob is a senior engineering manager located in Seattle, WA. He supports a team of talented engineers at Nordstrom Technology exploring and deploying a variety of serverless systems to production.
From the beginning of the serverless era, Rob has used serverless architectures exclusively, allowing a small team of engineers to deliver incredible solutions that scale effortlessly and rarely wake them in the middle of the night. In addition to a number of production services, Rob and his team have created and released two major open source projects and accompanying open source workshops, all using a 100% serverless approach. He’d love to talk with you about serverless, event-sourcing, and/or occasionally-connected distributed data layers.
Feel free to follow these great AWS Heroes on Twitter and check out their blogs. It is exciting to have them all join the AWS Community Heroes program.
This week, just nine weeks after its launch, we will ship the 250,000th Pi Zero W into the market. As well as hitting that pretty impressive milestone, today we are announcing 13 new Raspberry Pi Zero distributors, so you should find it much easier to get hold of a unit.
This significantly extends the reach we can achieve with Pi Zero and Pi Zero W across the globe. These new distributors serve Australia and New Zealand, Italy, Malaysia, Japan, South Africa, Poland, Greece, Switzerland, Denmark, Sweden, Norway, and Finland. We are also further strengthening our network in the USA, Canada, and Germany, where demand continues to be very high.
A common theme on the Raspberry Pi forums has been the difficulty of obtaining a Zero or Zero W in a number of countries. This has been most notable in the markets which are furthest away from Europe or North America. We are hoping that adding these new distributors will make it much easier for Pi-fans across the world to get hold of their favourite tiny computer.
We know there are still more markets to cover, and we are continuing to work with other potential partners to improve the Pi Zero reach. Watch this space for even further developments!
Who are the new Pi Zero Distributors?
Check the icons below to find the distributor that’s best for you!
Australia and New Zealand
South Africa
Please note: Pi Zero W is not currently available to buy in South Africa, as we are waiting for ICASA Certification.
Denmark, Sweden, Finland, and Norway
Germany and Switzerland
Poland
Greece
Italy
Japan
Please note: Pi Zero W is not currently available to buy in Japan as we are waiting for TELEC Certification.
Malaysia
Please note: Pi Zero W is not currently available to buy in Malaysia, as we are waiting for SIRIM Certification.
Canada and USA
Get your Pi Zero
For full product details, plus a complete list of Pi Zero distributors, visit the Pi Zero W page.
Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris) and China (Ningxia).
Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Stockholm, Sweden in 2018. This region will give AWS partners and customers in Denmark, Finland, Iceland, Norway, and Sweden low-latency connectivity and the ability to run their workloads and store their data close to home.
The Nordics are well known for their vibrant startup community and highly innovative business climate. With successful global enterprises like ASSA ABLOY, IKEA, and Scania along with fast-growing startups like Bambora, Supercell, Tink, and Trustpilot, it comes as no surprise that Forbes ranks Sweden as the best country for business, with all the other Nordic countries in the top 10. Even better, the European Commission ranks Sweden as the most innovative country in the EU.
This will be the fifth AWS Region in Europe, joining EU (Ireland), EU (London), EU (Frankfurt), and an additional Region in France expected to launch in the coming months. Together, these Regions will provide our customers with a total of 13 Availability Zones (AZs) and allow them to architect highly fault-tolerant applications while storing their data in the EU.
Today, our infrastructure comprises 42 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018 (see the AWS Global Infrastructure page for more info).
We are looking forward to serving new and existing Nordic customers and working with partners across Europe. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Sweden. Public sector organizations (government agencies, educational institutions, and nonprofits) in Sweden will be able to use this region to store sensitive data in-country (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).
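In practice, keeping data in-country mostly comes down to pinning SDK clients to the Region. The boto3 sketch below assumes a region code for Stockholm that has not yet been announced; until launch, an existing EU Region such as eu-west-1 behaves the same way.

import boto3

# Pin the client to the (assumed) Stockholm region code so that the
# bucket and its objects are created and stored in Sweden.
s3 = boto3.client("s3", region_name="eu-north-1")

s3.create_bucket(
    Bucket="my-swedish-records",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-north-1"},
)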
If you are a customer or a partner and have specific questions about this Region, you can contact our Nordic team.
Help Wanted

As part of our launch, we are hiring individual contributors and managers for IT support, electrical, logistics, and physical security positions. If you are interested in learning more, please contact [email protected].
On August 25, 1991, an obscure student in Finland named Linus Benedict Torvalds posted a message to the comp.os.minix Usenet newsgroup saying that he was working on a free operating system as a project to learn about the x86 architecture. He cannot possibly have known that he was launching a project that would change the computing industry in fundamental ways. Twenty-five years later, it is fair to say that none of us foresaw where Linux would go — a lesson that should be taken to heart when trying to imagine where it might go from here.
I believe that Open Source is one of the best ways to develop software. However, as I have written in blogs before, the Open Source model presents challenges to creating a software company that has the needed resources to continually invest in product development and innovation.
One reason for this is a lack of understanding of the costs associated with developing and extending software. As one example of what I regard as unrealistic user expectations, here is a statement from a large software company, made when I asked them to support MariaDB development financially:
“As you may remember, we’re a fairly traditional and conservative company. A donation from us would require feature work in exchange for the donation. Unfortunately, I cannot think of a feature that I would want developed that we would be willing to pay for this year.”
This thinking is flawed on many fronts. For one, a new feature can take more than a year to develop! It also shows that while the company recognised that features create value worth investing in, it was not willing to pay for features that had already been developed, and was not prepared to invest in keeping alive a product it depends upon. Nor did it trust the development team to independently define new features that would bring value. Without that investment, a technology company cannot fund ongoing research and development, thereby dooming its survival.
To be able to compete with closed source technology companies who have massive profit margins, one needs income.
Dual licensing on Free Software, as we applied it at MySQL, works only for a limited subset of products (something I have called ‘infrastructure software’) that customers need to combine with their own closed source software and distribute to their customers. Most software products are not like that. This is why David Axmark and I created the Business Source License (BSL), a license designed to reconcile producing Open Source software with running a successful software company.
The intent of BSL is to increase the overall freedom and innovation in the software industry, for customers, developers, users, and vendors. Finally, I hope that BSL will pave the way for a new business model that sustains software development without relying primarily on support.
Today, MariaDB Corporation is excited to introduce the beta release of MariaDB MaxScale 2.0, our database proxy, which is released under BSL. I am very happy to see MariaDB MaxScale being released under BSL rather than under an Open Core or Closed Source license. Developing software under BSL will provide more resources to enhance it for future releases, much as Dual Licensing did for MySQL. MariaDB Corporation will create more BSL products over time. Even with new products coming under BSL, MariaDB Server will continue to be licensed under GPL in perpetuity. Keep in mind that because MariaDB Server extends earlier MySQL GPL code, it is forever legally bound by the original GPL license of MySQL.
In addition to putting MaxScale under BSL, we have also created a framework to make it easy for anyone else to license their software under BSL.
Here follows the copyright notice used in the MaxScale 2.0 source code:
/*
 * Copyright (c) 2016 MariaDB Corporation Ab
 *
 * Use of this software is governed by the Business Source License
 * included in the LICENSE.TXT file and at www.mariadb.com/bsl.
 *
 * Change Date: 2019-01-01
 *
 * On the date above, in accordance with the Business Source
 * License, use of this software will be governed by version 2
 * or later of the General Public License.
 */
Two out of three top characteristics of the BSL are already shown here: The Change Date and the Change License. Starting on 1 January 2019 (the Change Date), MaxScale 2.0 is governed by GPLv2 or later (the Change License).
The centrepiece of the LICENSE.TXT file itself is this text:
Use Limitation: Usage of the software is free when your application uses the Software with a total of less than three database server instances for production purposes.
This third top characteristic is in effect until the Change Date.
What this means is that the software can be distributed, used, modified, etc., for free, within the use limitation. Beyond it, a commercial relationship is required: in the case of MaxScale 2.0, a MariaDB Enterprise Subscription, which permits the use of MaxScale with three or more database servers.
You can find the full license text for MaxScale at mariadb.com/bsl and a general BSL FAQ at mariadb.com/bsl-faq-adopting. Feel free to copy or refer to them for your own BSL software!
The key characteristics of BSL are as follows:
The source code of BSL software is available in full from day one.
Users of BSL software can modify, distribute and compile the source.
Code contributions are encouraged and accepted through the “new BSD” license.
The BSL is purposefully designed to avoid vendor lock-in. By this I mean that users of BSL software do not depend on one single vendor to support the software, fix bugs, or enhance the BSL product.
The Change Date and Change License provide a time-delayed safety net for users, should the vendor stop developing the software.
Testing BSL software is always free of cost.
Production use of the software is free of cost within the use limitation.
Adoption of BSL software is encouraged with use limitations that provide ample freedom.
Monetisation of BSL software is driven by incremental sales in cases where the use limitation applies.
Whether BSL will be widely adopted remains to be seen. It is certainly my desire that this new business model will inspire companies who develop Closed Source or Open Core software to switch to BSL, which will ultimately result in more Open Source software in the community. With BSL, companies can realize revenue similar to what they could achieve with closed source or open core, while the free-of-cost usage in common production scenarios establishes a much larger user base to drive testing, innovation, and adoption.
If you live in Berlin and are a GNOMEr of some kind, then please feel invited to drop by tomorrow (Fri 29) at 4 pm at the Prater Biergarten (weather permitting, outside; otherwise inside). We’ll have a little GNOME get-together. For now, we know that at least the Openismus Berlin folks will be there, as will I and presumably one special guest from Finland, plus whoever else wants to attend.
Hope to see you tomorrow!