Walleij: How the ARM32 Linux kernel decompresses

Post Syndicated from corbet original https://lwn.net/Articles/828750/rss

For those who are into the details: here is a
step-by-step guide from Linus Walleij
through the process of decompressing an Arm kernel
and getting it ready to boot. “Next the
decompression code sets up a page table, if it is possible to fit one over
the whole uncompressed+compressed kernel image. The page table is not for
virtual memory, but for enabling cache, which is then turned on. The
decompression will for natural reasons be much faster if we can use […]”

Audience measurement

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/08/13/bmg_audience/

On 8 July 2020, a meeting took place between the radio and television broadcasters (NOVA Broadcasting Group (NBG), BNT, and bTV Media Group (bMG)) and representatives of advertisers and communication agencies (BAR and BAKA).

At this meeting, the question was raised of the need for an adequate audience measurement system (AMS). According to bMG,

the representatives of the advertisers and the communication agencies expressed agreement with the general approach proposed by bMG. Only when NBG categorically opposed holding a tender and declined to support the establishment of a Joint Industry Committee did BAR and BAKA express concerns about putting the audience measurement system out to tender. Instead, NBG proposed that an audit of the existing AMS be carried out, without specifying what it would look like. Rather than supporting the creation of a Joint Industry Committee with the functions described above and in more detail in the Position, NBG appears to support the restoration of a Users' Committee with very limited functions, which would not change the current system: non-transparent, without centralized ownership of the data, and built instead on bilateral relationships between the AMS provider and the participants.

According to bMG, the key components for restoring trust are

  • putting the AMS out to tender. Regular tenders for selecting an audience measurement system are established industry practice in all markets with a mature AMS, but such a tender has never been held in Bulgaria.
  • establishing a Joint Industry Committee (JIC), likewise a common practice in markets with a functioning AMS structure. The JIC owns the AMS data, enjoys the trust of all stakeholders, and guarantees transparent relations between the AMS provider and the JIC participants.

A footnote on page 1 also records BNT's position on the topic under discussion.

The text submitted by bMG is in two parts: the second is a Position, and the first is perhaps a Statement, though I am not sure. Sixteen pages in total on the topic, undoubtedly important for the formation of the market.

Introducing the CDK construct library for the serverless LAMP stack

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-cdk-construct-library-for-the-serverless-lamp-stack/

In this post, you learn how the new CDK construct library for the serverless LAMP stack is helping developers build serverless PHP applications.

The AWS Cloud Development Kit (AWS CDK) is an open source software development framework for defining cloud application resources in code. It allows developers to define their infrastructure in familiar programming languages such as TypeScript, Python, C# or Java. Developers benefit from the features those languages provide such as Interfaces, Generics, Inheritance, and Method Access Modifiers. The AWS Construct Library provides a broad set of modules that expose APIs for defining AWS resources in CDK applications.

The “Serverless LAMP stack” blog series provides best practices, code examples and deep dives into many serverless concepts and demonstrates how these are applied to PHP applications. It also highlights valuable contributions from the community to help spark inspiration for PHP developers.

Each component of this serverless LAMP stack is explained in detail in the blog post series.

The CDK construct library for the serverless LAMP stack is an abstraction created by AWS Developer Advocate, Pahud Hsieh. It offers a single high-level component for defining all resources that make up the serverless LAMP stack.

CDK construct for Serverless LAMP stack


  1. Amazon API Gateway HTTP API.
  2. AWS Lambda with Bref-FPM runtime.
  3. Amazon Aurora for MySQL database cluster with Amazon RDS Proxy enabled.

Why build PHP applications with AWS CDK constructs?

Building complex web applications from scratch is a time-consuming process. PHP frameworks such as Laravel and Symfony provide a structured and standardized way to build web applications. Using templates and generic components helps reduce overall development effort. Using a serverless approach helps to address some of the traditional LAMP stack challenges of scalability and infrastructure management. Defining these resources with the AWS CDK construct library allows developers to apply the same framework principles to infrastructure as code.

The AWS CDK enables fast and easy onboarding for new developers. In addition to improved readability through reduced codebase size, PHP developers can use their existing skills and tools to build cloud infrastructure. Familiar concepts such as objects, loops, and conditions help to reduce cognitive overhead. Defining the LAMP stack infrastructure for your PHP application within the same codebase reduces context switching and streamlines the provisioning process. Connect CDK constructs to deploy a serverless LAMP infrastructure quickly with minimal code.

Code is a liability and with the AWS CDK you are applying the serverless first mindset to infra code by allowing others to create abstractions they maintain so you don’t need to. I always love deleting code

Says Matt Coulter, creator of CDK patterns – An open source resource for CDK based architecture patterns.

Building a serverless Laravel application with the ServerlessLaravel construct

The cdk-serverless-lamp construct library is built with aws/jsii and published as npm and Python modules. The stack is deployed in either TypeScript or Python and includes the ServerlessLaravel construct. This makes it easier for PHP developers to deploy a serverless Laravel application.

First, follow the “Working with the AWS CDK in TypeScript” steps to prepare the AWS CDK environment for TypeScript.

Deploy the serverless LAMP stack with the following steps:

  1. Confirm the CDK CLI installation:
    $ cdk --version
  2. Create a new directory for the project and change into it:
    $ mkdir serverless-lamp && cd serverless-lamp
  3. Create directories for AWS CDK and Laravel project:
    $ mkdir cdk codebase
  4. Create the new Laravel project with Docker:
    $ docker run --rm -ti \
    --volume $PWD:/app \
    composer create-project --prefer-dist laravel/laravel ./codebase

The cdk-serverless-lamp construct library uses the bref-FPM custom runtime to run PHP code in a Lambda function. The bref runtime performs similar functionality to Apache or NGINX by forwarding HTTP requests through the FastCGI protocol. This process is explained in detail in “The Serverless LAMP stack part 3: Replacing the web server”. In addition to this, a bref package named laravel-bridge automatically configures Laravel to work on Lambda. This saves the developer from having to manually implement some of the configurations detailed in “The serverless LAMP stack part 4: Building a serverless Laravel application”.

  1. Install bref/bref and bref/laravel-bridge packages in the vendor directories:
    $ cd codebase
    $ docker run --rm -ti \
    --volume $PWD:/app \
    composer require bref/bref bref/laravel-bridge
  2. Initialize the AWS CDK project with TypeScript:
    $ cd ../cdk
    $ cdk init -l typescript
  3. Install the cdk-serverless-lamp npm module:
    $ yarn add cdk-serverless-lamp

This creates the following directory structure:

├── cdk
└── codebase

The cdk directory contains the AWS CDK resource definitions. The codebase directory contains the Laravel project.

Building a Laravel Project with the AWS CDK

Replace the contents of ./lib/cdk-stack.ts with:

import * as cdk from '@aws-cdk/core';
import * as path from 'path';
import { ServerlessLaravel } from 'cdk-serverless-lamp';

export class CdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new ServerlessLaravel(this, 'ServerlessLaravel', {
      brefLayerVersion: 'arn:aws:lambda:us-east-1:209497400698:layer:php-74-fpm:12',
      laravelPath: path.join(__dirname, '../../codebase'),
    });
  }
}

The brefLayerVersion argument refers to the AWS Lambda layer version ARN of the Bref PHP runtime. Select the correct ARN and corresponding Region from the bref website. This example deploys the stack into the us-east-1 Region with the corresponding Lambda layer version ARN for the Region.
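
A hypothetical helper (not part of cdk-serverless-lamp) can make that Region coupling explicit. The account ID and layer name below follow the ARN shown above; the layer version number varies by Region, so confirm it on the bref website before use.

```typescript
// Hypothetical helper: build the Bref php-74-fpm layer ARN for a given
// Region. The layer version (the last ARN component) differs per Region;
// look it up on the bref website rather than reusing 12 everywhere.
function brefFpmLayerArn(region: string, version: number): string {
  return `arn:aws:lambda:${region}:209497400698:layer:php-74-fpm:${version}`;
}

// The us-east-1 ARN used in this post:
// brefFpmLayerArn('us-east-1', 12)
//   => 'arn:aws:lambda:us-east-1:209497400698:layer:php-74-fpm:12'
```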

  1. Deploy the stack:
    cdk deploy

Once the deployment is complete, an Amazon API Gateway HTTP API endpoint is returned in the CDK output. This URL serves the Laravel application.

CDK construct output for Serverless LAMP stack

The application is running PHP on Lambda using bref’s FPM custom runtime. This entire stack is deployed by a single instantiation of the ServerlessLaravel construct class with required properties.

Adding an Amazon Aurora database

The ServerlessLaravel stack is extended with the DatabaseCluster construct class to provision an Amazon Aurora database. Pass an Amazon RDS Proxy instance for this cluster to the ServerlessLaravel construct:

  1. Edit ./lib/cdk-stack.ts:
 import * as cdk from '@aws-cdk/core';
 import { InstanceType, Vpc } from '@aws-cdk/aws-ec2';
 import * as path from 'path';
 import { ServerlessLaravel, DatabaseCluster } from 'cdk-serverless-lamp';

 export class CdkStack extends cdk.Stack {
   constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
     super(scope, id, props);

     const vpc = new Vpc(this, 'Vpc', { maxAzs: 3, natGateways: 1 });
     // the DatabaseCluster sharing the same vpc with the ServerlessLaravel
     const db = new DatabaseCluster(this, 'DatabaseCluster', {
       vpc,
       instanceType: new InstanceType('t3.small'),
       rdsProxy: true,
     });
     // the ServerlessLaravel
     new ServerlessLaravel(this, 'ServerlessLaravel', {
       brefLayerVersion: 'arn:aws:lambda:us-east-1:209497400698:layer:php-74-fpm:12',
       laravelPath: path.join(__dirname, '../composer/laravel-bref'),
       databaseConfig: { writerEndpoint: db.rdsProxy!.endpoint },
     });
   }
 }
  2. Run cdk diff to check the difference:
    $ cdk diff

The output shows that a shared VPC is created for the ServerlessLaravel stack and the DatabaseCluster stack. An Amazon Aurora DB cluster with a single DB instance and a default secret from AWS Secrets Manager is also created. The cdk-serverless-lamp construct library configures Amazon RDS Proxy automatically with the required AWS IAM policies and connection rules.

  3. Deploy the stack:
    $ cdk deploy

The ServerlessLaravel stack is running with DatabaseCluster in a single VPC. A single Lambda function is automatically configured with the RDS Proxy DB_WRITER and DB_READER stored as Lambda environment variables.

Database authentication

The Lambda function authenticates to RDS Proxy with the execution IAM role. RDS Proxy authenticates to the Aurora DB cluster using the credentials stored in the AWS Secrets Manager. This is a more secure alternative to embedding database credentials in the application code base. Read “Introducing the serverless LAMP stack – part 2 relational databases” for more information on connecting to an Aurora DB cluster with Lambda using RDS Proxy.

Clean up

To remove the stack, run:
$ cdk destroy

The video below demonstrates a deployment with the CDK construct for the serverless LAMP stack.


This post introduces the new CDK construct library for the serverless LAMP stack. It explains how to use it to deploy a serverless Laravel application. Combining this with other CDK constructs such as DatabaseCluster gives PHP developers the building blocks to create scalable, repeatable patterns at speed with minimal coding.

With the CDK construct library for the serverless LAMP stack, PHP development teams can focus on shipping code without changing the way they build.

Start building serverless applications with PHP.

“Stop financing our mafia.” Protest outside the German embassy in Sofia

Post Syndicated from Тоест original https://toest.bg/protest-pred-germanskoto-posolstvo/

On 12 August, the 35th day since the start of the anti-government protests in Bulgaria, a demonstration was held outside the German embassy in Sofia under the slogan “A mass turning of a blind eye”.


Comic strip “The Stepmother and Winnetou” / Borissov: “My bosses in the Kremlin and the Bulgarian State Security oligarchs thank you for the billions you have given us. The money has run out; will there be more?” / Merkel: “There will be plenty more, Boyko! But make sure the migrants keep staying far away and that more Russian gas flows to Europe. Otherwise I couldn't care less what you get up to down there!” © Tihomira Metodieva – Tihich

The protesters gathered in front of the German mission in Bulgaria to ask “why, for a decade now, one of the leading powers in the European Union has tolerated the evident organized crime within our governing class, in the person of Boyko Borissov and his governments, as well as the structures in the judiciary connected to them”. The description of the Facebook event also says:

We want to know how the claims about the rule of law that accompany every European directive addressed to Bulgaria can be reconciled with the wide-shut eyes of the European political elite toward their “partners” from GERB.

“Stop financing our mafia” © Tihomira Metodieva – Tihich
“Political corruption shall be inviolable. To respect and protect it shall be the duty of all state authority” (a play on Article 1(1) of the German constitution, where the original text reads “human dignity” instead of “political corruption”) © Tihomira Metodieva – Tihich
“You are right in the middle of it, not just spectators” (from a popular 1990s advertising slogan of the sports channel DSF) © Tihomira Metodieva – Tihich
“We hope you won't cut off our European funds if it isn't your guy in charge” © Tihomira Metodieva – Tihich

The organizers explicitly stress that the protest action distances itself from opponents of the European Union: “We want to support Bulgaria's full maturation in the democratic values of Europe, and we expect European leaders to take a critical position and assume responsibility.”

© Tihomira Metodieva – Tihich

Earlier on Wednesday, York Schugraf, the acting head of the German embassy in Bulgaria, said with regard to the announced protest that it is “entirely natural for citizens in the European Union to exercise their right to freedom of expression and peaceful protest”, but that “decisions about Bulgaria's political future are taken only in Bulgaria”.

Photo gallery: © Tihomira Metodieva – Tihich. Translation from German: Donka and Chilo Popovi

“Toest” relies solely on the financial support of its readers.

The prosecution service, one article, and Article 7 TEU

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/08/13/prb_media/

In its recent history, the prosecution service has had episodes in which it interpreted freedom of expression (spreading panic) and freedom of the media in its own peculiar way; the use of Article 326 of the Criminal Code has come up before.

But it keeps writing new episodes of the same series.

In a declaration of 4 August 2020, the prosecution service, through its administration, assessed a publication by Svobodna Evropa (Radio Free Europe) as containing a violation of the Ethical Code of the Bulgarian Media.

The Commission for Journalistic Ethics had to respond with a declaration of its own:

Declaration of the Commission for Journalistic Ethics

The Commission for Journalistic Ethics has acquainted itself with the position of the “Public Communication” Directorate of the Prosecutor's Office of the Republic of Bulgaria of 4 August 2020, in which the Directorate “finds it necessary to emphasize that a violation of the Ethical Code of the Bulgarian Media has been committed”.

The Commission for Journalistic Ethics finds it necessary to emphasize that it is the only body authorized to rule on violations of the Ethical Code of the Bulgarian Media.

Quite naturally, Svobodna Evropa ran the headline “It is not the prosecution's business”.

And it isn't. But the same publication continues to attract the prosecution's attention, now at another level. The Prosecutors' College of the Supreme Judicial Council (VSS) has issued a new declaration in which it “shares and joins the Declaration of the Management Board of the Association of Prosecutors in Bulgaria and the position of the Prosecutor's Office of the Republic of Bulgaria of 04.08.2020”.

This time there is no analysis of the publication's conformity with the Ethical Code of the Bulgarian Media. Instead, something else is claimed: what is the author, Boris Mitov, said to be engaged in?

creating insinuations of dependencies and violating the constitutionally established principle that, in exercising his functions, the prosecutor is subject only to the law; [they] affect the good name and professionalism of prosecutor Betsova as well as the authority of the Prosecutor's Office, the rule of law, and trust in the judiciary. In this connection, we express disappointment at the manipulatively presented information by the journalist B. Mitov.

From insinuations and a threat to the independence of the prosecution service, all the way to an affront to the rule of law. The article about Betsova, that is. It creates a risk for the rule of law.

From there the path is open: a risk for the rule of law means Article 7 TEU.

They have also found a second pretext: the political positions of a political leader, Hristo Ivanov, on reforming the prosecution service. Article 7 TEU for him too.

Article 7 TEU addresses a situation in which there is a real risk of a Member State committing a breach of the EU's founding values, or in which such a breach has occurred.

Invoking Article 7 TEU is strange in the context of everything that has already been written about Article 7. It is the authorities, and the decisions of the authorities, that have led to debate about possibly applying Article 7, not an article by a journalist or a statement by an extra-parliamentary opposition political leader.

And the ending of the declaration is extremely interesting: each of its two paragraphs reflects, in my view, one mistaken decision based on a mistaken assessment of the legal and factual reality (I think the second decision, to distribute the declaration, is mistaken because the first one, the reference to Article 7 TEU, strikes me as compromising for its authors):

We call on the leadership of the Prosecutor's Office of the Republic of Bulgaria to organize a national meeting, with the participation of the administrative heads, prosecutors, and investigators, devoted to the topic “Defending the independence of the Bulgarian prosecution service with a view to preventing the risk of a serious breach of the rule of law under Article 7 of the Treaty on European Union”.

We are sending this declaration to the European Commission, the European Parliament, the embassies of all European Union Member States in Bulgaria, the embassy of the United States in Bulgaria, and the embassy of the United Kingdom in Bulgaria.

New – High-Performance HDD Storage for Amazon FSx for Lustre File Systems

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/new-high-performance-hdd-storage-for-amazon-fsx-for-lustre-file-systems/

Many workloads, such as genome analysis, training of machine learning models, High Performance Computing (HPC), and analytics applications depend on multiple compute instances accessing the same set of data. For these workloads, clusters of compute instances are commonly connected to a high-performance shared file system. Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance shared file system. And today we’re announcing new HDD storage options for FSx for Lustre that reduce storage costs by up to 80% for throughput-intensive workloads that don’t require the sub-millisecond latencies of SSD storage.

Customers can achieve up to tens of gigabytes of throughput per second while lowering their storage costs for workloads where throughput is the dominant performance attribute. Video rendering and financial simulations are two examples of these throughput-intensive workloads.

This announcement includes two new HDD-based storage options which are optimized for reading and writing sequential file data. One offers 12 MB/sec of baseline throughput per TiB of storage and the other offers 40 MB/sec of baseline throughput per TiB of storage, and both allow you to burst to six times those throughput levels. To increase performance for frequently accessed files, you can also provision an SSD cache that is automatically sized to 20% of your HDD file system storage capacity. On file systems that are provisioned with an SSD cache, files read from the cache are served with sub-millisecond latencies.
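
The numbers above combine straightforwardly. This sketch assumes what the post states: baseline throughput scales linearly with capacity, burst is six times baseline, and the optional SSD cache is sized at 20% of the HDD storage capacity.

```typescript
// Back-of-the-envelope FSx for Lustre HDD estimates, based on the figures
// quoted in this post (not an official sizing tool).
function fsxHddEstimate(capacityTiB: number, perUnitMBps: 12 | 40) {
  const baselineMBps = capacityTiB * perUnitMBps; // baseline scales with capacity
  return {
    baselineMBps,
    burstMBps: baselineMBps * 6,       // burst up to six times baseline
    ssdCacheTiB: capacityTiB * 0.2,    // optional cache: 20% of HDD capacity
  };
}

// A 100 TiB file system at 40 MB/s per TiB:
// fsxHddEstimate(100, 40)
//   => { baselineMBps: 4000, burstMBps: 24000, ssdCacheTiB: 20 }
```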

The new FSx file systems are composed of multiple HDD-based storage servers and a single SSD-based metadata server. The SSD storage on the metadata servers ensures that all metadata operations, which represent the majority of file system operations, are delivered with sub-millisecond latencies.

HDD performance increases with storage capacity, making it easy to scale out your storage solution without encountering file system bottlenecks. Here’s a summary of the performance specifications for both the new HDD storage options and the existing SSD storage options.

Quick Guide

Traditionally, operating and scaling high performance file systems was costly and time consuming. Now with just a few clicks anyone can use FSx for Lustre for any compute workload. Launching the HDD-based file system is easy. Simply open the management console and click the Create file system button.

Choose FSx for Lustre and click Next.

FSx for Lustre offers two deployment types: persistent and scratch. HDD storage is available in persistent mode, which is designed for longer-term storage and workloads; data is replicated, and file servers are replaced if they fail. The scratch type is ideal for temporary storage and shorter-term processing of data; data is not replicated and does not persist if a file server fails. You can find more detail on the difference between the two deployment options in this blog article.

Once you choose HDD as the Storage Type, you can select 12 or 40 MB/s per TiB for the Throughput per unit of storage. You can also add the SSD cache to accelerate file access by choosing “Read-only SSD cache” as Drive Cache Type.

You can also create a file system with the AWS CLI:

$ aws fsx create-file-system \
    --file-system-type LUSTRE \
    --storage-type HDD \
    --storage-capacity <capacity> \
    --subnet-ids subnet-<your-subnet-id> \
    --lustre-configuration DeploymentType=PERSISTENT_1,PerUnitStorageThroughput=<12 or 40>,DriveCacheType=<NONE or READ>

For PerUnitStorageThroughput=12, acceptable values of storage capacity are multiples of 6000.
For PerUnitStorageThroughput=40, acceptable values of storage capacity are multiples of 1800.
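
The capacity rule above can be checked mechanically. This sketch encodes only the two multiples quoted here and assumes capacity is expressed in the units the CreateFileSystem API accepts (GiB).

```typescript
// Validate an FSx for Lustre HDD storage capacity against the rule in
// this post: multiples of 6,000 at 12 MB/s/TiB, multiples of 1,800 at
// 40 MB/s/TiB. Illustrative only; the service performs its own validation.
function isValidHddCapacity(capacityGiB: number, perUnitMBps: 12 | 40): boolean {
  const step = perUnitMBps === 12 ? 6000 : 1800;
  return capacityGiB > 0 && capacityGiB % step === 0;
}

// isValidHddCapacity(6000, 12)  => true
// isValidHddCapacity(1800, 12)  => false (must be a multiple of 6000)
```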

Available Today

The new HDD storage options are available in all AWS Regions where Amazon FSx for Lustre is available. Please visit our web site for more details.

–  Kame;


Migrating AWS Lambda functions to Amazon Linux 2

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/migrating-aws-lambda-functions-to-al2/

You can now use the latest version of any of the AWS Lambda runtimes on Amazon Linux 2 (AL2). End-of-life of standard support for Amazon Linux (AL1 for simplicity in this post) is coming in December 2020. As a result, AWS is providing a path for customers to migrate current and future workloads to AL2-supported runtimes.

This blog post covers:

  • New runtimes for AL2
  • AL1 end-of-life schedule
  • Legacy runtime end-of-life schedules

New runtimes

The choice to run a Lambda function on AL1 or AL2 is based upon the runtime. With the addition of the java8.al2 and provided.al2 runtimes, it is now possible to run Java 8 (Corretto), Go, and custom runtimes on AL2. It also means that the latest version of all supported runtimes can now run on AL2.

The following shows how the runtimes are mapped to Amazon Linux versions:

Runtime   | Amazon Linux                    | Amazon Linux 2 (AL2)
Node.js   |                                 | nodejs12.x, nodejs10.x
Python    | python3.7, python3.6, python2.7 | python3.8
Java      | java                            | java11 (Corretto 11), java8.al2 (Corretto 8)
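
As a quick sanity check, the AL2 column (plus the provided.al2 runtime discussed below) can be captured as a small lookup. This covers only the runtime identifiers mentioned in this post, not the full Lambda runtime matrix.

```typescript
// AL2-based runtime identifiers mentioned in this post. Illustrative
// subset; consult the Lambda documentation for the complete list.
const al2Runtimes = new Set<string>([
  'nodejs12.x',
  'nodejs10.x',
  'python3.8',
  'java11',
  'java8.al2',
  'provided.al2',
]);

function runsOnAl2(runtime: string): boolean {
  return al2Runtimes.has(runtime);
}

// runsOnAl2('python3.8') => true
// runsOnAl2('python2.7') => false (AL1-based)
```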

Java 8 (Corretto)

Amazon Corretto 8 is a production-ready distribution of the Open Java Development Kit (OpenJDK) 8 and comes with long-term support (LTS). AWS runs Corretto internally on thousands of production services. Patches and improvements in Corretto allow AWS to address real-world service concerns and meet heavy performance and scalability demands.

Developers can now take advantage of these improvements as they develop Lambda functions by using the new Java 8 (Corretto) runtime of java8.al2. You can see the new Java runtimes supported in the Lambda console:

Console: choosing the Java 8 (Corretto) runtime


Or, in an AWS Serverless Application Model (AWS SAM) template:

    HelloWorldFunction:
      Type: AWS::Serverless::Function
      Properties:
        CodeUri: HelloWorldFunction
        Handler: helloworld.App::handleRequest
        Runtime: java8.al2

Custom runtimes

The custom runtime for Lambda feature was announced at re:Invent 2018. Since then, developers have created custom runtimes for PHP, Erlang/Elixir, Swift, COBOL, Rust, and many others. Until today, custom runtimes have only used the AL1 environment. Now, developers can choose to run custom runtimes in the AL2 execution environment. To do this, select the provided.al2 runtime value in the console when creating or updating your Lambda function:

Console: choosing the custom runtime


Or, in an AWS SAM template:

    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-world/
      Handler: my.bootstrap.file
      Runtime: provided.al2


With the addition of the provided.al2 runtime option, Go developers can now run Lambda functions on AL2. As one of the later-supported runtimes for Lambda, Go is implemented differently from other native runtimes. Under the hood, Go is treated as a custom runtime and runs accordingly. A Go developer can take advantage of this by choosing the provided.al2 runtime and providing the required bootstrap file.

Using SAM Build to build AL2 functions

With the new sam build options, building a Go function for the provided.al2 runtime is easily accomplished with the following steps:

  1. Update the AWS Serverless Application Model template to the new provided.al2 runtime. Add a Metadata section that sets BuildMethod to makefile.
        Type: AWS::Serverless::Function
        Properties:
          CodeUri: hello-world/
          Handler: my.bootstrap.file
          Runtime: provided.al2
        Metadata:
          BuildMethod: makefile
  2. Add a Makefile to the project with a build target named build-<FunctionLogicalId> (build-HelloWorldFunction is assumed here):
      build-HelloWorldFunction:
          GOOS=linux go build
          cp hello-world $(ARTIFACTS_DIR)/bootstrap
  3. Use the sam build command.

    Example: sam build


A working sample Go application on AL2 can be found here: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/go-al2.

Amazon Linux end-of-life timeline

With the latest runtimes now available on AL2, we are encouraging developers to begin migrating Lambda functions from AL1-based runtimes to AL2-based runtimes. By starting this process now, Lambda functions are running on the latest long-term supported environment.

With AL1, that long-term support is coming to an end. The latest version of AL1, 2018.03, had an original end-of-life date set for June 30, 2020. However, AWS extended this date until December 30, 2020. On this date, AL1 will transition from long-term support (LTS) to a maintenance support period lasting until June 30, 2023. During the maintenance support period, AL1 receives critical and important security updates for a reduced set of packages.

Support timeline


However, AL2 is scheduled for LTS until June 30, 2023, and provides the following support:

  1. Security updates and bug fixes for all packages in core.
  2. Maintained user-space application binary interface (ABI) compatibility for core packages.

Legacy runtime end-of-life schedules

As shown in the preceding chart, some runtimes are still mapped to AL1 host operating systems. AWS Lambda is committed to supporting runtimes through their long-term support (LTS) window, as specified by the language publisher. During this maintenance support period, Lambda provides base operating system and patching for these runtimes. After this period, runtimes are deprecated.

According to our runtime support policy, deprecation occurs in two phases:

  1. Phase 1: you can no longer create functions that use the deprecated runtime. For at least 30 days, you can continue to update existing functions that use the deprecated runtime.
  2. Phase 2: both function creation and updates are disabled permanently. However, the function continues to be available to process invocation events.

Based on this timeline and our commitment to supporting runtimes through their LTS, the following schedule is followed for the deprecation of AL1-based runtimes:

Runtime       | End of support
python3.7     | June 2023
python3.6     | December 2021
python2.7     | Current plans to support until AL1 end-of-life
ruby2.5       | Supported until March 2021
java8         | Supported until March 2022
go1.x         | Each major Go release is supported until there are two newer Go releases
dotnetcore2.1 | Supported until August 2021
provided      | Supported until December 2020 with AL1


AWS is committed to helping developers build their Lambda functions with the latest tools and technology. This post covers two new Lambda runtimes that expand the range of available runtimes on AL2. I discuss the end-of-life schedule of AL1 and why developers should start migrating to AL2 now. Finally, I discuss the remaining runtimes and their plan for support until deprecation, according to the AWS runtime support policy.

Happy coding!

Organize and share your content with folders in Amazon QuickSight

Post Syndicated from Jose Kunnackal original https://aws.amazon.com/blogs/big-data/organize-and-share-your-content-with-folders-in-amazon-quicksight/

Amazon QuickSight Enterprise Edition now supports folders for organizing and sharing content. Folders in QuickSight come in two types:

  • Personal folders – Allow individual authors and administrators to organize assets for their personal ease of navigation and manageability
  • Shared folders – Allow authors and administrators to define folder hierarchies that they can share across the organization and use to manage user permissions and access to dashboards, analyses, and datasets

You can access folders directly from shortcuts on the new QuickSight home page (see the following screenshot). In this post, we take a deeper look at folders and how you can implement this in your QuickSight account.

Asset permissions and folders

Before we dive into how the two types of folders work, let’s understand how asset permissions work in QuickSight. QuickSight assets (dashboards, analyses, and datasets) are created by authors or admins and reside in the cloud; by default, an asset is visible in the UI only to its owner, which in this case is the creator of the asset. The owner can share the asset with other users (authors or admins, or, in the case of dashboards, readers) or groups of users, who can then be given viewer or owner access.

Previously, these flows meant that admins and authors with hundreds of assets had to manage permissions for users and groups individually. There was no hierarchical structure for easily navigating and discovering the key assets available. We built personal folders to meet the organizational needs of authors and admins, while shared folders provide easier bulk permissions management for authors and easier discovery of assets for both authors and readers.

Personal folders are available to all authors and admins in QuickSight Enterprise Edition. You can create these folders within your user interface and add assets in them. Personal folders aren’t visible to other users within the account, and they don’t affect the permissions of any objects placed within. This means that if you create a personal folder called Published dashboards and add a dashboard to it, there are no changes to user permissions in the dashboard on account of its addition to this folder. An important difference here is that unlike traditional folders, QuickSight allows you to place the same asset in multiple folders, which avoids the need to replicate the same asset in different folders. This allows you to update one time and make sure all your stakeholders get the latest information.

The following screenshot shows the My folders page on the QuickSight console.

Shared folders in QuickSight are visible to permissioned users across author, admin, and reader roles in QuickSight Enterprise Edition. Top (root)-level shared folders can only be created by admins of the QuickSight account, who can share these with other users or groups. When sharing, folders offer two levels of permissions:

  • Owner access – Allows admins or authors with access to the folder to add and remove assets (including subfolders), modify the folder itself, and share as needed with users or groups.
  • Viewer access – Restricts users to only viewing the folder and contents within, including subfolders. Readers can only be assigned viewer access, and can see the Shared folders section when at least one folder is shared with them.
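To make the two permission levels concrete, here is a minimal Python sketch of how the permission entries for QuickSight's UpdateFolderPermissions API (available in boto3 as `quicksight.update_folder_permissions`) could be assembled. The `quicksight:*` action strings and the ARN below are illustrative assumptions, not an authoritative list.

```python
# Sketch of GrantPermissions entries for QuickSight folder sharing.
# The quicksight:* action strings below are illustrative assumptions.
VIEWER_ACTIONS = ["quicksight:DescribeFolder"]
OWNER_ACTIONS = VIEWER_ACTIONS + [
    "quicksight:CreateFolder",
    "quicksight:UpdateFolder",
    "quicksight:DeleteFolder",
    "quicksight:UpdateFolderPermissions",
]

def grant(principal_arn, role):
    """Build one permission entry for a user or group ARN."""
    actions = OWNER_ACTIONS if role == "owner" else VIEWER_ACTIONS
    return {"Principal": principal_arn, "Actions": actions}

# e.g. quicksight.update_folder_permissions(..., GrantPermissions=[entry])
entry = grant(
    "arn:aws:quicksight:us-east-1:111122223333:group/default/readers",
    "viewer")
print(entry["Actions"])  # ['quicksight:DescribeFolder']
```

Granting a group viewer access this way covers every reader in the group at once, which is the bulk-permissions benefit described above.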

The following screenshot shows the Shared folders page.

The following screenshot shows the Share folder pop-up window, which you use to choose who to share folders with.

Permissions granted to a user or group at a parent folder level are propagated to subfolders within, which means that owners of a parent folder have access to subfolders. As a result, it’s best to model your permissions tree and folder structure before implementing and sharing folders in your account. Users who are to be restricted to specific folders are best granted access at the lowest level possible.

Folder permissions are currently also inherited by the assets within. For example, if a dashboard is placed in a shared folder, and Sally is granted access to the folder as an owner, Sally now has ownership over the folder and the dashboard. This model allows you to effectively use folders to manage shared permissions across thousands of users without having to implement this on a per-user or per-asset level.

For example, a team of 10 analysts could have owner permissions to a shared folder, which allows them to own both the folder and contents within, while thousands of other users (readers, authors, and admins) can be granted viewer permissions to the folder. This ensures that permissions management for these viewers can be done by the one-time action of granting them viewer permissions over the folder, instead of granting these permissions to users and groups within each dashboard. Permissions applied at the individual asset level continue to be enforced, and the final permissions of a user is the combination of the folder and individual asset permissions (whichever is higher).

Shared folders also enforce a uniqueness check over the folder path, which means that you can’t have two folders that have the same name at the same level in the folder tree. For example, if the admin creates /Oktank/ and shares with Tom and Sally as owners, and Tom creates /Oktank/Marketing/, Sally can no longer create a folder with the name Marketing. She should coordinate with Tom on permissions and get Tom to share this folder as an owner so that she can also contribute to the marketing assets. For personal folders (and for other asset types including dashboards, analyses, and datasets), QuickSight doesn’t require such uniqueness.

With QuickSight Enterprise Edition, dashboards, analyses, and datasets—whether owned by a user or shared with them—exist within the user’s QuickSight account and can be accessed via the asset-specific details page or search. All assets continue to be displayed via these pages, while those added to specific folders become visible via the folders view.

Use case: Oktank Analytics

Let’s put this all together and look from the lens of how a fictional customer, Oktank Analytics, can set up shared folders within their account. Let’s assume that Oktank has three departments: marketing, sales, and finance, with the sales team subdivided into US and EU orgs. Each of these departments and sub-teams has their own set of analysts that build and manage dashboards, and departmental users that expect to see data pertaining to their functional area. Oktank also has C-level executives that need access to dashboards from each department. Finally, QuickSight administrators oversee the overall business intelligence solution.

To implement this in QuickSight and provide a scalable model, the admin team first creates the top-level folder /Oktank/ and grants viewer access to the C-level executives. This grants the leadership team access to all subfolders underneath, making sure that there are no access issues. Access is also limited to viewer, so that the leadership has visibility but can’t accidentally make any changes.

Next, the admin team creates subfolders for marketing, sales, and finance. Both the admins and C-level executives have access to these folders (as owner and viewer, respectively) due to their permissions on the top-level folder.
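The hierarchy the admin team builds can also be scripted. The following Python sketch assembles parameter sets for QuickSight's CreateFolder API (callable in boto3 as `quicksight.create_folder`); the account ID, folder IDs, and ARN format are placeholders for illustration.

```python
ACCOUNT = "111122223333"  # placeholder account ID

def folder_params(folder_id, name, parent_arn=None):
    """Parameters for one create_folder call; the root has no parent."""
    params = {"AwsAccountId": ACCOUNT, "FolderId": folder_id,
              "Name": name, "FolderType": "SHARED"}
    if parent_arn:
        params["ParentFolderArn"] = parent_arn
    return params

root = folder_params("oktank", "Oktank")
root_arn = f"arn:aws:quicksight:us-east-1:{ACCOUNT}:folder/oktank"
subfolders = [folder_params(name.lower(), name, root_arn)
              for name in ("Marketing", "Sales", "Finance")]
print([f["Name"] for f in subfolders])  # ['Marketing', 'Sales', 'Finance']
```

Because permissions propagate down the tree, sharing only the root folder with the C-level executives is enough to give them viewer access to every subfolder created this way.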

The following diagram illustrates this folder hierarchy.

Oktank admins grant owner permissions to the Marketing folder to the marketing analyst team (via QuickSight groups). This allows the analyst team to create subfolders that match expectations of their users and leadership. To streamline access, the marketing analyst team creates two subfolders: Assets and Dashboards. The marketing analyst team uses Assets (/Oktank/Marketing/Assets/) to store datasets and analyses that they need to build and manage dashboards. Because all the marketing analysts have access to this folder, critical assets aren’t disrupted when an analyst is on vacation or leaves the company. Marketing analysts have owner permissions, the admin team has owner permissions, and C-level executives have viewer permissions.

The marketing analyst team uses the Dashboards folder to store dashboards that are shared to all marketing users (via QuickSight groups). All marketing users are granted viewer permissions to this folder (/Oktank/Marketing/Dashboards/); marketing analysts grant themselves owner permissions while the admin team and C-level executives have owner and viewer permissions propagated. For marketing users, access to this folder means that all the dashboards relevant to their roles can be explored in /Oktank/Marketing/Dashboards/, which is available through the Shared Folders link on the home page. The marketing analyst team also doesn’t have to share these assets individually or worry about permissions being missed out for specific users or dashboards.

The sales team needs further division because US and EU have different teams and data. The admin team creates the Sales subfolder, and then creates US and EU subfolders. They grant US sales analysts owner access to the US subfolder (/Oktank/Sales/US/), which gives the analysts the ability to create subfolders and share with users as appropriate. This allows the US sales analyst team to create /Oktank/Sales/US/Assets and /Oktank/Sales/US/Dashboards/. Similar to the marketing team, they can now store their critical datasets, analyses, and dashboards in the Assets folder, and open up the Dashboards folder to all US sales personnel, providing a one-stop shop for their users. The C-level executives have viewer access to these folders and can access these assets and anything added in the future.

Admins and C-level executives see the following hierarchy in their shared folder structure; admins have owner access to all, and C-level executives have viewer access:


Oktank > Marketing

Oktank > Marketing > Assets

Oktank > Marketing > Dashboards

Oktank > Sales

Oktank > Sales > US

Oktank > Sales > US > Assets

Oktank > Sales > US > Dashboards

Oktank > Sales > EU

Oktank > Sales > EU > Assets

Oktank > Sales > EU > Dashboards

Oktank > Finance

Oktank > Finance > Assets

Oktank > Finance > Dashboards

A member of the marketing analyst team sees the following:


Oktank > Marketing

Oktank > Marketing > Assets

Oktank > Marketing > Dashboards

A member of the Oktank marketing team (e.g., marketing manager) sees the following:


Oktank > Marketing

Oktank > Marketing > Dashboards

A member of the US Sales analyst team sees the following:


Oktank > Sales

Oktank > Sales > US

Oktank > Sales > US > Assets

Oktank > Sales > US > Dashboards

A member of the Oktank US Sales team (e.g., salesperson) sees the following:


Oktank > Sales

Oktank > Sales > US

Oktank > Sales > US > Dashboards


QuickSight folders give admins and authors a powerful way to organize, manage, and share content, and give readers an easy way to discover it. Folders are now generally available in QuickSight Enterprise Edition in all supported QuickSight Regions.


About the Author

Jose Kunnackal John is principal product manager for Amazon QuickSight, AWS’ cloud-native, fully managed BI service. Jose started his career with Motorola, writing software for telecom and first responder systems. Later he was Director of Engineering at Trilibis Mobile, where he built a SaaS mobile web platform using AWS services. Jose is excited by the potential of cloud technologies and looks forward to helping customers with their transition to the cloud.

ALOHAnet Introduced Random Access Protocols to the Computing World

Post Syndicated from Joanna Goodrich original https://spectrum.ieee.org/the-institute/ieee-history/alohanet-introduced-random-access-protocols-to-the-computing-world

THE INSTITUTE Until the 1970s, far-flung computers generally connected to one another through telephone networks. In 1968 researchers at the University of Hawaii began to investigate if radio communications could be used to link multiple computers at once.

The team introduced its Additive Links On-line Hawaii Area network, ALOHAnet, in June 1971. The network used a random access protocol, which allowed computers to transmit packets over a shared channel as soon as they had information to send. ALOHAnet was the first use of wireless communications for a data network. Its protocol is now widely used in nearly all forms of wireless communications.

“We [the team] thought that what we were doing would be important, but I don’t think any of us thought it would be as important as it turned out to be,” IEEE Life Fellow Norman Abramson, who led the team, said in a 2009 interview about ALOHAnet in IEEE Communications Magazine. “It exceeded my wildest expectations.”

ALOHAnet is now an IEEE Milestone. Its nomination was sponsored by the IEEE Hawaii Section. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

The dedication ceremony, originally planned for June 2020 at the University of Hawaii at Manoa, in Honolulu, was postponed until next year due to the COVID-19 pandemic.


The University of Hawaii used ALOHAnet to connect its campuses to one another. Each campus had a small interface computer—a hub machine—that used two distinct radio frequencies: an outbound channel and an inbound channel. In order to connect, one hub machine broadcast packets to another computer on the outbound channel, and that computer sent data packets to the first hub machine on the inbound channel.

If data was successfully received at the hub, a short acknowledgment packet was sent back. If an acknowledgment was not received by the computer, it would automatically retransmit the data packet after waiting for a randomly selected amount of time. The mechanism detected and corrected collisions that were created when the machine and the computer attempted to send a packet at the same time, according to the Engineering and Technology History Wiki entry about the Milestone.
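The retransmission rule described above can be sketched in a few lines of Python. This is a toy model — the collision function, retry limit, and backoff range are assumptions for illustration, not the original ALOHAnet implementation.

```python
import random

def aloha_send(collided, max_attempts=5, rng=random.Random(0)):
    """Try to deliver one packet: send, and if no acknowledgment
    arrives (a collision), wait a random time and retransmit.
    `collided(attempt)` models whether another station transmitted
    at the same moment. Returns the successful attempt number,
    or None if every attempt collided."""
    for attempt in range(1, max_attempts + 1):
        if not collided(attempt):
            return attempt        # acknowledgment received
        rng.uniform(0.0, 1.0)     # randomly selected backoff delay
    return None

# First two attempts collide, the third gets through.
print(aloha_send(lambda a: a < 3))  # 3
```

The random backoff is the key idea: if two stations collide and both retransmitted after a fixed delay, they would collide forever; random delays make a repeat collision unlikely.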

Computer networks were not well understood at the time, and it took several years for the researchers to perfect their design.

“In a sense, [the acknowledgement mechanism is] an obvious thing to do,” Abramson said in the article. “But when you start off on this kind of research project, some of the obvious things don’t appear as obvious as they do a little later.”

ALOHAnet was connected to ARPANET via satellite in December 1972 under the guidance of the U.S. Defense Advanced Research Projects Agency. The connection allowed for reliable computer communications throughout the United States, according to the Wiki entry.

ALOHAnet used a VHF transponder in 1973 to connect to an experimental NASA satellite in order to demonstrate PacNet, an international satellite data network. The demonstration connected the NASA facility in California with five universities in Australia, Japan, and the United States, the Wiki entry says.

The Milestone plaque is to be displayed at the entrance of Holmes Hall at the University of Hawaii at Manoa, which was where the technology was developed, tested, and demonstrated. The plaque reads:

In June 1971 the ALOHA packet radio data network began providing inter-island access to computing facilities at the University of Hawaii. ALOHAnet was the first to demonstrate that communication channels could be effectively and efficiently shared on a large scale using simple random access protocols. It led directly to the development of Ethernet and personal wireless communication technologies.

This article was written with assistance from the IEEE History Center, which is funded by donations to the IEEE Foundation’s Realize the Full Potential of IEEE campaign.

Field Notes: Working with Route Tables in AWS Transit Gateway

Post Syndicated from Prabhakaran Thirumeni original https://aws.amazon.com/blogs/architecture/field-notes-working-with-route-tables-in-aws-transit-gateway/

An AWS Transit Gateway enables you to attach Amazon VPCs, AWS Site-to-Site (S2S) VPN connections, and AWS Direct Connect connections in the same Region, and route traffic between them. Transit Gateways are designed to be highly scalable and resilient. You can attach up to 5,000 VPCs to each gateway, and each attachment can handle up to 50 Gbps of bursty traffic.

In this post, I explain how packets flow when the source and destination networks are associated with the same or with different AWS Transit Gateway route tables. An AWS Transit Gateway route table includes dynamic routes, static routes, and blackhole routes. This routing operates at layer 3, where IP packets are sent to a specific next-hop attachment based on their destination IP addresses. You can create multiple route tables to separate network access. AWS Transit Gateway controls how traffic is routed to all the connected networks using route tables.

Architecture overview

To illustrate how AWS Transit Gateway route tables work, consider the following architecture with resources in multiple AWS accounts (Prod, Pre-Prod, Staging, Dev, Network Service) and all the accounts are under the same AWS Organization.

Figure 1 – How different AWS accounts are connected via AWS Transit Gateway


I created a transit gateway in the Network Service account and shared it with the other AWS accounts (Prod, Pre-Prod, Staging, Dev) in the AWS Organization using AWS Resource Access Manager (RAM).

AWS RAM enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. I created three Transit Gateway route tables: the Prod route table, the Network Service route table, and the Staging route table.
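As a sketch, the RAM share could be created with boto3's `ram.create_resource_share` using parameters like the following; the ARNs are placeholders.

```python
def share_params(tgw_arn, org_arn):
    """Parameters for ram.create_resource_share (ARNs are placeholders)."""
    return {
        "name": "tgw-org-share",
        "resourceArns": [tgw_arn],
        "principals": [org_arn],           # an organization, OU, or account
        "allowExternalPrincipals": False,  # keep the share inside the org
    }

params = share_params(
    "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0abc",
    "arn:aws:organizations::111122223333:organization/o-example")
print(params["name"])  # tgw-org-share
```

Setting `allowExternalPrincipals` to `False` restricts the share to accounts inside the Organization, which matches the setup described here.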

To attach VPCs, VPN and Direct Connect connections in the same region to Transit Gateway, you need to create Transit Gateway attachments. The process is as following:

  • When you attach VPC to Transit Gateway, you must specify one subnet in each Availability Zone to be used by Transit Gateway to route traffic.
  • Create a connectivity subnet in each VPC and use these connectivity subnets for the Transit Gateway attachment.
  • Transit Gateway places a network interface in the connectivity subnet using one IP address from the subnet. Specifying one subnet for an Availability Zone enables traffic to reach resources in other subnets in that Availability Zone.
  • If an Availability Zone is not associated when you create Transit Gateway attachments to attach the VPC, resources in that Availability Zone cannot reach the Transit Gateway.
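The attachment steps above map to a single API call. Here is a hedged sketch of the parameters for boto3's `ec2.create_transit_gateway_vpc_attachment`, with placeholder IDs and one connectivity subnet per Availability Zone.

```python
def vpc_attachment_params(tgw_id, vpc_id, subnet_ids):
    """Build parameters for attaching a VPC to a Transit Gateway.
    subnet_ids holds one connectivity subnet per AZ the TGW should reach."""
    if not subnet_ids:
        raise ValueError("specify at least one connectivity subnet")
    return {
        "TransitGatewayId": tgw_id,
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
    }

params = vpc_attachment_params(
    "tgw-0abc", "vpc-0123",
    ["subnet-aaaa", "subnet-bbbb"])  # two AZs covered
print(len(params["SubnetIds"]))  # 2
```

Omitting an AZ from `SubnetIds` has the consequence noted above: resources in that AZ cannot reach the Transit Gateway.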

Note: You can have up to 20 route tables per Transit Gateway and 10,000 routes per Transit Gateway.

Resources in AWS accounts and on-premises

An AWS S2S VPN connects the on-premises network to the TGW in the Network Service account; the VPN connection is attached as TGW Attachment-4. Following is the network information for the on-premises network.

The following table shows the VPC CIDR blocks in the different AWS accounts and the respective Transit Gateway attachments. There is also one test instance in each VPC in every AWS account.

VPCs, VPN/Direct Connect connections can dynamically propagate routes to the Transit Gateway route table. You can enable or disable route propagation for each Transit Gateway attachment. For a VPC attachment, the CIDR blocks of the VPC are propagated to the Transit Gateway route table. For a VPN/Direct Connect connection attachment, routes in the Transit Gateway route table propagate to your on-premises router/firewall using Border Gateway Protocol (BGP). The prefixes advertised over BGP session from on-premises router/firewall are propagated to the Transit gateway route table.
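A propagation is enabled per attachment, per route table. The following is a minimal sketch of the parameters for boto3's `ec2.enable_transit_gateway_route_table_propagation`; the IDs are placeholders.

```python
def propagation_params(route_table_id, attachment_id):
    """Parameters to propagate one attachment's routes into one
    Transit Gateway route table."""
    return {
        "TransitGatewayRouteTableId": route_table_id,
        "TransitGatewayAttachmentId": attachment_id,
    }

# e.g. ec2.enable_transit_gateway_route_table_propagation(**params)
params = propagation_params("tgw-rtb-prod", "tgw-attach-5")
print(sorted(params))  # ['TransitGatewayAttachmentId', 'TransitGatewayRouteTableId']
```

For a VPC attachment this propagates the VPC CIDR blocks; for a VPN or Direct Connect attachment it propagates the BGP-learned prefixes, as described above.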

Transit Gateway Route Table Association

Transit Gateway attachments are associated with a Transit Gateway route table. An attachment can be associated with only one route table; however, an attachment can propagate its routes to one or more Transit Gateway route tables.
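The one-table-per-attachment rule can be modeled directly. This toy Python sketch enforces it; the real operation is a single call to `ec2.associate_transit_gateway_route_table` per attachment.

```python
def associate(associations, route_table_id, attachment_id):
    """Associate an attachment with a route table. An attachment may
    belong to only one route table at a time."""
    if attachment_id in associations:
        raise ValueError(f"{attachment_id} is already associated")
    associations[attachment_id] = route_table_id

assoc = {}
associate(assoc, "tgw-rtb-prod", "tgw-attach-1")
associate(assoc, "tgw-rtb-prod", "tgw-attach-2")
print(assoc["tgw-attach-2"])  # tgw-rtb-prod
```

A second association attempt for the same attachment fails, mirroring the constraint that an attachment determines which route table evaluates its incoming traffic.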

The following table shows route table and associated Transit Gateway attachments.

To show how packets flow within a single Transit Gateway route table and across different route tables, I walk through three scenarios.

Important: For all scenarios, I summarized the network and added a static route in each VPC's route table, so that you don't need to change the subnet route tables every time you want to access resources in a different AWS account. Traffic is instead controlled from the TGW route tables, which reduces operational overhead and lets you manage all connectivity centrally.
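The summarized static route in each VPC route table corresponds to one `ec2.create_route` call. The following is a sketch; the summary CIDR and IDs are placeholders, since the actual summary depends on your addressing plan.

```python
def vpc_route_params(route_table_id, summary_cidr, tgw_id):
    """Route all traffic for the summarized range to the Transit Gateway."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": summary_cidr,
        "TransitGatewayId": tgw_id,
    }

params = vpc_route_params("rtb-0123", "10.0.0.0/8", "tgw-0abc")
print(params["DestinationCidrBlock"])  # 10.0.0.0/8
```

With one summary route per VPC in place, all per-network access decisions happen in the TGW route tables rather than in each subnet route table.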

Scenario 1: Packet flow between Prod account and Pre-Prod Account

Suppose you SSH to Instance-2 (Pre-Prod) from Instance-1 (Prod). The following image shows the packet flow for the first two steps of the TCP three-way handshake between the instances.

Figure 2 – Packet flow between Prod and Pre-Prod account

AWS Security Groups and NACLs are configured to allow communication between both instances.

The packet's source IP address is Instance-1's and its destination IP address is Instance-2's.

  1. The SYN packet is validated against the VPC route table associated with Instance-1's subnet, which has a route for the destination network pointing to the Transit Gateway attachment. The packet is forwarded to the TGW attachment.
  2. The Transit Gateway receives the traffic on the Prod TGW route table, since the Transit Gateway attachment (TGW Attachment-1) is associated with the Prod TGW route table, and that table has a route for the destination network.
  3. The SYN packet is evaluated against the Prod Transit Gateway route table and forwarded to the Pre-Prod VPC. It is then evaluated against the VPC route table of the TGW attachment's subnet and the NACL of the connectivity subnet.
  4. The packet is forwarded from the Transit Gateway attachment ENI to Instance-2 after the NACL and Security Groups are evaluated. For the return traffic (SYN/ACK), the source and destination IP addresses are reversed.
  5. The SYN/ACK packet is evaluated against the VPC route table associated with Instance-2's subnet, which has a route for the destination network pointing to the TGW attachment. The packet is forwarded to the TGW attachment.
  6. The TGW receives the traffic on the Prod TGW route table, since the Transit Gateway attachment (TGW Attachment-2) is associated with the Prod TGW route table, and that table has a route for the destination network.
  7. The SYN/ACK packet is evaluated against the Prod TGW route table and forwarded to the Prod VPC. It is then evaluated against the VPC route table of the Transit Gateway attachment's subnet and its NACL.
  8. The SYN/ACK packet is forwarded from the TGW attachment ENI to Instance-1 after the NACL and Security Group inbound states are evaluated.

Result: Instance-1 successfully receives the SYN/ACK packet from Instance-2.

This is the packet flow when both the source and destination networks are associated with the same Transit Gateway route table.

Scenario 2: Packet flow between Staging Account and Pre-Prod account

The Pre-Prod account's Transit Gateway attachment (TGW Attachment-2) is associated with the Prod TGW route table, and the Staging account's TGW attachment (TGW Attachment-5) is associated with the Staging Transit Gateway route table in the Network Service account.

On a Transit Gateway route table, you can either add a static route or create a propagation from a Transit Gateway attachment so that its routes are propagated into the route table. In this post, I chose to create propagations.

On the Prod Transit Gateway route table, you create a propagation and choose the Staging attachment (TGW Attachment-5), so the Staging VPC CIDR block is propagated into the Prod TGW route table. Similarly, to propagate the Pre-Prod VPC CIDR block into the Staging route table, you would create a propagation in the Staging Transit Gateway route table and choose TGW Attachment-2.

If you try to ping Instance-4 (Staging) from Instance-2 (Pre-Prod), the ICMP echo reply doesn't reach Instance-2 from Instance-4. Below is the packet flow for the ICMP echo request and echo reply between the instances.

Figure 3. Packet flow between Pre-Prod and Staging account

AWS Security Groups and NACLs are configured to allow communication between both instances.

The packet's source IP address is Instance-2's and its destination IP address is Instance-4's.

  1. The Echo-Request packet is validated against the VPC route table associated with Instance-2's subnet, which has a route for the destination network pointing to the Transit Gateway attachment. The packet is forwarded to the Transit Gateway attachment.
  2. The TGW receives the traffic on the Prod TGW route table, since the Transit Gateway attachment (TGW Attachment-2) is associated with the Prod TGW route table, and that table has a route for the destination network.
  3. The Echo-Request packet is evaluated against the Prod TGW route table and forwarded to the Staging VPC. It is then evaluated against the VPC route table of the Transit Gateway attachment's subnet and the NACL of the connectivity subnet.
  4. The packet is forwarded from the TGW attachment ENI to Instance-4 after the NACL and Security Groups are evaluated. For the return traffic (Echo-Reply), the source and destination IP addresses are reversed.
  5. The Echo-Reply packet is evaluated against the VPC route table associated with Instance-4's subnet, which has a route for the destination network pointing to the Transit Gateway attachment. The packet is forwarded to the Transit Gateway attachment.
  6. The Transit Gateway receives the traffic on the Staging Transit Gateway route table, since the Transit Gateway attachment (TGW Attachment-5) is associated with the Staging Transit Gateway route table. The Staging Transit Gateway route table does not have a route for the destination network.
  7. The Echo-Reply packet is evaluated against the Staging Transit Gateway route table, which has no route for the destination network, so the packet is not forwarded to the Pre-Prod VPC.

Result: Instance-2 does not receive the Echo-Reply packet from Instance-4, because the Staging TGW route table has no route for Instance-2's network.

For successful communication across Transit Gateway route tables, routes must be propagated into both the Prod and Staging Transit Gateway route tables.

Scenario 3: On-premises server needs to access resources in Dev AWS account.

The S2S VPN connection is configured as an attachment on the TGW (TGW Attachment-4) and is associated with the Network Service route table. The VPC in the Network Service account is also associated (TGW Attachment-3) with the Network Service route table, and its VPC CIDR block is propagated into the Network Service TGW route table.

The VPN tunnel is up and a Border Gateway Protocol (BGP) session is established. BGP neighbors exchange routes via update messages: the TGW advertised the Network Service VPC network to on-premises and received the on-premises network in return. The Network Service route table therefore has two propagated routes, one from the VPC attachment and one from the S2S VPN attachment.

Note: The TGW advertised only the Network Service VPC network to on-premises, since that was the only route propagated in the Network Service Transit Gateway route table.

Suppose you SSH to Instance-5 (Dev) from the on-premises server. Below is the packet flow for the first two steps of the TCP three-way handshake.

Figure 4. Packet flow between on-premises and Dev account

On the Network Service TGW route table, you create a propagation and choose TGW Attachment-6, so the Dev VPC route is propagated into the Network Service Transit Gateway route table. The BGP neighbor then advertises this network to on-premises via BGP update messages.

On the Staging TGW route table, you create a propagation and choose the VPN attachment (TGW Attachment-4), so the on-premises route is propagated into the Staging TGW route table.

You must also update the VPC route table in the Dev account: on the VPC console, choose Route Tables, select the route table belonging to the VPC, and add a route for the on-premises network with the Transit Gateway as the target.

Below is the packet flow from the on-premises workstation to Instance-5 in the Dev account. The AWS Security Group and NACL for Instance-5 allow the traffic, and the necessary TCP port is allowed on the on-premises router/firewall. The packet's source IP address is the server's and its destination IP address is Instance-5's.

  1. The SYN packet is validated against the on-premises router/firewall route table, which received the route via BGP. The packet is forwarded to the TGW attachment over the S2S tunnel.
  2. The TGW receives the traffic on the Network Service TGW route table, since the TGW attachment (TGW Attachment-4) is associated with the Network Service TGW route table, and that table has a route for the destination network.
  3. The SYN packet is evaluated against the Network Service TGW route table and forwarded to the Dev VPC. It is then evaluated against the VPC route table of the TGW attachment's subnet and the NACL of the connectivity subnet.
  4. The packet is forwarded from the TGW attachment ENI to Instance-5 after the NACL and Security Groups are evaluated. For the return traffic, the source and destination IP addresses are reversed.
  5. The SYN/ACK packet is evaluated against the VPC route table associated with Instance-5's subnet, which has a route for the destination network pointing to the Transit Gateway attachment. The packet is forwarded to the Transit Gateway attachment.
  6. The Transit Gateway receives the traffic on the Staging TGW route table, since the TGW attachment (TGW Attachment-6) is associated with the Staging TGW route table, and that table has a route for the destination network.
  7. The SYN/ACK packet is forwarded to the on-premises router/firewall over the S2S VPN tunnel, and the router/firewall forwards the packet to the workstation.

Result: The SYN/ACK packet is returned successfully.

AWS Transit Gateway also has an option to add a blackhole static route to a TGW route table, which prevents the source attachment from reaching a specific route. If a static route matches the CIDR of a propagated route, the static route is preferred over the propagated route.
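A blackhole route is an ordinary static route with the Blackhole flag set. The following sketch shows the parameters for boto3's `ec2.create_transit_gateway_route`; the CIDR and IDs are placeholders.

```python
def blackhole_route_params(route_table_id, cidr):
    """Drop all traffic destined for `cidr` from associated attachments."""
    return {
        "TransitGatewayRouteTableId": route_table_id,
        "DestinationCidrBlock": cidr,
        "Blackhole": True,
    }

params = blackhole_route_params("tgw-rtb-prod", "192.0.2.0/24")
print(params["Blackhole"])  # True
```

Because static routes take precedence over propagated routes with the same CIDR, a blackhole entry reliably cuts off a prefix even if an attachment is still propagating it.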


In this post, you learned how packets flow when the source and destination networks are associated with the same or with different TGW route tables. AWS Transit Gateway simplifies network architecture, reduces operational overhead, and centrally manages external connectivity. If you have a substantial number of VPCs, it makes it easier for the network team to manage access in a growing environment.

For further reading, review the AWS Transit Gateway User guide.

If you have feedback about this post, submit comments in the Comments section below.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.





Quickly build STIG-compliant Amazon Machine Images using Amazon EC2 Image Builder

Post Syndicated from Sepehr Samiei original https://aws.amazon.com/blogs/security/quickly-build-stig-compliant-amazon-machine-images-using-amazon-ec2-image-builder/

In this post, we discuss how to implement the operating system security requirements defined by the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs).

As an Amazon Web Services (AWS) customer, you can use Amazon Machine Images (AMIs) published by AWS or APN partners. These AMIs, which are owned and published by AWS, are pre-configured based on a variety of standards to help you quickly get started with your deployments while helping you follow your required compliance guidelines. For example, you can use AMIs that have been preconfigured for you per STIG standards. You can also use Amazon Elastic Compute Cloud (Amazon EC2) Image Builder to automate configuration of any custom images imported from an on-premises system.

Organizations of all sizes are moving more and more of their workloads to AWS. In most enterprises and organizations, often starting with an AMI with a known configuration is the best way to address the organization’s security requirements for operating system configuration. You can take advantage of the tools available in AWS to ensure this is a consistent and repeatable process.

If you want to use your own custom AMI, you can follow the steps in this post to see how to build a golden Windows operating system image that follows STIG compliance guidelines using Amazon EC2 Image Builder.

Image Builder

We understand that keeping server images hardened and up to date can be time consuming, resource intensive, and subject to human error if performed manually. Currently, customers either manually build the automation scripts to implement STIG security measures to harden the server image, or procure, run, and maintain tools to automate the process to harden the golden image.

Image Builder significantly reduces the effort of keeping images STIG-compliant and updated by providing a simple graphical interface, built-in automation to match the STIG requirements, and AWS-provided security settings. With Image Builder, there are no manual steps needed to update an image, nor do you have to build your own automation pipeline.

Customers can use Image Builder to build an operating system image for use with Amazon EC2, as well as on-premises systems. It simplifies the creation, maintenance, validation, sharing, and deployment of Linux and Windows Server images. This blog post discusses how to build a Windows Server golden image.

Image Builder is provided at no cost to customers and is available in all commercial AWS regions. You’re charged only for the underlying AWS resources that are used to create, store, and share the images.

What is a STIG?

STIGs are the configuration standards submitted by OS or software vendors to DISA for approval. Once approved, the configuration standards are used to configure security hardened information systems and software. STIGs contain technical guidance to help secure information systems or software that might otherwise be vulnerable to a malicious attack.

DISA develops and maintains STIGs and defines the vulnerability Severity Category Codes (CAT) which are referred to as CAT I, II, and III.

| Severity category code | DISA category code guidelines |
| --- | --- |
| CAT I | Any vulnerability, the exploitation of which will directly and immediately result in loss of confidentiality, availability, or integrity. |
| CAT II | Any vulnerability, the exploitation of which has a potential to result in loss of confidentiality, availability, or integrity. |
| CAT III | Any vulnerability, the existence of which degrades measures to protect against loss of confidentiality, availability, or integrity. |

For a complete list of STIGs, see Windows 2019, 2016, and 2012. How to View SRGs and STIGs provides instructions for viewing the lists.

Image Builder STIG components

To make your systems compliant with STIG standards, you must install, configure, and test a variety of security settings. Image Builder provides STIG components that you can leverage to quickly build STIG-compliant images on standalone servers by applying local Group Policies. The STIG components of Image Builder scan for misconfigurations and run a remediation script. Image Builder defines the STIG components as low, medium, and high, which align with DISA CAT I, II, and III respectively (with some exceptions as outlined in Windows STIG Components).

Building a golden Windows Server image using STIG-compliance guidelines

Image Builder can be used with the AWS Management Console, AWS CLI, or APIs to create images in your AWS account. In this example, we use the AWS console, which provides a step-by-step wizard covering the four steps to build a golden image that follows STIG compliance guidelines. A graphical representation of the process is provided below, followed by a description of each step.

Figure 1: Image Builder Process


Step 1: Define the source image

The first step is to define the base OS image to use as the foundation layer of the golden image. You can select an existing image that you own, an image owned by Amazon, or an image shared with you.

Define image recipe

Open the console and search for Image Builder service. Under EC2 Image Builder, select Recipe on the left pane. Select the Create recipe button on the top right corner. Enter a Name, Version, and Description for your recipe, as shown in Figure 2.

Figure 2: Name and describe the image recipe


Select source image

Select a source image for your golden image.

  1. Under Source image, select Windows as the image operating system.
  2. For this example, choose Select managed images. A managed image is an Image-Builder-managed image created by you, shared with you, or provided by AWS.
  3. Select Browse images to choose from available images. In the screenshot below, I’ve selected a Windows Server 2016 image provided by AWS.


Figure 3: Select source image


Step 2: Build components

You can create your own components using scripts to add or remove software or define the OS configuration along with the required answer files, scripts, and settings from registered repositories and Amazon Simple Storage Service (Amazon S3) buckets. AWS provides pre-defined components for regular updates as well as security settings: for example, STIG, Amazon Inspector and more.

Select Browse build components and then select the STIG component that has the latest version or the one that meets your requirements. You can choose more than one component to perform the desired changes to your golden image as shown in the screenshot below.

Figure 4: Select build components

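The recipe defined in steps 1 and 2 can also be expressed as infrastructure as code. Below is a minimal, hypothetical CloudFormation sketch of an image recipe that layers the AWS-managed Windows STIG component on top of a Windows Server 2016 base image; the ARNs are illustrative patterns and should be confirmed in your Region.

```yaml
Resources:
  StigWindowsRecipe:
    Type: AWS::ImageBuilder::ImageRecipe
    Properties:
      Name: windows-2016-stig-golden
      Version: 1.0.0
      # Assumption: AWS-provided Windows Server 2016 base image ("x.x.x" selects the latest version)
      ParentImage: !Sub arn:aws:imagebuilder:${AWS::Region}:aws:image/windows-server-2016-english-full-base-x86/x.x.x
      Components:
        # AWS-managed STIG component; the "high" component also applies the medium and low settings
        - ComponentArn: !Sub arn:aws:imagebuilder:${AWS::Region}:aws:component/stig-build-windows-high/x.x.x
```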

Step 3: Select tests

You can define your own tests based on the level of compliance required for your specific workload. You can also use AWS-provided tests to validate images before deployment. At the time of writing, the AWS-provided tests do not include pre-canned tests to validate STIG configuration. For Windows, custom tests are written in PowerShell. In the screenshot below, I’ve added an AWS-provided test to validate Windows activation.

Figure 5: Select tests


Once done, select Create Recipe.

Step 4: Create pipeline and distribute images

The last step triggers creation of the golden image and distributes the output AMI to selected AWS Regions and AWS accounts.

Create pipeline

  1. Select the recipe that we just created and select Create pipeline from this recipe from the Actions menu in the upper right corner.
    Figure 6: Select create pipeline from Actions menu


  2. Enter a pipeline Name and Description. For the IAM role, you can use the dropdown menu to select an existing IAM role. The best practice is to use an IAM role with least privileges necessary for the task.
    Figure 7: Pipeline details


    If you don’t want to use an existing IAM role, select Create new instance profile role and refer to the user guide to create a role. In the screenshot below, I’ve defined a custom policy called ImageBuilder-S3Logs that allows limited Amazon S3 operations. You can use an AWS managed policy to grant write access to S3, or customize the policy to fit your organization’s security requirements. If you customize the policy, the instance profile specified in your infrastructure configuration must have s3:PutObject permission for the target bucket. A sample Amazon S3 policy that grants write access to the imagebuilderlog bucket is provided below; change the bucket name if you use the sample policy.

    Figure 8: IAM policy for Amazon S3


        "Version": "2012-10-17",
        "Statement": [
                "Effect": "Allow",
                "Action": [
                "Resource": [

  3. Build a schedule to define the frequency at which the pipeline produces new images with the customization defined in steps 1 through 3. You can either run the pipeline manually or define a schedule using the schedule builder or a CRON expression in the Create Pipeline wizard.
  4. Infrastructure Settings allow Image Builder to launch an Amazon EC2 instance to customize the image. This step is optional, but configuring the infrastructure settings is recommended; if you don’t provide an entry, AWS chooses service-specific defaults. Infrastructure settings specify the infrastructure within which to build and test your image: the instance type, subnet, and security group to associate with the instance that Image Builder uses to capture and build the image.

    Image Builder requires communication with AWS Systems Manager (SSM) Service endpoint to capture and build the image. The communication can happen over public internet or using a VPC endpoint. In both cases, the Security Group must allow SSM Agent running on the instance to talk to Systems Manager. In this example, I’ve used SSM endpoint for the Image Builder instance to communicate with Systems Manager. This article provides details on how to configure endpoints and security group to allow SSM Agent communication with Systems Manager.

    Figure 9: Optional infrastructure settings

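As a sketch of how the pipeline and its schedule from steps 3 and 4 might look in CloudFormation, the hypothetical pipeline below builds on the first of every month; the recipe and infrastructure configuration resources are assumed to be defined elsewhere in the template.

```yaml
  GoldenImagePipeline:
    Type: AWS::ImageBuilder::ImagePipeline
    Properties:
      Name: windows-stig-monthly
      ImageRecipeArn: !Ref StigWindowsRecipe                # assumed recipe resource
      InfrastructureConfigurationArn: !Ref BuildInfraConfig # assumed infrastructure configuration
      Schedule:
        # Build at 00:00 UTC on the 1st of each month
        ScheduleExpression: cron(0 0 1 * ? *)
        PipelineExecutionStartCondition: EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE
```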

Distribute image

Image distribution is configured in the Additional Settings section of the wizard which also provides options to associate license configuration using AWS License Manager and assign a name and tags to the output AMI.

To distribute the image to another AWS Region, choose the target Region from the drop-down menu. The current Region is included by default. In addition, you can add AWS user account numbers to set launch permission for the output AMI. Launch permissions allow specific AWS user account(s) to launch the image in the current Region as well as other Regions.

Figure 10: Image distribution settings

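The distribution settings shown above can similarly be sketched as a CloudFormation distribution configuration; the Regions, account ID, and names below are placeholders.

```yaml
  GoldenAmiDistribution:
    Type: AWS::ImageBuilder::DistributionConfiguration
    Properties:
      Name: windows-stig-distribution
      Distributions:
        - Region: us-east-1              # current Region
          AmiDistributionConfiguration:
            Name: 'golden-windows-{{ imagebuilder:buildDate }}'
            LaunchPermissionConfiguration:
              UserIds:
                - '111122223333'         # placeholder account allowed to launch the AMI
        - Region: eu-west-1              # additional target Region
          AmiDistributionConfiguration:
            Name: 'golden-windows-{{ imagebuilder:buildDate }}'
```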

Optionally, you can leverage AWS License Manager to track the use of licenses and assist in preventing a licensing breach. You can do so by associating the license configuration with the image. License configuration can be defined in License Manager. Finally, define the output AMI details.

Select Review to review the pipeline configuration, and then select Create Pipeline to create the pipeline.

Figure 11: Review pipeline configuration


Once the pipeline is created you can create the golden image instantly by selecting Run Pipeline under Actions. You could also configure a schedule to create a new golden image at regular time intervals. Scheduling the pipeline allows Image Builder to automate your ongoing image maintenance process, for example, monthly patch updates.

Optionally, you can select an Amazon Simple Notification Service (Amazon SNS) topic in the configuration section of the pipeline. This way you get automated notification alerts on the progress of your build pipeline. These notifications enable you to build further automation into your operations. For example, you could trigger automatic redeployment of an application using the most recent golden image.

Figure 12: Run pipeline



In this post, we showed you how to build a custom Windows Server golden image that follows STIG compliance guidelines using Amazon EC2 Image Builder. We used the EC2 Image Builder console to define the source OS image, define software and STIG components, configure test cases, create and schedule an image build pipeline, and distribute the image to other AWS accounts and Regions. Alternatively, you can leverage AMIs published by AWS or APN partners to help meet your STIG compliance standards. More details on AWS-published AMIs can be found at this link.

Image Builder is a powerful tool that is offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images. It automates tasks associated with the creation and maintenance of security hardened server images. In addition, it offers pre-configured components for Windows and Linux that customers can leverage to meet STIG compliance requirements.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Sepehr Samiei

Sepehr is a Senior Microsoft Tech Specialized Solutions Architect at AWS. He started his professional career as a .Net developer, which continued for more than 10 years. Early on, he quickly became a fan of cloud computing and he loves to help customers use the power of Microsoft tech on AWS. His wife and daughter are the most precious parts of his life.


Garry Singh

Garry Singh is a solutions architect at AWS. He provides guidance and technical assistance to help customers achieve the best outcomes for Microsoft workloads on AWS.

[$] Building a Flutter application (part 1)

Post Syndicated from coogle original https://lwn.net/Articles/828475/rss

In this two-part series, we will be implementing a simple RSS reader for LWN
using the UI toolkit Flutter. The project announced
version 1.20 of the toolkit on August 5. Flutter is a BSD-licensed UI development platform written in
Dart that is backed by Canonical as a new way to
develop desktop applications targeting Linux. Part one will cover some of the
basics of the project and Flutter, with part two building on that work to
focus on building a full interactive UI for the application.

ICYMI: Season one of Sessions with SAM

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/icymi-season-one-of-sessions-with-sam/

Developers tell us they want to know how to easily build and manage their serverless applications. In 2017 AWS announced AWS Serverless Application Model (SAM) to help with just that. To help developers learn more about SAM, I created a weekly Twitch series called Sessions with SAM. Each session focuses on a specific serverless task or service. It demonstrates deploying and managing that task using infrastructure as code (IaC) with SAM templates. This post recaps each session of the first season to prepare you for Sessions with SAM season two, starting August 13.

Sessions with SAM


What is SAM

AWS SAM is an open source framework designed for building serverless applications. The framework provides shorthand syntax to quickly declare AWS Lambda functions, Amazon DynamoDB tables, and more. Additionally, SAM is not limited to serverless resources: it can also declare any standard AWS CloudFormation resource. With around 20 lines of code, a developer can create an application with an API, logic, and database layer, with the proper permissions in place.

Example of using SAM templates to generate infrastructure

20 Lines of code

By using infrastructure as code to manage and deploy serverless applications, developers gain several advantages. You can version the templates and roll back when necessary. Templates can be parameterized for flexibility across multiple environments, and they can be shared with development teams for consistency across developer environments.
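As a concrete illustration of the roughly 20 lines mentioned above, here is a minimal, hypothetical SAM template with an HTTP API route, a Lambda function, and a DynamoDB table, with permissions scoped by a SAM policy template; the resource names, handler path, and runtime are assumptions.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  ItemsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs12.x
      Handler: app.handler            # assumed handler in src/app.js
      CodeUri: src/
      Policies:
        - DynamoDBCrudPolicy:         # SAM policy template scoping access to the table
            TableName: !Ref ItemsTable
      Events:
        GetItems:
          Type: HttpApi               # implicitly creates an HTTP API endpoint
          Properties:
            Path: /items
            Method: GET
  ItemsTable:
    Type: AWS::Serverless::SimpleTable
```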


The code and linked videos are listed with the session. See the YouTube playlist and GitHub repository for the entire season.

Session one: JWT authorizers on Amazon API Gateway

In this session, I cover building an application backend using JWT authorizers with the new Amazon API Gateway HTTP API. We also discussed building an application with multiple routes and the ability to change the authorization requirements per route.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/http-api

Video: https://youtu.be/klOScYEojzY
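A sketch of the kind of HTTP API JWT authorizer configuration covered in this session; the issuer and audience values are placeholders for your identity provider's settings.

```yaml
  AppApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      Auth:
        DefaultAuthorizer: OAuthAuthorizer
        Authorizers:
          OAuthAuthorizer:
            IdentitySource: $request.header.Authorization
            JwtConfiguration:
              issuer: https://auth.example.com   # placeholder token issuer
              audience:
                - my-api-client-id               # placeholder audience
```

Individual routes can then override the default by setting a different authorizer (or none) on each function's event source, which is how per-route authorization requirements are changed.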

Session two: Amazon Cognito authentication

In this session, I cover building an Amazon Cognito template for authentication. This includes the user management component with user pools and user groups in addition to a hosted authentication workflow with an app client.

Building an Amazon Cognito authentication provider


We also discussed using custom pre-token Lambda functions to modify the JWT token issued by Amazon Cognito. This custom token allows you to insert custom scopes based on the Amazon Cognito user groups. These custom scopes are then used to customize the authorization requirements for the individual routes.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/cognito

Video: https://youtu.be/nBtWCjKd72M

Session three: Building a translation app with Amazon EventBridge

I covered using AWS SAM to build a basic translation and sentiment app centered around Amazon EventBridge. The SAM template created three Lambda functions, a custom EventBridge bus, and an HTTP API endpoint.

Architecture for serverless translation application


Requests from the HTTP API endpoint are put onto the custom EventBridge bus by the endpoint Lambda function. Based on the type of request, either the translate function or the sentiment function is invoked. The AWS SAM template manages all the infrastructure, in addition to the permissions to invoke the Lambda functions and access Amazon Translate and Amazon Comprehend.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/eventbridge

Video: https://youtu.be/73R02KufLac
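The event routing described in this session might be declared in SAM roughly as follows; the bus name, event pattern, and function details are illustrative.

```yaml
  AppBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: translation-app-bus

  TranslateFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.8
      Handler: app.handler            # assumed handler
      CodeUri: translate/
      Events:
        TranslateRequested:
          Type: EventBridgeRule       # SAM creates the rule and invoke permission
          Properties:
            EventBusName: !Ref AppBus
            Pattern:
              detail-type:            # invoke only for "translate" requests
                - translate
```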

Session four: Building an Amazon Kinesis Data Firehose for ingesting website access logs

In this session, I covered building an Amazon Kinesis Data Firehose delivery stream for ingesting large amounts of data. This particular application is designed for access logs generated from API Gateway. The logs are first stored in an Amazon DynamoDB database for immediate processing. Next, the logs are sent through a Kinesis Data Firehose delivery stream and stored in an Amazon S3 bucket for later processing.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/kinesis-firehose

Video: https://youtu.be/jdTBtaxs0hA

Session five: Analyzing API Gateway logs with Amazon Kinesis Data Analytics

Continuing from session four, I discussed configuring API Gateway access logs to use the Kinesis Data Firehose delivery stream built in the previous session. I also demonstrated an Amazon Kinesis Data Analytics application for near-real-time analytics of your access logs.

Example of Kinesis Data Analytics in SAM


Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/kinesis-firehose

Video: https://youtu.be/ce0v-q9EVTQ

Session six: Managing Amazon SQS with AWS SAM templates

I demonstrated configuring an Amazon Simple Queue Service (SQS) queue and the queue policy to control access to the queue. We also discuss allowing cross-account and external resources to access the queue. I show how to identify the proper principal resources for building the proper AWS IAM policy templates.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/SQS

Video: https://youtu.be/q2rbHMyJBDY
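A minimal sketch of a queue and queue policy of the kind discussed above; the cross-account principal is a placeholder account ID.

```yaml
  AppQueue:
    Type: AWS::SQS::Queue

  AppQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref AppQueue
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: arn:aws:iam::111122223333:root   # placeholder cross-account principal
            Action:
              - sqs:SendMessage                     # allow the external account to enqueue only
            Resource: !GetAtt AppQueue.Arn
```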

Session seven: Creating canary deploys for Lambda functions

In this session, I cover canary and linear deployments for serverless applications. We discuss how canary releases compare to linear releases and how they can be customized. We also spend time discussing pre-traffic and post-traffic tests and how rollbacks are handled when one of these tests fails.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/safe-deploy

Video: https://youtu.be/RE4r_6edaXc
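The canary configuration discussed in this session can be sketched in SAM with a deployment preference on the function; the handler and pre-traffic hook function are assumptions.

```yaml
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs12.x
      Handler: app.handler                # assumed handler
      CodeUri: src/
      AutoPublishAlias: live              # required for gradual deployments
      DeploymentPreference:
        Type: Canary10Percent5Minutes     # shift 10% of traffic, then the rest after 5 minutes
        Hooks:
          PreTraffic: !Ref PreTrafficTestFunction   # assumed pre-traffic test function
```

If a pre-traffic or post-traffic test fails, or a configured alarm fires, CodeDeploy rolls traffic back to the previous version.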

Session eight: Configuring custom domains for Amazon API Gateway endpoints

In session eight, I configured custom domains for API Gateway REST and HTTP APIs. The demonstration included the option to pass in an Amazon Route 53 zone ID or AWS Certificate Manager (ACM) certificate ARN. If either of these is missing, the template builds a zone or SSL certificate, respectively.

Working with Amazon Route 53 zones

Working with Amazon Route 53 zones

We discussed how to use declarative and imperative methods in our templates. We also discussed how to use a single domain across multiple APIs, regardless of whether they are REST or HTTP APIs.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/custom-domains

Video: https://youtu.be/4uXEGNKU5NI

Session nine: Managing AWS Step Functions with AWS SAM

In this session I was joined by fellow Senior Developer Advocate, Rob Sutter. Rob and I demonstrated managing and deploying AWS Step Functions using the new Step Functions support built into SAM. We discussed how SAM offers definition substitutions to pass data from the template into the state machine configuration.

Code: https://github.com/aws-samples/sessions-with-aws-sam/tree/master/step-functions

Video: https://youtu.be/BguUgdZwymQ
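A sketch of the definition substitution mechanism described above; the definition file path and function resource are assumptions.

```yaml
  OrderStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: statemachine/order.asl.json  # assumed definition file
      DefinitionSubstitutions:
        # Available as ${ProcessFunctionArn} inside the state machine definition
        ProcessFunctionArn: !GetAtt ProcessFunction.Arn
      Policies:
        - LambdaInvokePolicy:
            FunctionName: !Ref ProcessFunction    # assumed function resource
```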

Session ten: Using Amazon EFS with Lambda functions in SAM

Joined by Senior Developer Advocate, James Beswick, we covered configuring Amazon Elastic File System (EFS) as a storage option for Lambda functions using AWS SAM. We discussed the Amazon VPC requirements in configuring for EFS. James also walked through using the AWS Command Line Interface (CLI) to aid in configuration of the VPC.

Code: https://github.com/aws-samples/aws-lambda-efs-samples

Video: https://youtu.be/up1op216trk
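The EFS and VPC configuration covered in this session might look like the sketch below; the security group, subnet, and access point identifiers are placeholders.

```yaml
  EfsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.8
      Handler: app.handler                  # assumed handler
      CodeUri: src/
      VpcConfig:                            # EFS access requires the function to run inside the VPC
        SecurityGroupIds:
          - sg-0123456789abcdef0            # placeholder security group
        SubnetIds:
          - subnet-0123456789abcdef0        # placeholder subnet
      FileSystemConfigs:
        - Arn: !GetAtt FsAccessPoint.Arn    # assumed EFS access point resource
          LocalMountPath: /mnt/data         # mount path must begin with /mnt
```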

Session eleven: Ask the experts

This session introduced you to some of our SAM experts. Jeff Griffiths, Senior Product Manager, and Alex Woods, Software Development Engineer, joined me in answering live audience questions. We discussed best practices for local development and debugging, Docker networking, CORS configurations, roadmap features, and more.

SAM experts panel


Video: https://youtu.be/2JRa8MugPCY

Session twelve: Managing .Net Lambda function in AWS SAM and Stackery

In this final session of the season, I was joined by Stackery CTO and serverless hero, Chase Douglas. Chase demonstrated using Stackery and AWS SAM to build and deploy .Net Core Lambda functions. We discuss how Stackery’s editor allows developers to visually design a serverless application and how it uses SAM templates under the hood.

Stackery visual editor


Code only examples

In addition to code examples with each video session, the repo includes developer-requested code examples. In this section, I demonstrate how to build an access log pipeline for HTTP API or use the SAM build command to compile Swift for Lambda functions.


Sessions with SAM helps developers bootstrap their serverless applications with instructional video and ready-made IaC templates. From JWT authorizers to EFS storage solutions, over 15 AWS services are represented in SAM templates. The first season of live videos supplements these templates with best practices explained and real developer questions answered.

Season two of Sessions with SAM starts August 13. The series will continue the pattern of explaining best practices, providing usable starter templates, and having some fun along the way.



When the Software Talent is in Africa, and the Jobs Are Everywhere Else

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/when-the-software-talent-is-in-africa-and-the-jobs-are-everywhere-else

Six years ago, Jeremy Johnson visited Nairobi to speak at an education summit. Fresh off taking his startup, 2U, public, he saw a lot of smart young people but not a lot of opportunity. With his background building 2U, he thought that was a problem he could do at least a little something about.

Along with five other entrepreneurs, Johnson started Andela. The company’s mission: to use digital education tools to train software engineers, and then place those engineers in jobs they could perform remotely for companies around the world.

The response from would-be software engineers—initially in Lagos, then expanding to six countries in Africa and recently announcing a continent-wide push—was overwhelming.

“We did a pilot in Lagos in 2014,” Johnson recalled, “by putting out a call for applicants… We were looking to select four people; we had 750 apply and hired six.”

Andela put the group through six months of training before placing them in jobs with tech companies from outside Africa.

For the second pilot program, with 20 slots to fill, Andela brought in an independent testing service to conduct aptitude tests to whittle down the pool of 2400 applicants. “The testing service called us, and asked us if we were aware that 42 applicants tested in the top two percent for cognitive ability of anybody in the world.”

Johnson and his partners realized that they had identified a population of really smart people who wanted to be software engineers. Initially, they aimed to provide free job training for the inexperienced and then, once trainees were ready, hire them to work on projects that Andela took on for companies around the world. After five years and more than 100,000 aspiring engineers trained in Africa, Andela realized that access to job training was no longer the prime hurdle for aspiring engineers, and dropped the training part of its operation to focus on bringing jobs to already experienced software engineers.

Right now, Andela has more than 1000 developers on its staff, spread throughout six countries in Africa and working for several hundred companies, with 2019 revenues of around $50 million. It’s not a nonprofit—the individuals and firms who have invested $181 million to date expect a financial return.

Navigating the pandemic in the short run has been challenging, says Johnson, though in the long run the sudden and massive switch of tech employers to remote work is likely to be a boon to engineers in Africa.

Here’s what else Johnson has to say about Andela’s operation, the impact of the coronavirus, and prospects for the future.

How Andela manages its remote workforce:

“We generally bring someone on for a specific job for a specific company; they are paid through Andela.  Basically, we make global hiring local [by handling all the logistics].

“To set salaries, we look at local markets. We try to be on the generous side of fair in regards to local market, so we can attract the best talent. However, we don’t want to break local markets; we don’t want people leaving medical and legal professions to become software developers. That said, the average engineer coming in gets a 30 percent pay bump from their previous role.

[Companies generally contract with Andela for a fixed number of engineers for a fixed amount of time. But Andela tries to keep its staff on board.]

“Once we know someone is going to be rolling off, we start looking for their next job.”  

On the impact of the pandemic:

“This has not been a simple year. We saw the storm start to build in February, across the board.  We got really worried in March, because we had so many small business clients. And indeed, in March and April, we saw a significant slowdown in new relationships, in companies being able to make hiring decisions.

[In May, Andela cut 135 employees, mostly operations and back office; no engineers were affected. Senior management also took a pay cut.]

“But we maintained more than 90 percent of our relationships with companies; things there went much better than expected. And in June, we saw a turnaround on both sides, with things becoming much smoother.”

On the future of remote work:

“The pandemic and move to remote work increases the obviousness of what we are doing. We are going to see over time a significant shift to building out remote teams as a default strategy, a portion or the entire strategy, accelerating a trend that has been developing for years.

“We are seeing a move to grow the remote workforce in new customers and existing companies. We have also seen a lot of our partner companies who haven’t announced permanent remote work, tell us that they don’t have a timeline for bringing people back into the office, and remote work may be permanent.

“Long term, this is going to be a significant tailwind; it puts everyone on a level playing field. Businesses start working with us because [of the cost savings compared with hiring local talent]. And that’s fine. But from our point of view, we also want to leave the world better than we found it. That happens when a CTO wakes up six months after they start working with us, and realizes that the best engineer on the team is a young woman from Nairobi. It’s fun getting that ‘I love your mission’ call and knowing that this just happened.”

On Africa’s brain drain:

 “If you want to keep people you need to create opportunities for them. It’s not complex. I think of us as being a driver of enabling people to stay in country and build a local ecosystem—to have a local ecosystem you need opportunities that allow people to stay. Giving engineers an opportunity to work with the best engineering teams in the world, while staying in their country, allows them to bring knowledge home; that also contributes to building their own ecosystem, and ultimately creating even more opportunity there.”

China Launches Beidou, Its Own Version of GPS

Post Syndicated from Andrew Jones original https://spectrum.ieee.org/tech-talk/aerospace/satellites/final-piece-of-chinas-beidou-navigation-satellite-system-comes-online

The final satellite needed to complete China’s own navigation and positioning satellite system has passed final on-orbit tests. The completed independent system provides military and commercial value while also facilitating new technologies and services.

The Beidou satellite was launched on a Long March 3B rocket from the Xichang Satellite Launch Center in a hilly region of Sichuan province at 01:43 UTC on Tuesday, 23 June. The satellite was sent into a geosynchronous transfer orbit before entering an orbital slot approximately 35,786 kilometers in altitude, which keeps it at a fixed point above the Earth.

Like GPS, the main, initial motivation for Beidou was military. The People’s Liberation Army did not want to be dependent on GPS for accurate positioning data of military units and weapons guidance, as the U.S. Air Force could switch off open GPS signals in the event of conflict. 

As with GPS, Beidou also provides and facilitates a range of civilian and commercial services and activities, with an output value of $48.5 billion in 2019. 

Twenty-four satellites in medium Earth orbits (at around 21,500 kilometers above the Earth) provide positioning, navigation, and timing (PNT) services. The satellites use rubidium and hydrogen atomic clocks for highly accurate timing that allows precise measurement of speed and location.

Additionally, thanks to a number of satellites in geosynchronous orbits, Beidou provides a short messaging service through which 120-character messages can be sent to other Beidou receivers. Beidou also aids international search and rescue services: vessels at sea can seek help from nearby ships in an emergency even without a cellphone signal.

The Beidou satellite network is also testing inter-satellite links, removing reliance on ground stations for communications across the system.

Beidou joins the United States’ GPS and Russia’s GLONASS in providing global PNT services, with Europe’s Galileo soon to follow. These are all compatible and interoperable, meaning users can draw services from all of these to improve accuracy.

“The BeiDou-3 constellation transmits a civil signal that was designed to be interoperable with civil signals broadcast by Galileo, GPS III, and a future version of GLONASS. This means that civil users around the world will eventually be getting the same signal from more than 100 satellites across all these different constellations, greatly increasing availability, accuracy, and resilience,” says Brian Weeden, Director of Program Planning for Secure World Foundation.

“This common signal is the result of international negotiations that have been going on since the mid-2000s within the International Committee of GNSS (ICG).”

The rollout of Beidou has taken two decades. The first Beidou satellites were launched in 2000, providing coverage to China. Second generation Beidou-2 satellites provided coverage for the Asia-Pacific region starting in 2012. Deployment of Beidou-3 satellites began in 2015, with Tuesday’s launch being the 30th such satellite. 

But this is far from the end of the line. China wants to establish a ‘ubiquitous, integrated and intelligent and comprehensive’ national PNT system, with Beidou as its core, by 2035, according to a white paper.

Chinese aerospace firms are also planning satellite constellations in low Earth orbit to augment the Beidou signal, improving accuracy while facilitating high-speed data transmission. Geely, an automotive giant, is now also planning its own constellation to improve accuracy for autonomous driving.

Although the space segment is complete, China still has work to do on the ground to make full use of Beidou, according to Weeden.

“It’s not just enough to launch the satellites; you also have to roll out the ground terminals and get them integrated into everything you want to make use of the system. Doing so is often much harder and takes much longer than putting up the satellites. 

“So, for the Chinese military to make use of the military signals offered by BeiDou-3, they need to install compatible receivers into every plane, tank, ship, bomb, and backpack. That will take a lot of time and effort,” Weeden states.

With the rollout of Beidou satellites complete, inhabitants downrange of Xichang will be spared any further disruption and possible harm. Long March 3B launches of Beidou satellites frequently see spent rocket stages fall near or on inhabited areas. Eighteen such launches have been carried out since 2018.

The areas calculated to be under threat from falling boosters were evacuated ahead of time for safety. Warnings about residual toxic hypergolic propellant were also issued. But close calls and damage to property were all too common.
