Tag Archives: C

Modernizing and containerizing a legacy MVC .NET application with Entity Framework to .NET Core with Entity Framework Core: Part 2

Post Syndicated from Pratip Bagchi original https://aws.amazon.com/blogs/devops/modernizing-and-containerizing-a-legacy-mvc-net-application-with-entity-framework-to-net-core-with-entity-framework-core-part-2/

This is the second post in a two-part series in which you migrate and containerize a modernized enterprise application. In Part 1, we walked you through a step-by-step approach to re-architect a legacy ASP.NET MVC application and port it to the .NET Core framework. In this post, you deploy the re-architected application to Amazon Elastic Container Service (Amazon ECS) and run it as a task with AWS Fargate.

Overview of solution

In the first post, you ported the legacy ASP.NET MVC application to ASP.NET Core. In this post, you package the same application as a Docker container and host it in an Amazon ECS cluster.

The following diagram illustrates this architecture.

Architecture Diagram

 

You first launch an Amazon RDS for SQL Server Express instance (1) and create a Cycle Store database on that instance, with tables for different categories and subcategories of bikes. You use the re-architected and modernized ASP.NET Core application from Part 1 as the starting point for this post; the application uses AWS Secrets Manager (2) to fetch the database credentials it needs to access the Amazon RDS instance. Next, you build a Docker image of the application and push it to Amazon Elastic Container Registry (Amazon ECR) (3). Finally, you create an ECS cluster (4) to run the Docker image as an AWS Fargate task.

Prerequisites

For this walkthrough, you should have the following prerequisites:

This post implements the solution in Region us-east-1.

Source Code

Clone the source code from the GitHub repo. The source code folder contains the re-architected source code, the AWS CloudFormation template to launch the infrastructure, and the Amazon ECS task definition.

Setting up the database server

To make sure that your database works out of the box, you use a CloudFormation template to create an instance of Microsoft SQL Server Express and AWS Secrets Manager secrets to store database credentials, security groups, and IAM roles to access Amazon Relational Database Service (Amazon RDS) and Secrets Manager. This stack takes approximately 15 minutes to complete, with most of that time spent provisioning the services.

  1. On the AWS CloudFormation console, choose Create stack.
  2. For Prepare template, select Template is ready.
  3. For Template source, select Upload a template file.
  4. Upload SqlServerRDSFixedUidPwd.yaml, which is available in the GitHub repo.
  5. Choose Next.
    Create AWS CloudFormation stack
  6. For Stack name, enter SQLRDSEXStack.
  7. Choose Next.
  8. Keep the rest of the options at their default.
  9. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  10. Choose Create stack.
    Add IAM Capabilities
  11. When the status shows as CREATE_COMPLETE, choose the Outputs tab.
  12. Record the value for the SQLDatabaseEndpoint key.
    CloudFormation output
  13. Connect to the database from SQL Server Management Studio with the following credentials: User id: DBUser, Password: DBU$er2020

Setting up the CYCLE_STORE database

To set up your database, complete the following steps:

  1. In SQL Server Management Studio, connect to the DB instance using the ID and password you defined earlier.
  2. Under File, choose New.
  3. Choose Query with Current Connection. Alternatively, choose New Query from the toolbar.
    Run database restore script
  4. Open CYCLE_STORE_Schema_data.sql from the GitHub repository and run it.

This creates the CYCLE_STORE database with all the tables and data you need.

Setting up the ASP.NET MVC Core application

To set up your ASP.NET application, complete the following steps:

  1. Open the re-architected application code that you cloned from the GitHub repo. The Dockerfile added to the solution enables Docker support.
  2. Open the appsettings.Development.json file and, in the ConnectionStrings section, replace the RDS endpoint with the SQLDatabaseEndpoint value from the AWS CloudFormation stack output, omitting the port number (:1433 for SQL Server).
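For reference, the ConnectionStrings section should look similar to the following illustrative snippet; the server value below is a placeholder, so substitute your own SQLDatabaseEndpoint value:

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=<your-SQLDatabaseEndpoint>; Database=CYCLE_STORE;User Id=DBUser;Password=DBU$er2020;"
  }
}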

The ASP.NET application should now load with bike categories and subcategories. See the following screenshot.

Final run

Setting up Amazon ECR

To set up your repository in Amazon ECR, complete the following steps:

  1. On the Amazon ECR console, choose Repositories.
  2. Choose Create repository.
  3. For Repository name, enter coretoecsrepo.
  4. Choose Create repository.
  5. Copy the repository URI to use later.
  6. Select the repository you just created and choose View push commands.
  7. In the folder where you cloned the repo, navigate to the AdventureWorksMVCCore.Web folder.
  8. In the View push commands popup window, complete steps 1–4 to push your Docker image to Amazon ECR.

The following screenshot shows the completion of Steps 1 and 2. Make sure your working directory is set to AdventureWorksMVCCore.Web, as shown below.

Login
The following screenshot shows completion of Steps 3 and 4.

Amazon ECR Push
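For reference, the push commands that the console displays typically look like the following; the account ID below is a placeholder, so use the exact commands shown for your repository and Region:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t coretoecsrepo .
docker tag coretoecsrepo:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/coretoecsrepo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/coretoecsrepo:latest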

Setting up Amazon ECS

To set up your ECS cluster, complete the following steps:

    1. On the Amazon ECS console, choose Clusters.
    2. Choose Create cluster.
    3. Choose the Networking only cluster template.
    4. Name your cluster cycle-store-cluster.
    5. Leave everything else as its default.
    6. Choose Create cluster.
    7. Select your cluster.
    8. Choose Task Definitions and choose Create new Task Definition.
    9. On the Select launch type compatibility page, choose FARGATE and choose Next step.
    10. On the Configure task and container definitions page, scroll to the bottom of the page and choose Configure via JSON.
    11. In the text area, enter the task definition JSON (task-definition.json) provided in the GitHub repo. Make sure to replace [YOUR-AWS-ACCOUNT-NO] in task-definition.json with your AWS account number on lines 44, 68, and 71. The task definition file assumes that you named your repository coretoecsrepo; if you named it something else, modify the file accordingly. It also assumes that you are using us-east-1 as your default Region, so replace the Region on lines 15 and 44 of task-definition.json if you are using a different Region.
    12. Choose Save.
    13. On the Task Definitions page, select cycle-store-td.
    14. From the Actions drop-down menu, choose Run Task.
    15. For Launch type, choose FARGATE.
    16. Choose your default VPC as Cluster VPC.
    17. Select at least one Subnet.
    18. Choose Edit Security Groups and select ECSSecurityGroup (created by the AWS CloudFormation stack).
    19. Choose Run Task.

Running your application

Choose the link under the task and find the public IP. When you navigate to the URL http://your-public-ip, you should see the .NET Core Cycle Store web application user interface running in Amazon ECS.

See the following screenshot.

Final run

Cleaning up

To avoid incurring future charges, delete the stacks you created for this post.

  1. On the AWS CloudFormation console, choose Stacks.
  2. Select SQLRDSEXStack.
  3. In the Stack details pane, choose Delete.

Conclusion

This post concludes your journey of modernizing a legacy enterprise ASP.NET MVC web application with .NET Core and containerizing it on Amazon ECS using the AWS Fargate compute engine with Linux containers. Porting to .NET Core lets you run enterprise workloads without any dependency on a Windows environment, and AWS Fargate gives you a way to run containers directly, without managing any EC2 instances, while still giving you full control over your application.

About the Author

Saleha Haider is a Senior Partner Solution Architect with Amazon Web Services.
Pratip Bagchi is a Partner Solutions Architect with Amazon Web Services.

 

Automated CI/CD pipeline for .NET Core Lambda functions using AWS extensions for dotnet CLI

Post Syndicated from Sundar Narasiman original https://aws.amazon.com/blogs/devops/automated-ci-cd-pipeline-for-net-core-lambda-functions-using-aws-extensions-for-dotnet-cli/

The trend of building AWS Serverless applications using AWS Lambda is increasing at an ever-rapid pace. Common use cases for AWS Lambda include data processing, real-time file processing, extract, transform, and load (ETL), web backends, Internet of Things (IoT) backends, and mobile backends. Lambda natively supports languages such as Java, Go, PowerShell, Node.js, C#, Python, and Ruby. It also provides a Runtime API that allows you to use any additional programming languages to author your functions.

The .NET framework occupies a significant footprint in the technology landscape of enterprises. Nowadays, enterprise customers are modernizing .NET framework applications to .NET Core using AWS Serverless (Lambda). In this journey, you break down a large monolithic service into multiple smaller, independent, and autonomous microservices using .NET Core Lambda functions.

When you have several microservices running in production, a change management strategy is key for business agility and time to market. The change management of .NET Core Lambda functions translates to how well you implement an automated CI/CD pipeline using AWS CodePipeline. In this post, you see two approaches for implementing CI/CD for .NET Core Lambda functions: creating a pipeline with either two or three stages.

Creating a pipeline with two stages

In this approach, you define the pipeline in CodePipeline with two stages: AWS CodeCommit and AWS CodeBuild. CodeCommit is the fully managed source control repository that stores the source code for the .NET Core Lambda functions. It triggers CodeBuild when a new code change is published. CodeBuild defines a compute environment for the build process. It builds the .NET Core Lambda function and creates a deployment package (.zip). Finally, CodeBuild uses the AWS extensions for the dotnet CLI to deploy the Lambda package (.zip) to the Lambda environment. The following diagram illustrates this architecture.

 

CodePipeline with CodeBuild and CodeCommit stages.


Creating a pipeline with three stages

In this approach, you define the pipeline with three stages: CodeCommit, CodeBuild, and AWS CodeDeploy.

CodeCommit stores the source code for .NET Core Lambda functions and triggers CodeBuild when a new code change is published. CodeBuild defines a compute environment for the build process and builds the .NET Core Lambda function. Then CodeBuild invokes the CodeDeploy stage. CodeDeploy uses AWS CloudFormation templates to deploy the Lambda function to the Lambda environment. The following diagram illustrates this architecture.

CodePipeline with CodeCommit, CodeBuild and CodeDeploy stages.


Solution Overview

In this post, you learn how to implement an automated CI/CD pipeline using the first approach: CodePipeline with CodeCommit and CodeBuild stages. The CodeBuild stage in this approach implements the build and deploy functionalities. The high-level steps are as follows:

  1. Create the CodeCommit repository.
  2. Create a Lambda execution role.
  3. Create a Lambda project with .NET Core CLI.
  4. Change the Lambda project configuration.
  5. Create a buildspec file.
  6. Commit changes to the CodeCommit repository.
  7. Create your CI/CD pipeline.
  8. Complete and verify pipeline creation.

For the source code and buildspec file, see the GitHub repo.

Prerequisites

Before you get started, you need the following prerequisites:

Creating a CodeCommit repository

You first need a CodeCommit repository to store the Lambda project source code.

1. In the Repository settings section, for Repository name, enter a name for your repository.

2. Choose Create.

Name a repository

3. Initialize this repository with a markdown file (readme.md). You need this markdown file to create documentation about the repository.

4. Set up an AWS Identity and Access Management (IAM) credential to CodeCommit. Alternatively, you can set up SSH-based access. For instructions, see Setup for HTTPS users using Git credentials and Setup steps for SSH connections to AWS CodeCommit repositories on Linux, MacOS, or Unix. You need this to work with the CodeCommit repository from the development environment.

5. Clone the CodeCommit repository to a local folder.

Proceed to the next step to create an IAM role for Lambda execution.

Creating a Lambda execution role

Every Lambda function needs an IAM role for execution. Create an IAM role for Lambda execution with the appropriate IAM policy, if it doesn’t exist already. You’re now ready to create a Lambda function project using .NET Core Command Line Interface (CLI).
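Before moving on, if you still need to create this role, the following is a minimal example of the trust policy that allows Lambda to assume the role; you would also attach a permissions policy such as the AWS managed AWSLambdaBasicExecutionRole policy so the function can write logs to Amazon CloudWatch Logs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}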

Creating a Lambda function project

You have multiple options for creating .NET Core Lambda function projects, such as using Visual Studio 2019, Visual Studio Code, and .NET Core CLI. In this post, you use .NET Core CLI.

By default, .NET Core CLI doesn’t support Lambda projects. You need the Amazon.Lambda.Templates nuget package to create your project.

  1. Install the nuget package Amazon.Lambda.Templates to have all the Amazon Lambda project templates in the development environment. See the following CLI Command.
    dotnet new -i Amazon.Lambda.Templates::*
  2. Verify the installation with the following CLI Command.
    dotnet new

    You should see the following output, reflecting the presence of various Lambda project templates in the development environment. You also need to install the AWS extensions for the dotnet CLI (Amazon.Lambda.Tools) to deploy and invoke Lambda functions from the terminal or command prompt.

  3. To install the extensions, enter the following CLI Commands.
    dotnet tool install -g Amazon.Lambda.Tools
    dotnet tool update -g Amazon.Lambda.Tools
    

    You’re now ready to create a Lambda function project in the development environment.

  4. Navigate to the root of the cloned CodeCommit repository (which you created in the previous step).
  5. Create the Lambda function by entering the following CLI Command.
    dotnet new lambda.EmptyFunction --name Dotnetlambda4 --profile default --region us-east-1

    After you create your Lambda function project, you need to make some configuration changes.
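The lambda.EmptyFunction template generates a handler class similar to the following sketch (the exact generated code can differ slightly between template versions); the function-handler value you see in the next section maps to this namespace, class, and method:

using Amazon.Lambda.Core;

// Registers a JSON serializer so the function's input and output can be converted to and from .NET types.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace Dotnetlambda4
{
    public class Function
    {
        // A simple example handler that takes a string and returns its upper-case version.
        public string FunctionHandler(string input, ILambdaContext context)
        {
            return input?.ToUpper();
        }
    }
}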

Changing the Lambda function project configuration

When you create a .NET Core Lambda function project, it adds the configuration file aws-lambda-tools-defaults.json at the root of the project directory. This file holds the various configuration parameters for Lambda execution. You want to make sure that the function role is set to the IAM role you created earlier, and that the profile is set to default.

The updated aws-lambda-tools-defaults.json file should look like the following code:

{
  "Information": [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",

    "dotnet lambda help",

    "All the command line options for the Lambda command can be specified in this file."
  ],

  "profile": "default",
  "region": "us-east-1",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "Dotnetlambda4::Dotnetlambda4.Function::FunctionHandler",
  "function-role": "arn:aws:iam::awsaccountnumber:role/testlambdarole"
}

After you update your project configuration, you’re ready to create the buildspec.yml file.

Creating a buildspec file

As a prerequisite to configuring the CodeCommit stage, you created a Lambda function project. For the CodeBuild stage, you need to create a buildspec file.

 

Create a buildspec.yml file with the following definition and save it at the root of the CodeCommit directory:

version: 0.2
env:
  variables:
    DOTNET_ROOT: /root/.dotnet
  secrets-manager:
    AWS_ACCESS_KEY_ID_PARAM: CodeBuild:AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: CodeBuild:AWS_SECRET_ACCESS_KEY
phases:
  install:
    runtime-versions:
      dotnet: 3.1
  pre_build:
    commands:
      - echo Restore started on `date`
      - export PATH="$PATH:/root/.dotnet/tools"
      - pip install --upgrade awscli
      - aws configure set profile $Profile
      - aws configure set region $Region
      - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID_PARAM
      - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_PARAM
      - cd Dotnetlambda4
      - cd src
      - cd Dotnetlambda4
      - dotnet clean 
      - dotnet restore
  build:
    commands:
      - echo Build started on `date`
      - dotnet new -i Amazon.Lambda.Templates::*
      - dotnet tool install -g Amazon.Lambda.Tools
      - dotnet tool update -g Amazon.Lambda.Tools
      - dotnet lambda deploy-function "Dotnetlambda4" --function-role "arn:aws:iam::yourawsaccount:role/youriamroleforlambda" --region "us-east-1"

You’re now ready to commit your changes to the CodeCommit repository.

Committing changes to the CodeCommit repository

To push changes to your CodeCommit repository, enter the following git commands.

git add --all
git commit -a -m "Initial Comment"
git push

After you commit the changes, you can create your CI/CD pipeline using CodePipeline.

Creating a CI/CD pipeline

To create your pipeline with a CodeCommit and CodeBuild stage, complete the following steps:

  1. In the Pipeline settings section, for Pipeline name, enter a name.
  2. For Service role, select New service role.
  3. For Role name, use the auto-generated name.
  4. Select Allow AWS CodePipeline to create a service role so it can be used with this new pipeline.
  5. Choose Next.
  6. In the Source section, for Source provider, choose AWS CodeCommit.
  7. For Repository name, choose your repository.
  8. For Branch name, choose your branch.
  9. For Change detection options, select Amazon CloudWatch Events.
  10. Choose Next.
  11. In the Build section, for Build provider, choose AWS CodeBuild.
  12. For Environment image, choose Managed image.
  13. For Operating system, choose Ubuntu.
  14. For Image, choose aws/codebuild/standard:4.0.
  15. For Image version, choose Always use the latest image for this runtime version.
  16. CodeBuild needs to assume an IAM service role to get the required privileges for a successful build operation. Create a new service role for the CodeBuild project.
  17. Attach the following IAM policy to the role:
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SecretManagerRead",
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:GetRandomPassword",
                    "secretsmanager:GetResourcePolicy",
                    "secretsmanager:UntagResource",
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:DescribeSecret",
                    "secretsmanager:ListSecretVersionIds",
                    "secretsmanager:ListSecrets",
                    "secretsmanager:TagResource"
                ],
                "Resource": "*"
            }
        ]
    }
    
  18. You now need to define the compute and environment variables for CodeBuild. For Compute, select your preferred compute.
  19. For Environment variables, enter two variables: for Region, enter your preferred Region; for Profile, enter the value default. This allows the environment to use the default AWS profile in the build process.
  20. To set up an AWS profile, the CodeBuild environment needs an AccessKeyId and SecretAccessKey. As a best practice, configure AccessKeyId and SecretAccessKey as secrets in AWS Secrets Manager and reference them in buildspec.yml. On the Secrets Manager console, choose Store a new secret.
  21. For Select secret type, select Other type of secrets.
  22. Configure secrets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  23. For the encryption key, choose DefaultEncryptionKey.
  24. Choose Next.
  25. For Secret name, enter CodeBuild.
  26. Leave the rest of the selections at their defaults and choose Store.
  27. In the Add deploy stage section, choose Skip deploy stage.

Completing and verifying your pipeline

After you save your pipeline, push the code changes of the Lambda function from the local repository to the remote CodeCommit repository.

After a few seconds, you should see the CodeCommit stage activate and transition to the CodeBuild stage. Pipeline creation can take up to a few minutes.

CodePipeline

You can verify your pipeline on the CodePipeline console. This should deploy the Lambda function changes to the Lambda environment.
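As an additional check, you can invoke the deployed function from your development environment using the AWS extensions for the dotnet CLI; the payload below is just an illustrative example:

dotnet lambda invoke-function Dotnetlambda4 --payload "hello world" --region us-east-1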

Cleaning up

If you no longer need the following resources, delete them to avoid incurring further charges:

  • CodeCommit repository
  • CodePipeline project
  • CodeBuild project
  • IAM role for Lambda execution
  • Lambda function

Conclusion

In this post, you implemented an automated CI/CD pipeline for .NET Core Lambda functions using two stages of CodePipeline: CodeCommit and CodeBuild. You can apply this solution to your own use cases.

About the author

Sundararajan Narasiman works as Senior Partner Solutions Architect with Amazon Web Services.

Modernizing and containerizing a legacy MVC .NET application with Entity Framework to .NET Core with Entity Framework Core: Part 1

Post Syndicated from Pratip Bagchi original https://aws.amazon.com/blogs/devops/modernizing-and-containerizing-a-legacy-mvc-net-application-with-entity-framework-to-net-core-with-entity-framework-core-part-1/

Tens of thousands of .NET applications are running across the world, many of which are ASP.NET web applications. This number becomes interesting when you consider that the .NET framework, as we know it, will be changing significantly. The current release schedule for .NET 5.0 is November 2020, and going forward there will be just one .NET that you can use to target multiple platforms like Windows and Linux. This is important because those .NET applications running in version 4.8 and lower can’t automatically upgrade to this new version of .NET. This is because .NET 5.0 is based on .NET Core and thus has breaking changes when trying to upgrade from an older version of .NET.

This is an important step in the .NET ecosystem because it enables .NET applications to move beyond the Windows world. However, it also means that active applications need to go through a refactoring before they can take advantage of this new definition. One choice for this refactoring is to wait until the new version of .NET is released and start the refactoring process at that time. The second choice is to get an early start and begin converting your applications to .NET Core 3.1 so that the migration to .NET 5.0 will be smoother. This post demonstrates an approach for migrating an ASP.NET MVC (Model View Controller) web application using Entity Framework 6 to ASP.NET Core with Entity Framework Core.

This post shows the steps to modernize a legacy enterprise ASP.NET MVC web application using .NET Core, along with converting Entity Framework to Entity Framework Core.

Overview of the solution

The first step is to get an ASP.NET MVC application and its required database server up and working in your AWS environment. We take this approach so you can run the application locally to see how it works. You first set up the database, which is SQL Server running in Amazon Relational Database Service (Amazon RDS). Amazon RDS provides a managed SQL Server experience. After you define the database, you set up the schema and data. If you already have your own SQL Server instance running, you can load the data there instead; you simply need to ensure your connection string points to that server rather than the Amazon RDS server you set up in this walkthrough.

Next, you launch a legacy ASP.NET MVC web application that displays lists of bike categories and their subcategories. This legacy application uses Entity Framework 6 to fetch data from the database.

Finally, you take a step-by-step approach to convert the same use case and create a new ASP.NET Core web application. Here you use Entity Framework Core to fetch data from the database. As a best practice, you also use AWS Secrets Manager to store database login information.

Migration blog overview

Prerequisites

For this walkthrough, you should have the following prerequisites:

Setting up the database server

For this walkthrough, we have provided an AWS CloudFormation template inside the GitHub repository to create an instance of Microsoft SQL Server Express, which can be downloaded from this link.

  1. On the AWS CloudFormation console, choose Create stack.
  2. For Prepare template, select Template is ready.
  3. For Template source, select Upload a template file.
  4. Upload SqlServerRDSFixedUidPwd.yaml and choose Next.
    Create AWS CloudFormation stack
  5. For Stack name, enter SQLRDSEXStack and choose Next.
  6. Keep the rest of the options at their default.
  7. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  8. Choose Create stack.
    Add IAM Capabilities
  9. When the status shows as CREATE_COMPLETE, choose the Outputs tab and record the value of the SQLDatabaseEndpoint key.
    AWS CloudFormation output
  10. Connect to the database from SQL Server Management Studio with the following credentials: User id: DBUser, Password: DBU$er2020

Setting up the CYCLE_STORE database

To set up your database, complete the following steps:

  1. In SQL Server Management Studio, connect to the DB instance using the ID and password you defined earlier.
  2. Under File, choose New.
  3. Choose Query with Current Connection.
    Alternatively, choose New Query from the toolbar.
    Run database restore script
  4. Download cycle_store_schema_data.sql and run it.

This creates the CYCLE_STORE database with all the tables and data you need.

Setting up and validating the legacy MVC application

  1. Download the source code from the GitHub repo.
  2. Open AdventureWorksMVC_2013.sln and modify the database connection string in the web.config file by replacing the Data Source property value with the server name from your Amazon RDS setup.

The ASP.NET application should load with bike categories and subcategories. The following screenshot shows the Unicorn Bike Rentals website after configuration.
Legacy code output

Now that you have the legacy application running locally, you can look at what it would take to refactor it so that it’s a .NET Core 3.1 application. Two main approaches are available for this:

  • Update in place – You make all the changes within a single code set
  • Move code – You create a new .NET Core solution and move the code over piece by piece

For this post, we show the second approach because it means that you have to do less scaffolding.

Creating a new MVC Core application

To create your new MVC Core application, complete the following steps:

  1. Open Visual Studio.
  2. From the Get Started page, choose Create a New Project.
  3. Choose ASP.NET Core Web Application.
  4. For Project name, enter AdventureWorksMVCCore.Web.
  5. Add a Location you prefer.
  6. For Solution name, enter AdventureWorksMVCCore.
  7. Choose Web Application (Model-View-Controller).
  8. Choose Create. Make sure that the project is set to use .NET Core 3.1.
  9. Choose Build, Build Solution.
  10. Press CTRL + Shift + B to make sure the current solution is building correctly.

You should get a default ASP.NET Core startup page.

Aligning the projects

ASP.NET Core MVC is dependent upon the use of a well-known folder structure; a lot of the scaffolding depends upon view source code files being in the Views folder, controller source code files being in the Controllers folder, etc. Some of the non-.NET specific folders are also at the same level, such as css and images. In .NET Core, the expectations are that static content should be in a new construct, the wwwroot folder. This includes Javascript, CSS, and image files. You also need to update the configuration file with the same database connection string that you used earlier.

Your first step is to move the static content.

  1. In the .NET Core solution, delete all the content created during solution creation. This includes the css, js, and lib directories and the favicon.ico file.
  2. Copy over the css, favicon, and Images folders from the legacy solution to the wwwroot folder of the new solution. When completed, your .NET Core wwwroot directory should appear like the following screenshot.
    Static content
  3. Open appsettings.Development.json and add a ConnectionStrings section (replace the server with the Amazon RDS endpoint that you have already been using). See the following code:
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=sqlrdsdb.xxxxx.us-east-1.rds.amazonaws.com; Database=CYCLE_STORE;User Id=DBUser;Password=DBU$er2020;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}

Setting up Entity Framework Core

One of the changes in .NET Core is around changes to Entity Framework. Entity Framework Core is a lightweight, extensible data access technology. It can act as an object-relational mapper (O/RM) that enables interactions with the database using .NET objects, thus abstracting out much of the database access code. To use Entity Framework Core, you first have to add the packages to your project.

  1.  In the Solution Explorer window, choose the project (right-click) and choose Manage Nuget packages…
  2. On the Browse tab, search for the latest stable version of these two Nuget packages. You should see a screen similar to the following screenshot, containing:
    • Microsoft.EntityFrameworkCore.SqlServer
    • Microsoft.EntityFrameworkCore.Tools
      Add Entity Framework Core nuget
      After you add the packages, the next step is to generate database models. This is part of the O/RM functionality; these models map to the database tables and include information about the fields, constraints, and other information necessary to make sure that the generated models match the database. Fortunately, there is an easy way to generate those models. Complete the following steps:
  3. Open Package Manager Console from Visual Studio.
  4. Enter the following code (replace the server endpoint):
    Scaffold-DbContext "Server= sqlrdsdb.xxxxxx.us-east-1.rds.amazonaws.com; Database=CYCLE_STORE;User Id= DBUser;Password= DBU`$er2020;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models
    The ` in the password right before the $ is the escape character.
    You should now have a Context and several Model classes from the database stored within the Models folder. See the following screenshot.
    Databse model scaffolding
  5. Open the CYCLE_STOREContext.cs file under the Models folder and comment the following lines of code as shown in the following screenshot. You instead take advantage of the middleware to read the connection string in appsettings.Development.json that you previously configured.
    if (!optionsBuilder.IsConfigured)
                {
    #warning To protect potentially sensitive information in your connection string, you should move it out of source code. See http://go.microsoft.com/fwlink/?LinkId=723263 for guidance on storing connection strings.
                    optionsBuilder.UseSqlServer(
                        "Server=sqlrdsdb.cph0bnedghnc.us-east-1.rds.amazonaws.com; " +
                        "Database=CYCLE_STORE;User Id= DBUser;Password= DBU$er2020;");
                }
  6. Open the startup.cs file and add the following line of code in the ConfigureServices method. You need to add references to AdventureWorksMVCCore.Web.Models and Microsoft.EntityFrameworkCore in the using statements. This reads the connection string from the appSettings file and integrates it with Entity Framework Core.
    services.AddDbContext<CYCLE_STOREContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

Setting up a service layer

Because you're working with an MVC application, the application control logic belongs in the controller. In the previous step, you created the data access layer. You now create the service layer. This service layer is responsible for mediating communication between the controller and the data access layer. Generally, this is where you put considerations such as business logic and validation.

Setting up the interfaces for these services follows the dependency inversion and interface segregation principles.

    1. Create a folder named Service under the project directory.
    2. Create two subfolders, Interface and Implementation, under the new Service folder.
      Service setup
    3. Add a new interface, ICategoryService, to the Service\Interface folder.
    4. Add the following code to that interface:
      using AdventureWorksMVCCore.Web.Models;
      using System.Collections.Generic;
       
      namespace AdventureWorksMVCCore.Web.Service.Interface
      {
          public interface ICategoryService
          {
              List<ProductCategory> GetCategoriesWithSubCategory();
          }
      }
    5. Add a new service file, CategoryService, to the Service/Implementation folder.
    6. Create a class file CategoryService.cs and implement the interface you just created with the following code:
      using AdventureWorksMVCCore.Web.Models;
      using AdventureWorksMVCCore.Web.Service.Interface;
      using Microsoft.EntityFrameworkCore;
      using System.Collections.Generic;
      using System.Linq;
       
      namespace AdventureWorksMVCCore.Web.Service.Implementation
      {
        
          public class CategoryService : ICategoryService
          {
              private readonly CYCLE_STOREContext _context;
              public CategoryService(CYCLE_STOREContext context)
              {
                  _context = context;
              }
              public List<ProductCategory> GetCategoriesWithSubCategory()
              {
                  return _context.ProductCategory
                          .Include(category => category.ProductSubcategory)
                          .ToList(); 
               }
          }
      }
      

      Now that you have the interface and instantiation completed, the next step is to add the dependency resolver. This adds the interface to the application’s service collection and acts as a map on how to instantiate that class when it’s injected into a class constructor.

    7. To add this mapping, open the startup.cs file and add the following line of code below where you added DbContext:
      services.TryAddScoped<ICategoryService, CategoryService>();

      You may also need to add the following references:

      using AdventureWorksMVCCore.Web.Service.Implementation;
      using AdventureWorksMVCCore.Web.Service.Interface;
      using Microsoft.Extensions.DependencyInjection;
      using Microsoft.Extensions.DependencyInjection.Extensions;
      

Setting up view components

In this section, you move the UI to your new ASP.NET Core project. In ASP.NET Core, the default folder structure for managing views is different from what it was in ASP.NET MVC. Also, the formatting of the Razor files is slightly different.

    1. Under Views, Shared, create a folder called Components.
    2. Create a sub folder called Header.
    3. In the Header folder, create a new view called Default.cshtml and enter the following code:
      <div method="post" asp-action="header" asp-controller="home">
          <div id="hd">
              <div id="doc2" class="yui-t3 wrapper">
                  <table>
                      <tr>
                          <td>
                              <h2 class="banner">
                                  <a href="@Url.Action("Default","Home")" id="LnkHome">
                                      <img src="/Images/logo.png" style="width:125px;height:125px" alt="ComponentOne" />
       
                                  </a>
                              </h2>
                          </td>
                          <td class="hd-header"><h2 class="banner">Unicorn Bike Rentals</h2></td>
                      </tr>
                  </table>
              </div>
       
          </div>
      </div>
      
    4. Create a class within the Header folder called HeaderLayout.cs and enter the following code:
      using Microsoft.AspNetCore.Mvc;
       
      namespace AdventureWorksMVCCore.Web
      {
          public class HeaderViewComponent : ViewComponent
          {
              public IViewComponentResult Invoke()
              {
                  return View();
              }
          }
      }
      

      You can now create the content view component, which shows bike categories and subcategories.

    5. Under Views, Shared, Components, create a folder called Content
    6. Create a class ContentLayoutModel.cs and enter the following code:
      using AdventureWorksMVCCore.Web.Models;
      using System.Collections.Generic;
       
      namespace AdventureWorksMVCCore.Web.Views.Components
      {
          public class ContentLayoutModel
          {
              public List<ProductCategory> ProductCategories { get; set; }
          }
      }
      
    7. In this folder, create a view Default.cshtml and enter the following code:
      @model AdventureWorksMVCCore.Web.Views.Components.ContentLayoutModel
       
      <div method="post" asp-action="footer" asp-controller="home">
          <div class="content">
       
              <div class="footerinner">
                  <div id="PnlExpFooter">
                      <div>
                          @foreach (var category in Model.ProductCategories)
                          {
                            <div asp-for="@category.Name" id=@($"{category.Name}Menu")>
                                  <h1>
                                      <b>@category.Name</b>
                                  </h1>
                                    <ul id=@($"{category.Name}List")>
                                      @foreach (var subCategory in category.ProductSubcategory.ToList())
                                      {
                                          <li>@subCategory.Name</li>
                                      }
                                  </ul>
                              </div>
                          }
                      </div>
                  </div>
              </div>
          </div>
      </div>
    8. Create a class ContentLayout.cs and enter the following code:
      using AdventureWorksMVCCore.Web.Models;
      using AdventureWorksMVCCore.Web.Service.Interface;
      using Microsoft.AspNetCore.Mvc;
       
      namespace AdventureWorksMVCCore.Web.Views.Components
      {
          public class ContentViewComponent : ViewComponent
          {
              private readonly CYCLE_STOREContext _context;
              private readonly ICategoryService _categoryService;
              public ContentViewComponent(CYCLE_STOREContext context,
                  ICategoryService categoryService)
              {
                  _context = context;
                  _categoryService = categoryService;
              }
       
              public IViewComponentResult Invoke()
              {
                  ContentLayoutModel content = new ContentLayoutModel();
                  content.ProductCategories = _categoryService.GetCategoriesWithSubCategory();
                  return View(content);
              }
       
       
          }
      }

      The website layout is driven by the _Layout.cshtml file.

    9. To render the header and portal the way you want, modify _Layout.cshtml and replace the existing code with the following code:
      <!DOCTYPE html>
      <html lang="en">
      <head>
          <meta name="viewport" content="width=device-width" />
          <title>Core Cycles Store</title>
          <link rel="apple-touch-icon" sizes="180x180" href="favicon/apple-touch-icon.png" />
          <link rel="icon" type="image/png" href="favicon/favicon-32x32.png" sizes="32x32" />
          <link rel="icon" type="image/png" href="favicon/favicon-16x16.png" sizes="16x16" />
          <link rel="manifest" href="favicon/manifest.json" />
          <link rel="mask-icon" href="favicon/safari-pinned-tab.svg" color="#503b75" />
          <link href="@Url.Content("~/css/StyleSheet.css")" rel="stylesheet" />
      </head>
      <body class='@ViewBag.BodyClass' id="body1">
          @await Component.InvokeAsync("Header");
       
          <div id="doc2" class="yui-t3 wrapper">
              <div id="bd">
                  <div id="yui-main">
                      <div class="content">
                          <div> 
                              @RenderBody()
                          </div>
                      </div>
                  </div>
              </div>
          </div>
      </body>
      </html>
      

      Upon completion your directory should look like the following screenshot.
      View component setup

Modifying the index file

In this final step, you modify the Home, Index.cshtml file to hold this content ViewComponent:

@{
    ViewBag.Title = "Core Cycles";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
 
<div id="homepage" class="">    
    <div class="content-mid">
        @await Component.InvokeAsync("Content");
    </div>
 
</div>

You can now build the solution. You should have an MVC .NET Core 3.1 application running with data from your database. The following screenshot shows the website view.

Re-architected code output

Securing the database user and password

The CloudFormation stack you launched also created a Secrets Manager entry to store the CYCLE_STORE database user ID and password. As an optional step, you can use that entry to retrieve the database user ID and password instead of hard-coding them in the connection string.

To do so, you can use the AWS Secrets Manager client-side caching library. The dependency package is also available through NuGet. For this post, I use NuGet to add the library to the project.

  1. On the NuGet Package Manager console, browse for AWSSDK.SecretsManager.Caching.
  2. Choose the library and install it.
  3. Follow these steps to also install Newtonsoft.Json.
    Add Aws Secrects Manager nuget
  4. Add a new class ServicesConfiguration to this solution and enter the following code in the class. Make sure all the references are added to the class. This is an extension method so we made the class static:
    public static class ServicesConfiguration
       {
           public static async Task<Dictionary<string, string>> GetSqlCredential(this IServiceCollection services, string secretId)
           {
     
               var credential = new Dictionary<string, string>();
     
               using (var secretsManager = new AmazonSecretsManagerClient(Amazon.RegionEndpoint.USEast1))
               using (var cache = new SecretsManagerCache(secretsManager))
               {
     
     
                   var sec = await cache.GetSecretString(secretId);
                   var jo = Newtonsoft.Json.Linq.JObject.Parse(sec);
     
                   credential["username"] = jo["username"].ToObject<string>();
                   credential["password"] = jo["password"].ToObject<string>();
               }
     
               return credential;
           }
       }
    
  5. In appsettings.Development.json, replace the DefaultConnection with the following code:
    "Server=sqlrdsdb.cph0bnedghnc.us-east-1.rds.amazonaws.com; Database=CYCLE_STORE;User Id=<UserId>;Password=<Password>;"
  6. Add the following code in the startup.cs, which replaces the placeholder user ID and password with the value retrieved from Secrets Manager:
    Dictionary<string, string> secrets = services.GetSqlCredential("CycleStoreCredentials").Result;
    connectionString = connectionString.Replace("<UserId>", secrets["username"]);
    connectionString = connectionString.Replace("<Password>", secrets["password"]);
    services.AddDbContext<CYCLE_STOREContext>(options => options.UseSqlServer(connectionString));

    Build the solution again.
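Putting these pieces together, the relevant part of ConfigureServices might look like the following sketch; the secret name CycleStoreCredentials matches the code above, and you may need additional using statements for your Models and Service namespaces:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Read the connection string template that still contains the <UserId> and <Password> placeholders.
    string connectionString = Configuration.GetConnectionString("DefaultConnection");

    // Fetch the real credentials from AWS Secrets Manager through the extension method created earlier.
    Dictionary<string, string> secrets = services.GetSqlCredential("CycleStoreCredentials").Result;
    connectionString = connectionString.Replace("<UserId>", secrets["username"]);
    connectionString = connectionString.Replace("<Password>", secrets["password"]);

    // Register the Entity Framework Core context and the category service with the resolved connection string.
    services.AddDbContext<CYCLE_STOREContext>(options => options.UseSqlServer(connectionString));
    services.TryAddScoped<ICategoryService, CategoryService>();
}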

Cleaning up

To avoid incurring future charges, on the AWS CloudFormation console, delete the SQLRDSEXStack stack.

Conclusion

This post showed the process to modernize a legacy enterprise MVC ASP.NET web application using .NET Core and convert Entity Framework to Entity Framework Core. In Part 2 of this post, we take this one step further to show you how to host this application in Linux containers.

About the Author

Saleha Haider is a Senior Partner Solution Architect with Amazon Web Services.
Pratip Bagchi is a Partner Solutions Architect with Amazon Web Services.

 

An Introduction to C & GUI Programming – the new book from Raspberry Pi Press

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/an-introduction-to-c-gui-programming-the-new-book-from-raspberry-pi-press/

The latest book from Raspberry Pi Press, An Introduction to C & GUI Programming, is now available. Author Simon Long explains how it came to be written…

An Introduction to C and GUI programming by Simon Long

Learning C

I remember my first day in a ‘proper’ job very well. I’d just left university, and was delighted to have been taken on by a world-renowned consultancy firm as a software engineer. I was told that most of my work would be in C, which I had never used, so the first order of business was to learn it.

My manager handed me a copy of Kernighan & Ritchie’s The C Programming Language, pointed to a terminal in the corner, said ‘That’s got a compiler. Off you go!’, and left me to it. So, I started reading the book, which is affectionately known to most software engineers as ‘K&R‘.

I didn’t get very far. K&R is basically the specification of the C language. Dennis Ritchie, the eponymous ‘R’, invented C, and while the book he helped write is an excellent reference guide, it is not a great introduction for a beginner. Like most people who know their subject inside out, the authors tend to assume that you know more than you do, so reading the book when you don’t know anything about the language at all is a little frustrating. I do know people who have learned C from K&R, and they have my undying respect!

I ended up learning C on the job as I went along; I looked at other people’s code, hacked stuff together, worked out why things didn’t work, asked for help from my colleagues, made a lot of mistakes, and gradually got the hang of it. I found only one book that was helpful for a beginner: it was called C For Yourself, and was actually one of the manuals for the long-extinct Microsoft QuickC compiler. That book is now impossible to find, so I’ve always had to tell people that the best book for learning C as a beginner is ‘C For Yourself, but you won’t be able to find a copy!’

Writing An Introduction to C & GUI Programming

When I embarked on this project, the editor of The MagPi and I were discussing possible series for the magazine, and we thought about creating a guide to writing GUI applications in C — that’s what I do in my day job at Raspberry Pi, so it seemed a logical place to start. We realised that the reader would need to know C to benefit from the series, and they wouldn’t be able to find a copy of C For Yourself. We decided that I ought to solve that problem first, so I wrote the original beginners’ guide to C series for The MagPi.

(At this point, I should stress that the series is aimed at absolute beginners. I freely admit that I have simplified parts of the language so that the reader does not have to absorb as much in one go. So yes, I do know about returning a success/fail code from a program, but beginners really don’t need to learn about that in the first chapter — especially when many will never need to write a program which does it. That’s why it isn’t explained until Chapter 9.)

An Introduction to C and GUI programming by Simon Long published by Raspberry Pi Press

So, the beginners’ guide to C came first, and I have now got round to writing the second part, which was what I’d planned to write all along. The section on GUIs describes how to write applications using the GTK toolkit, which is used for most of the Raspberry Pi Desktop and its associated applications. GTK is very powerful, and allows you to write rich graphical user interfaces with relatively few lines of code, but it’s not the most intuitive for beginners. (Much like C itself!) The book walks you through the basics of creating a window, putting widgets on it, and making the widgets do useful things, and gets you to the point where you know enough to be able to write an application like the ones I have written for the Raspberry Pi Desktop.

An Introduction to C and GUI programming by Simon Long published by Raspberry Pi Press

It then seemed logical to bring the two parts together in a single volume, so that someone with no experience of C has enough information to go from a standing start to writing useful desktop applications.

I hope that I’ve achieved that and if nothing else, I hope that I’ve written a book which is a bit more approachable for beginners than K&R!

Get An Introduction to C & GUI Programming today!

An Introduction to C & GUI Programming is available today from the Raspberry Pi Press online store, or as a free download here. You can also pick up a copy from the Raspberry Pi Store in Cambridge, or ask your local bookstore if they have it in stock or can order it in for you.

Alex interjects to state the obvious: Basically, what we’re saying here is that there’s no reason for you not to read Simon’s book. Oh, and it feels really nice too.

The post An Introduction to C & GUI Programming – the new book from Raspberry Pi Press appeared first on Raspberry Pi.

Best Practices for Porting Applications to the EC2 A1 Instance Type

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/best-practices-for-porting-applications-to-the-ec2-a1-instance-type/

This post courtesy of Dr. Jonathan Shapiro-Ward, AWS Solutions Architect

The new Amazon EC2 A1 instance types are powered by an ARM based AWS Graviton CPU. A1 instances are extremely cost effective and are ideal for scale-out scenarios where a large number of smaller instances are required.

Prior to the launch of A1 instances, all AWS instances were x86 based. There are a number of significant differences between the two instruction sets. The predominant difference is that ARM follows a RISC (Reduced Instruction Set Computer) design, whereas x86 is a CISC (Complex Instruction Set Computer) architecture. In short, ARM is a comparatively simple architecture composed of simple instructions which execute within a single cycle. Meanwhile, x86 has mostly complex instructions that execute over multiple cycles. This key difference has a range of implications for compiler and hardware complexity, power efficiency, and performance, but the key question to application developers is portability.

Workloads built for x86 will not run on the A1 family of instances; they must be ported. In many cases this is trivial. An extensive range of Free and Open Source software supports ARM and requires no modification to port workloads. Aside from installing a different binary, the process for installing the Apache Web Server, Nginx, PostgreSQL, Docker, and many more applications is unchanged. Unfortunately, porting is not always so simple, especially for applications developed in house. Ideally, software is written to be portable, but this is not always the case. Even workloads written in languages designed for portability, such as Java, can prove a challenge. In this blog post, we'll review common challenges and migration paths when porting from x86-based architectures to ARM.

A General Porting Strategy

  1. Check for Core Language and Platform Support. The vast majority of common languages such as Java, Python, Perl, Ruby, PHP, Go, Rust, and so forth support modern ARM architectures. Likewise, major frameworks such as Django, Spring, Hadoop, Apache Spark, and many more run on ARM. More niche languages and frameworks may not have as robust support. If your language or crucial framework is dependent on x86, you may not be able to run your application on the A1 instance type.
  2. Identify all third party libraries and dependencies. All non-trivial applications rely on third party libraries to provide essential functionality. These can range from a standard library, to open source libraries, to paid for proprietary libraries. Examine these libraries and determine if they support ARM. In the event that a library is dependent upon a specific architecture, search for an open issue around ARM support or inquire with the vendor as to the roadmap. If possible, investigate alternative libraries if ARM support is not forthcoming.
  3. Identify Porting Path. There are three common strategies for porting. The strategy that applies will depend upon the language being used.
    1. For interpreted languages and those compiled to bytecode, the first step is to translate runbooks, scripts, AMIs, and templates to install the ARM equivalent of the interpreter or language VM. For instance, installing the ARM version of the JVM or CPython. If installation is done via a package manager, no change may be necessary. Subsequently, it is necessary to ensure that any native code libraries are replaced with their ARM equivalents. For Java, this would entail swapping out libraries leveraging JNI. For Python, this would entail swapping out CPython C-API based libraries (such as numpy). Once again, if this is done via a package manager such as yum or pip, manual intervention should not be necessary.
    2. In the case of compiled languages such as C/C++, the application will have to be re-compiled for ARM. If your application utilizes any machine specific features or relies on behavior that varies between compilers, it may be necessary to re-write parts of your application. This is discussed in more detail below. If your application follows common standards such as ANSI C and avoids any machine specific dependencies, the majority of effort will be spent in modifying the build process. This will depend upon your build process and will likely involve modifying build configurations, makefiles, configure scripts, or other assets. The binary can either be compiled on an A1 instance or cross compiled. Once a binary has been produced, the deployment process should be adapted to target the A1 instance.
    3. For applications built using platform specific languages and frameworks, which have an ARM alternative, the application will have to be ported to this alternative. By far the most common example of this is the .NET Framework. In order to run on ARM, .NET applications must be ported to .NET Core or to Mono. This will likely entail a non-trivial modification to the application codebase. This is discussed in more detail below.
  4. Test! All tests must be ported over and, if there is insufficient test coverage, it may be necessary to write new tests. This is especially pertinent if you had to recompile your application. A strong test suite should identify any issues arising from machine specific code that behaves incorrectly on the A1 instance type. Perform the usual range of unit testing, acceptance testing, and pre-prod testing.
  5. Update your infrastructure as code resources, such as AWS CloudFormation templates, to provision your application to A1 instances. This will likely be a simple change, modifying the instance type, AMI, and user data to reflect the change to the A1 instance.
  6. Perform a Green/Blue Deployment. Create your new A1 based stack alongside your existing stack and leverage Route 53 weighted routing to route 10% of requests to the new stack. Monitor error rates, user behavior, load, and other critical factors in order to determine the health of the ported application in production. If the application behaves correctly, swap over all traffic to the new stack. Otherwise, reexamine the application and identify the root cause of any errors.

Porting C/C++ Applications to A1 Instances

C and C++ are very portable languages; indeed, C runs on more architectures than any other language. That is not to say, however, that all C will run on all systems. When porting applications to A1 instances, many of the challenges one might expect do not arise: the AWS Graviton chip is little endian, just like x86, and int, float, double, and other common types are the same size on both architectures. This does not, however, guarantee portability. Most commonly, issues in porting a C-based application arise from aspects of the C standard that are architecture- and implementation-dependent. Let’s briefly look at some examples of C that are not portable between architectures.

The most frequently discussed issue in porting C from x86 to ARM is the use of the char datatype. The C standard leaves it to the implementation to decide whether char is signed or unsigned. On x86 Linux, a char is signed by default; on ARM Linux, it is unsigned. The discrepancy exists for performance reasons: unsigned char types result in more efficient ARM assembly. It can, however, cause bugs. Let’s examine the following code listing:

//Code Listing 1
#include <stdio.h>

int main(){
    char c = -1;
    if (c < 0){
        printf("The value of the char is less than 0\n");
    } else {
        printf("The value of the char is greater than 0\n");
    }
    return 0;
}

On an x86 instance (in this case a t3.large), the above code has the expected result, printing “The value of the char is less than 0”. On an A1 instance, it does not. There are mechanisms around this; for example, GCC has the -fsigned-char flag, which forces all char types to become signed upon compilation. Crucially, a developer must be aware of these types of issues ahead of time: not all compilers and warning levels will provide appropriate warnings around char signedness (or around many other issues arising from architectural differences). As a result, porting can introduce unexpected errors without rigorous testing. This makes a comprehensive set of tests for your application an essential part of the porting process.
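
If code depends on signed char semantics, the portable fix is to state the signedness in the source rather than rely on the platform default. Here is a sketch of how Code Listing 1 could be made architecture-independent with a fixed-width type:

// Code sketch: Code Listing 1 with the signedness made explicit
#include <stdio.h>
#include <stdint.h>

int main(){
    int8_t c = -1; /* explicitly signed 8-bit type, same on x86 and ARM */
    if (c < 0){
        printf("The value of the char is less than 0\n");
    } else {
        printf("The value of the char is greater than 0\n");
    }
    return 0;
}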

If your application has only ever been built for a single target environment (e.g., x86 Linux with GCC), potentially unexpected behaviors can emerge when that application is built for a different architecture or with a different compiler.
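
A good defensive measure is to make such layout assumptions explicit, so a build for a new architecture fails loudly at compile time rather than misbehaving at runtime. Below is a minimal sketch using C11 static assertions; the specific sizes asserted are assumptions you would adapt to your own codebase:

// Code sketch: compile-time checks of portability assumptions (C11)
#include <assert.h>
#include <limits.h>

static_assert(sizeof(int) == 4, "int is expected to be 32 bits");
static_assert(sizeof(void *) == 8, "pointers are expected to be 64 bits");
static_assert(CHAR_BIT == 8, "bytes are expected to be 8 bits");

int main(void) { return 0; }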

The key best practice for ensuring the portability of your C applications (and other compiled languages) is to adhere to a standard. Vanilla C99 will ensure the broadest compatibility across architectures and operating systems. A compiler-specific dialect such as gnu99 will also ensure compatibility, but can tether you to one compiler, as the sketch below illustrates.
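
The fragment below contrasts a GCC-specific construct with a standard-conforming equivalent; the macro is a hypothetical example rather than code from any particular codebase:

// Code sketch: gnu99 extension versus portable C99
#include <stdio.h>

/* gnu99 only: statement expressions and __typeof__ are GCC extensions */
#define MAX_GNU(a, b) ({ __typeof__(a) _a = (a); __typeof__(b) _b = (b); _a > _b ? _a : _b; })

/* Portable C99: a plain function, no extensions required */
static int max_int(int a, int b) { return a > b ? a : b; }

int main(void){
    printf("%d %d\n", MAX_GNU(2, 3), max_int(2, 3)); /* prints "3 3" */
    return 0;
}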

Static analysis should always be used when building your applications. It helps to detect bugs, security flaws, and, most pertinent to our discussion, compatibility issues.

Porting .NET Applications to A1 Instances

Porting a .NET application from a Windows instance to an A1 Linux instance can yield significant cost reductions and is an effective way to economically scale out web apps and other parallelizable workloads.

For all intents and purposes, the .NET Framework only runs on x86 Windows. This limitation does not necessarily prevent running your .NET application on an A1 instance. There are two principal .NET implementations: the .NET Framework and .NET Core. The .NET Framework is the original implementation and is tightly coupled to x86 Windows. .NET Core, meanwhile, is a more recent open source project, developed by Microsoft, which is decoupled from Windows and runs on a variety of platforms, including ARM Linux. There is one final option, Mono, an open source implementation of the .NET Framework that runs on a variety of architectures.

For greenfield projects, .NET Core has become the de facto option, as it has a number of advantages over the alternatives. Projects based on .NET Core are cross-platform and significantly lighter than .NET Framework and Mono projects, making them far better suited to developing microservices and to running in containers or serverless environments. From version 2.1 onward, .NET Core supports ARM.

For existing projects, .NET Core is the best migration path to containers and to running on A1 instances. There are, however, a number of factors that might prohibit replatforming to .NET Core. These include:

• Dependency on Windows-specific APIs.
• Reliance on features that are only available in the .NET Framework, such as WPF or Windows Forms. Many of these features are coming to .NET Core as part of .NET Core 3 but, at the time of writing, this is in preview.
• Use of third-party libraries that do not support .NET Core.
• Use of an unsupported language. At the time of writing, .NET Core only supports C#, F#, and Visual Basic.

There are a number of tools to help port from the .NET Framework to .NET Core. These include the .NET Portability Analyzer, which will analyze a .NET codebase and flag any factors that might prohibit porting.

Conclusion

A1 instances can deliver significant cost savings over other instance types and are ideal for scale-out applications such as microservices. In many cases, moving to A1 instances can be easy: many languages, frameworks, and applications have strong support for ARM. For Python, Java, Ruby, and other open source languages, porting can be trivial. For other applications, such as native binary applications or .NET applications, there can be some challenges. By examining your application and determining what x86 dependencies, if any, exist, you can devise a migration strategy that enables you to make use of A1 instances.

Introducing the C++ Lambda Runtime

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/introducing-the-c-lambda-runtime/

This post is courtesy of Marco Magdy, AWS Software Development Engineer – AWS SDKs and Tools

Today, AWS Lambda announced the availability of the Runtime API. The Runtime API allows you to write your Lambda functions in any language, provided that you bundle that language’s runtime with your application artifact or as a Lambda layer that your application uses.

As an example of using this API, and based on customer demand, AWS is releasing a reference implementation of a C++ runtime for Lambda. This C++ runtime brings the simplicity and expressiveness of interpreted languages while maintaining the performance and low memory footprint of C++. These benefits align well with the event-driven, function-based development model of Lambda applications.

Hello World

Start by writing a Hello World Lambda function in C++ using this runtime.

Prerequisites

You need a Linux-based environment (I recommend Amazon Linux), with the following packages installed:

  • A C++11 compiler, either GCC 5.x or later or Clang 3.3 or later. On Amazon Linux, run the following commands:
    $ yum install gcc64-c++ libcurl-devel
    $ export CC=gcc64
    $ export CXX=g++64
  • CMake v.3.5 or later. On Amazon Linux, run the following command:
    $ yum install cmake3
  • Git

Download and compile the runtime

The first step is to download & compile the runtime:

$ cd ~ 
$ git clone https://github.com/awslabs/aws-lambda-cpp.git
$ cd aws-lambda-cpp
$ mkdir build
$ cd build
$ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF \
   -DCMAKE_INSTALL_PREFIX=~/out
$ make && make install

This builds and installs the runtime as a static library under the directory ~/out.

Create your C++ function

The next step is to build the Lambda C++ function.

  1. Create a new directory for this project:
    $ mkdir hello-cpp-world
    $ cd hello-cpp-world
  2. In that directory, create a file named main.cpp with the following content:
    // main.cpp
    #include <aws/lambda-runtime/runtime.h>
    
    using namespace aws::lambda_runtime;
    
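    // The handler receives the invocation payload in 'request' and returns an
    // invocation_response; success() takes the response body and its content type.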
    invocation_response my_handler(invocation_request const& request)
    {
       return invocation_response::success("Hello, World!", "application/json");
    }
    
    int main()
    {
       run_handler(my_handler);
       return 0;
    }
  3. Create a file named CMakeLists.txt in the same directory, with the following content:
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_CXX_STANDARD 11)
    project(hello LANGUAGES CXX)
    
    find_package(aws-lambda-runtime REQUIRED)
    add_executable(${PROJECT_NAME} "main.cpp")
    target_link_libraries(${PROJECT_NAME} PUBLIC AWS::aws-lambda-runtime)
    aws_lambda_package_target(${PROJECT_NAME})
  4. To build this executable, create a build directory and run CMake from there:
    $ mkdir build
    $ cd build
    $ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/out
    $ make

    This compiles and links the executable in release mode.

  5. To package this executable along with all its dependencies, run the following command:
    $ make aws-lambda-package-hello

    This creates a zip file in the same directory named after your project, in this case hello.zip.

Create the Lambda function

Using the AWS CLI, you create the Lambda function. First, create a role for the Lambda function to execute under.

  1. Create the following JSON file for the trust policy and name it trust-policy.json.
    {
     "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": ["lambda.amazonaws.com"]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. Using the AWS CLI, run the following command:
    $ aws iam create-role \
    --role-name lambda-cpp-demo \
    --assume-role-policy-document file://trust-policy.json

    This should output JSON that contains the newly created IAM role information. Make sure to note down the "Arn" value from that JSON; you need it later. The Arn looks like the following:

    "Arn": "arn:aws:iam::<account_id>:role/lambda-cpp-demo"

  3. Create the Lambda function:
    $ aws lambda create-function \
    --function-name hello-world \
    --role <specify the role arn from the previous step> \
    --runtime provided \
    --timeout 15 \
    --memory-size 128 \
    --handler hello \
    --zip-file fileb://hello.zip
  4. Invoke the function using the AWS CLI:
    $ aws lambda invoke --function-name hello-world --payload '{ }' output.txt

    You should see the following output:

    {
      "StatusCode": 200
    }

    A file named output.txt containing the words “Hello, World!” should be in the current directory.

Beyond Hello

OK, well that was exciting, but how about doing something slightly more interesting?

The following example shows you how to download a file from Amazon S3 and do some basic processing of its contents. To interact with AWS, you need the AWS SDK for C++.

Prerequisites

If you don’t have them already, install the following libraries:

  • zlib-devel
  • openssl-devel
  1. Build the AWS SDK for C++:
    $ cd ~
    $ git clone https://github.com/aws/aws-sdk-cpp.git
    $ cd aws-sdk-cpp
    $ mkdir build
    $ cd build
    $ cmake3 .. -DBUILD_ONLY=s3 \
     -DBUILD_SHARED_LIBS=OFF \
     -DENABLE_UNITY_BUILD=ON \
     -DCMAKE_BUILD_TYPE=Release \
     -DCMAKE_INSTALL_PREFIX=~/out
    
    $ make && make install

    This builds the S3 SDK as a static library and installs it in ~/out.

  2. Create a directory for the new application’s logic:
    $ cd ~
    $ mkdir cpp-encoder-example
    $ cd cpp-encoder-example
  3. Now, create the following main.cpp:
    // main.cpp
    #include <aws/core/Aws.h>
    #include <aws/core/utils/logging/LogLevel.h>
    #include <aws/core/utils/logging/ConsoleLogSystem.h>
    #include <aws/core/utils/logging/LogMacros.h>
    #include <aws/core/utils/json/JsonSerializer.h>
    #include <aws/core/utils/HashingUtils.h>
    #include <aws/core/platform/Environment.h>
    #include <aws/core/client/ClientConfiguration.h>
    #include <aws/core/auth/AWSCredentialsProvider.h>
    #include <aws/s3/S3Client.h>
    #include <aws/s3/model/GetObjectRequest.h>
    #include <aws/lambda-runtime/runtime.h>
    #include <iostream>
    #include <memory>
    
    using namespace aws::lambda_runtime;
    
    std::string download_and_encode_file(
        Aws::S3::S3Client const& client,
        Aws::String const& bucket,
        Aws::String const& key,
        Aws::String& encoded_output);
    
    std::string encode(Aws::IOStream& stream, Aws::String& output);
    char const TAG[] = "LAMBDA_ALLOC";
    
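    // Handler: expects a JSON payload with "s3bucket" and "s3key" fields and
    // replies with the object's contents encoded as base64.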
    static invocation_response my_handler(invocation_request const& req, Aws::S3::S3Client const& client)
    {
        using namespace Aws::Utils::Json;
        JsonValue json(req.payload);
        if (!json.WasParseSuccessful()) {
            return invocation_response::failure("Failed to parse input JSON", "InvalidJSON");
        }
    
        auto v = json.View();
    
        if (!v.ValueExists("s3bucket") || !v.ValueExists("s3key") || !v.GetObject("s3bucket").IsString() ||
            !v.GetObject("s3key").IsString()) {
            return invocation_response::failure("Missing input value s3bucket or s3key", "InvalidJSON");
        }
    
        auto bucket = v.GetString("s3bucket");
        auto key = v.GetString("s3key");
    
        AWS_LOGSTREAM_INFO(TAG, "Attempting to download file from s3://" << bucket << "/" << key);
    
        Aws::String base64_encoded_file;
        auto err = download_and_encode_file(client, bucket, key, base64_encoded_file);
        if (!err.empty()) {
            return invocation_response::failure(err, "DownloadFailure");
        }
    
        return invocation_response::success(base64_encoded_file, "application/base64");
    }
    
    std::function<std::shared_ptr<Aws::Utils::Logging::LogSystemInterface>()> GetConsoleLoggerFactory()
    {
        return [] {
            return Aws::MakeShared<Aws::Utils::Logging::ConsoleLogSystem>(
                "console_logger", Aws::Utils::Logging::LogLevel::Trace);
        };
    }
    
    int main()
    {
        using namespace Aws;
        SDKOptions options;
        options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Trace;
        options.loggingOptions.logger_create_fn = GetConsoleLoggerFactory();
        InitAPI(options);
        {
            Client::ClientConfiguration config;
            config.region = Aws::Environment::GetEnv("AWS_REGION");
            config.caFile = "/etc/pki/tls/certs/ca-bundle.crt";
    
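            // Lambda exposes the execution role's credentials as environment
            // variables, so the environment credentials provider picks them up.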
            auto credentialsProvider = Aws::MakeShared<Aws::Auth::EnvironmentAWSCredentialsProvider>(TAG);
            S3::S3Client client(credentialsProvider, config);
            auto handler_fn = [&client](aws::lambda_runtime::invocation_request const& req) {
                return my_handler(req, client);
            };
            run_handler(handler_fn);
        }
        ShutdownAPI(options);
        return 0;
    }
    
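    // Reads the whole stream into memory and base64-encodes it into 'output'.
    // An empty return value signals success, matching the error-string
    // convention used by download_and_encode_file below.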
    std::string encode(Aws::IOStream& stream, Aws::String& output)
    {
        Aws::Vector<unsigned char> bits;
        bits.reserve(stream.tellp());
        stream.seekg(0, stream.beg);
    
        char streamBuffer[1024 * 4];
        while (stream.good()) {
            stream.read(streamBuffer, sizeof(streamBuffer));
            auto bytesRead = stream.gcount();
    
            if (bytesRead > 0) {
                bits.insert(bits.end(), (unsigned char*)streamBuffer, (unsigned char*)streamBuffer + bytesRead);
            }
        }
        Aws::Utils::ByteBuffer bb(bits.data(), bits.size());
        output = Aws::Utils::HashingUtils::Base64Encode(bb);
        return {};
    }
    
    std::string download_and_encode_file(
        Aws::S3::S3Client const& client,
        Aws::String const& bucket,
        Aws::String const& key,
        Aws::String& encoded_output)
    {
        using namespace Aws;
    
        S3::Model::GetObjectRequest request;
        request.WithBucket(bucket).WithKey(key);
    
        auto outcome = client.GetObject(request);
        if (outcome.IsSuccess()) {
            AWS_LOGSTREAM_INFO(TAG, "Download completed!");
            auto& s = outcome.GetResult().GetBody();
            return encode(s, encoded_output);
        }
        else {
            AWS_LOGSTREAM_ERROR(TAG, "Failed with error: " << outcome.GetError());
            return outcome.GetError().GetMessage();
        }
    }

    This Lambda function expects the input payload to contain an S3 bucket and an S3 key. It then downloads that resource from S3, encodes it as base64, and sends it back as the response of the Lambda function. This can be useful for displaying an image in a webpage, for example.

  4. Next, create the following CMakeLists.txt file in the same directory.
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_CXX_STANDARD 11)
    project(encoder LANGUAGES CXX)
    
    find_package(aws-lambda-runtime REQUIRED)
    find_package(AWSSDK COMPONENTS s3)
    
    add_executable(${PROJECT_NAME} "main.cpp")
    target_link_libraries(${PROJECT_NAME} PUBLIC
                          AWS::aws-lambda-runtime
                           ${AWSSDK_LINK_LIBRARIES})
    
    aws_lambda_package_target(${PROJECT_NAME})
  5. Follow the same build steps as before:
    $ mkdir build
    $ cd build
    $ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/out
    $ make
    $ make aws-lambda-package-encoder

    Notice how the target name for packaging has changed to aws-lambda-package-encoder. The CMake function aws_lambda_package_target() always creates a packaging target named after its input.

    You should now have a file named “encoder.zip” in your build directory.

  6. Before you create the Lambda function, modify the IAM role that you created earlier to allow it to access S3.
    $ aws iam attach-role-policy \
    --role-name lambda-cpp-demo \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
  7. Using the AWS CLI, create the new Lambda function:
    $ aws lambda create-function \
    --function-name encode-file \
    --role <specify the same role arn used in the prior Lambda> \
    --runtime provided \
    --timeout 15 \
    --memory-size 128 \
    --handler encoder \
    --zip-file fileb://encoder.zip
  8. Using the AWS CLI, run the function. Make sure to use an S3 bucket in the same Region as the Lambda function:
    $ aws lambda invoke --function-name encode-file --payload '{"s3bucket": "your_bucket_name", "s3key":"your_file_key" }' base64_image.txt

    You can use an online base64 image decoder and paste in the contents of the output file to verify that everything is working. In a real-world scenario, you would inject the output of this Lambda function into an HTML img tag, for example.

Conclusion

With the new Lambda Runtime API, a door to new possibilities has opened. This C++ runtime enables you to do more with Lambda than you ever could before.

More in-depth details, along with examples, can be found in the GitHub repository. With it, you can start writing Lambda functions in C++ today. AWS will continue evolving the contents of this repository with additional enhancements and samples. I’m so excited to see what you build using this runtime. I appreciate feedback sent via issues in GitHub.

Happy hacking!

[$] A filesystem “change journal” and other topics

Post Syndicated from jake original https://lwn.net/Articles/755277/rss

At the 2017 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM), Amir Goldstein presented his work on adding a superblock watch mechanism to provide a scalable way to notify applications of changes in a filesystem. At the 2018 edition of LSFMM, he was back to discuss adding NTFS-like change journals to the kernel in support of backup solutions of various sorts. As a second topic for the session, he also wanted to discuss doing more performance-regression testing for filesystems.

EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-m5-instances-with-local-nvme-storage-m5d/

Earlier this month we launched the C5 Instances with Local NVMe Storage and I told you that we would be doing the same for additional instance types in the near future!

Today we are introducing M5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for workloads that require a balance of compute and memory resources. Here are the specs:

Instance Name | vCPUs | RAM     | Local Storage       | EBS-Optimized Bandwidth | Network Bandwidth
m5d.large     | 2     | 8 GiB   | 1 x 75 GB NVMe SSD  | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.xlarge    | 4     | 16 GiB  | 1 x 150 GB NVMe SSD | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.2xlarge   | 8     | 32 GiB  | 1 x 300 GB NVMe SSD | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.4xlarge   | 16    | 64 GiB  | 1 x 600 GB NVMe SSD | 2.210 Gbps              | Up to 10 Gbps
m5d.12xlarge  | 48    | 192 GiB | 2 x 900 GB NVMe SSD | 5.0 Gbps                | 10 Gbps
m5d.24xlarge  | 96    | 384 GiB | 4 x 900 GB NVMe SSD | 10.0 Gbps               | 25 Gbps

The M5d instances are powered by Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, including support for AVX-512.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage on the M5d instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Jeff;

 

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues in your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Facebook blocked me again

Post Syndicated from Boyan Yurukov original https://yurukov.net/blog/2018/block-facebook-2/

I have been blocked on Facebook yet again, and for the second time the reason is that I ironically quoted someone else’s words. In this case, I commented under a post by Yordan Stefanov, who writes “Science and Critical Thinking” and the blog 6nine. The occasion was his appearance on Koritarov’s show, where he spoke about the recent protests by anti-vaxxers against making vaccines mandatory. By and large, there is not much more to say than there was about their previous protest, but it apparently needs repeating over and over, so it is good that Yordan takes it upon himself to do it.

For that appearance he earned yet another round of name-calling in certain Facebook groups. I sent him one of the more colourful examples and quoted it under his post. Here is the original comment, posted on the wall of one of the organizers of the anti-vax protest:

Two days later my profile was blocked for 24 hours because my comment supposedly did not meet the community standards. I clearly marked it as a quote, and the context makes that obvious anyway. That, of course, makes no difference, since the algorithms do not understand context. They look for keywords, and my comment certainly contained a few worrying ones. That is what earned me the block.

Since Yordan’s post was private, I don’t think anyone reported it specifically. More likely people had been sending reports about me and other things I have written, and when the algorithm caught the keyword in question, it fired. Something similar happened two years ago, when I likewise mocked the tough guys who made a fuss that the refugees in Germany were beating women, yet who, whenever domestic violence comes up, trot out the usual talking points. Given the debate around that convention, that post seems more relevant than ever.

The interesting thing in this case is that the comment I quoted is still up, despite using the same keywords and despite numerous reports, while mine has already been deleted. This is a small example of why algorithms should not be allowed to filter content and hand out punishments. Yet exactly that is being prepared as legislation at the European level, and not just for online harassment but also for the protection of intellectual property.

In the end, though, the block lasts only 24 hours, and I will once again get an apology from Fb, as I did the previous times. If Yordan’s appearance annoyed the anti-vaxxers this much, it is worth watching, as is heading over to his blog 6nine.

[$] Advanced computing with IPython

Post Syndicated from jake original https://lwn.net/Articles/756192/rss

If you use Python, there’s a good chance you have heard of IPython, which provides an enhanced read-eval-print loop (REPL) for Python. But there is more to IPython than just a more convenient REPL. Today’s IPython comes with integrated libraries that turn it into an assistant for several advanced computing tasks. We will look at two of those tasks, using multiple languages and distributed computing, in this article.

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/756489/rss

Security updates have been issued by CentOS (procps, xmlrpc, and xmlrpc3), Debian (batik, prosody, redmine, wireshark, and zookeeper), Fedora (jasper, kernel, poppler, and xmlrpc), Mageia (git and wireshark), Red Hat (rh-java-common-xmlrpc), Slackware (git), SUSE (bzr, dpdk-thunderxdpdk, and ocaml), and Ubuntu (exempi).

AWS Resources Addressing Argentina’s Personal Data Protection Law and Disposition No. 11/2006

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/aws-and-resources-addressing-argentinas-personal-data-protection-law-and-disposition-no-112006/

We have two new resources to help customers address their data protection requirements in Argentina. These resources specifically address the needs outlined under the Personal Data Protection Law No. 25.326, as supplemented by Regulatory Decree No. 1558/2001 (“PDPL”), including Disposition No. 11/2006. For context, the PDPL is an Argentine federal law that applies to the protection of personal data, including during transfer and processing.

A new webpage focused on data privacy in Argentina features FAQs, helpful links, and whitepapers that provide an overview of PDPL considerations, as well as our security assurance frameworks and international certifications, including ISO 27001, ISO 27017, and ISO 27018. You’ll also find details about our Information Request Report and the high bar of security at AWS data centers.

Additionally, we’ve released a new workbook that offers a detailed mapping as to how customers can operate securely under the Shared Responsibility Model while also aligning with Disposition No. 11/2006. The AWS Disposition 11/2006 Workbook can be downloaded from the Argentina Data Privacy page or directly from this link. Both resources are also available in Spanish from the Privacidad de los datos en Argentina page.

Want more AWS Security news? Follow us on Twitter.

 

Microsoft acquires GitHub

Post Syndicated from corbet original https://lwn.net/Articles/756443/rss

Here’s the press release announcing Microsoft’s agreement to acquire GitHub for a mere $7.5 billion. “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries. Developers will continue to be able to use the programming languages, tools and operating systems of their choice for their projects — and will still be able to deploy their code to any operating system, any cloud and any device.”

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Build Your Own weather station kit assembled

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools around the world that had applied to be part of our weather station programme. The original kit included a special HAT that allows the Pi to collect weather data with a set of sensors.

The original Raspberry Pi Oracle Weather Station HAT – Build Your Own Raspberry Pi weather station

The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, but we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Build Your Own Raspberry Pi weather station

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced both in terms of electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can build a functioning weather station without soldering with our guide, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.

Kernel 4.17 released

Post Syndicated from corbet original https://lwn.net/Articles/756373/rss

Linus has released the 4.17 kernel, which will indeed be called “4.17”: “No, I didn’t call it 5.0, even though all the git object count numerology was in place for that. It will happen in the not _too_ distant future, and I’m told all the release scripts on kernel.org are ready for it, but I didn’t feel there was any real reason for it.”

Headline features in this release include improved load estimation in the CPU scheduler, raw BPF tracepoints, lazytime support in the XFS filesystem, full in-kernel TLS protocol support, histogram triggers for tracing, mitigations for the latest Spectre variants, and, of course, the removal of support for eight unloved processor architectures.

Sunday, 3 June 2018

Post Syndicated from georgi original http://georgi.unixsol.org/diary/archive.php/2018-06-03

Everyone needs to be saved from the filth called “advertising” in all its forms. For people with a computer and a browser, this has long been a solved problem thanks to AdBlock and similar plugins (as long as you don’t use a browser like Chrome, but in that case you deserve everything that happens to you).

As a rule, I don’t leave a computer without AdBlock installed; that is downright a public service. The trouble is that on a mobile phone, even if you use Firefox and have the right add-ons, the apps still find ways to spam you.

Now, if you have rooted your phone (which nobody does), you can do something about it, but it is a hassle, and as we all know, convenience always wins over security.

Fortunately, there is a very easy way to get rid of the wretched spammers in two simple steps:

1. Install Blokada.

2. Activate it.

Et voilà – no more spam will pop up anywhere.

How does it work? It presents itself as a VPN, because that gives it the ability to filter DNS queries; when some app asks about pagead.doubleclick.net and the like, it simply answers with 0.0.0.0.

Simple, effective, requires no root, and hits the wallet of all the internet riffraff who imagine they can shower you with their garbage 24/7.
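
As a rough sketch of that technique (my illustration, not Blokada’s actual code; the blocklist entries are hypothetical), the core of such a filter is a lookup that decides whether to sink or forward each queried name:

// Code sketch: the essence of a DNS sinkhole lookup
#include <stdio.h>
#include <string.h>

static const char *blocklist[] = {
    "pagead.doubleclick.net",
    "ads.example.com",
};

/* Returns "0.0.0.0" for blocked names, or NULL to forward the query. */
static const char *filter_query(const char *qname)
{
    for (size_t i = 0; i < sizeof(blocklist) / sizeof(blocklist[0]); i++) {
        if (strcmp(qname, blocklist[i]) == 0)
            return "0.0.0.0";
    }
    return NULL;
}

int main(void)
{
    const char *answer = filter_query("pagead.doubleclick.net");
    printf("%s\n", answer ? answer : "forward to the real resolver");
    return 0;
}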

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (whether public or private). Yet thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but the passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn’t seem to be an agreed-upon solution.

I’ve previously argued against the 12-factor app recommendation to use environment variables: if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store that script somewhere. In fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky); shared storage, e.g. an FTP server or an S3 bucket with limited access; or a separate git repository. I prefer the git repository, as it allows versioning (note: S3 does too, but it’s provider-specific). So you can store all your environment-specific properties files, with all their credentials and environment-specific configurations, in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or Bitbucket for their repositories, storing production credentials with a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do that is with git-crypt. Its encryption is “transparent” because it supports diff and encrypts and decrypts on the fly; once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS path), which generates a key. Then you specify your .gitattributes, e.g. like this:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it’s an existing repo, you have to clean up your history, which contains the unencrypted files. Following these steps will get you there, with one addition: before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and back up the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to back up your secret key (which is generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.

git-crypt’s authors claim it shines when it comes to encrypting just a few files in an otherwise public repo, and they recommend looking at git-remote-gcrypt for encrypting a whole repository. But as there are often non-sensitive parts of environment-specific configurations, you may not want to encrypt everything, and I think it’s perfectly fine to use git-crypt even in a separate-repo scenario. Even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to have the environment configurations in the same repo, especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is: how do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here: first, properties that can vary across environments but have sensible default values (e.g. scheduled job periods); and second, properties that require explicit configuration (e.g. database credentials). The former can have their default values bundled in the code repo, and therefore in the release artifact, with external files allowed to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Facebook’s strategy against fake news

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/02/fb-8/

At the end of May, Facebook published its strategy against fake news.

The strategy has three parts:

  • Removing accounts and content that violate Facebook’s community standards or advertising policies
  • Reducing the distribution of false news and of content such as clickbait
  • Informing users

More about each part