Tag Archives: osx

OWASP ZSC – Obfuscated Code Generator Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/owasp-zsc-obfuscated-code-generator-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


OWASP ZSC is an open source obfuscated code generator tool in Python which lets you generate customized shellcode and convert scripts into obfuscated scripts.

Shellcode is a small piece of code, written in assembly language, that can be used as the payload in software exploitation. Other uses include malware, bypassing antivirus software, obfuscating code for protection, and so on.

This software can be run on Windows/Linux/OSX under Python.

Why use OWASP ZSC Obfuscated Code Generator Tool

A good reason for obfuscating files or generating shellcode with ZSC is that it can be used for pen-testing assignments.

Read the rest of OWASP ZSC – Obfuscated Code Generator Tool now! Only available at Darknet.

New – Managed Device Authentication for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-device-authentication-for-amazon-workspaces/

Amazon WorkSpaces allows you to access a virtual desktop in the cloud from the web and from a wide variety of desktop and mobile devices. This flexibility makes WorkSpaces ideal for environments where users have the ability to use their existing devices (often known as BYOD, or Bring Your Own Device). In these environments, organizations sometimes need the ability to manage the devices which can access WorkSpaces. For example, they may have to regulate access based on the client device operating system, version, or patch level in order to help meet compliance or security policy requirements.

Managed Device Authentication
Today we are launching device authentication for WorkSpaces. You can now use digital certificates to manage client access from Apple OSX and Microsoft Windows. You can also choose to allow or block access from iOS, Android, Chrome OS, web, and zero client devices. You can implement policies to control which device types you want to allow and which ones you want to block, with control all the way down to the patch level. Access policies are set for each WorkSpaces directory. After you have set the policies, requests to connect to WorkSpaces from a client device are assessed and either blocked or allowed. In order to make use of this feature, you will need to distribute certificates to your client devices using Microsoft System Center Configuration Manager or a mobile device management (MDM) tool.

Here’s how you set your access control options from the WorkSpaces Console:

Here’s what happens if a client is not authorized to connect:

 

Available Today
This feature is now available in all Regions where WorkSpaces is available.

Jeff;

 

SAML for Your Serverless JavaScript Application: Part II

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/saml-for-your-serverless-javascript-application-part-ii/

Contributors: Richard Threlkeld, Gene Ting, Stefano Buliani

The full code for both scenarios—including SAM templates—can be found at the samljs-serverless-sample GitHub repository. We highly recommend that you use the SAM templates in the GitHub repository to create the resources; optionally, you can create them manually.


This is the second part of a two-part series on using SAML providers in your application and receiving short-term credentials to access AWS services. These credentials can be limited with IAM roles so the users of the applications can perform actions like fetching data from databases or uploading files based on their level of authorization. For example, you may want to build a JavaScript application that allows a user to authenticate against Active Directory Federation Services (ADFS). The user can be granted scoped AWS credentials to invoke an API to display information in the application or write to an Amazon DynamoDB table.

Part I of this series walked through a client-side flow of retrieving SAML claims and passing them to Amazon Cognito to retrieve credentials. This blog post will take you through a more advanced scenario where logic can be moved to the backend for a more comprehensive and flexible solution.

Prerequisites

As in Part I of this series, you need ADFS running in your environment. The following configurations are used for reference:

  1. ADFS federated with the AWS console. For a walkthrough with an AWS CloudFormation template, see Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0.
  2. Verify that you can authenticate with user example\bob for both the ADFS-Dev and ADFS-Production groups via the sign-in page.
  3. Create an Amazon Cognito identity pool.

Scenario Overview

The scenario in the last blog post may be sufficient for many organizations but, due to size restrictions, some browsers may drop part or all of a query string when sending a large number of claims in the SAMLResponse. Additionally, for auditing and logging reasons, you may wish to relay SAML assertions via POST only and perform parsing in the backend before sending credentials to the client. This scenario allows you to perform custom business logic and validation as well as putting tracking controls in place.

In this post, we want to show you how these requirements can be achieved in a Serverless application. We also show how different challenges (like XML parsing and JWT exchange) can be done in a Serverless application design. Feel free to mix and match, or swap pieces around to suit your needs.

This scenario uses the following services and features:

  • Cognito for unique ID generation and default role mapping
  • S3 for static website hosting
  • API Gateway for receiving the SAMLResponse POST from ADFS
  • Lambda for processing the SAML assertion using a native XML parser
  • DynamoDB conditional writes for session tracking exceptions
  • STS for credentials via Lambda
  • KMS for signing JWT tokens
  • API Gateway custom authorizers for controlling per-session access to credentials, using JWT tokens that were signed with KMS keys
  • JavaScript-generated SDK from API Gateway using a service proxy to DynamoDB
  • RelayState in the SAMLRequest to ADFS to transmit the CognitoID and a short code from the client to your AWS backend

At a high level, this solution is similar to that of Scenario 1; however, most of the work is done in the infrastructure rather than on the client.

  • ADFS still uses a POST binding to redirect the SAMLResponse to API Gateway; however, the Lambda function does not immediately redirect.
  • The Lambda function decodes and uses an XML parser to read the properties of the SAML assertion.
  • If the user’s assertion shows that they belong to a certain group matching a specified string (“Prod” in the sample), then you assign a role that they can assume (“ADFS-Production”).
  • Lambda then gets the credentials on behalf of the user and stores them in DynamoDB as well as logging an audit record in a separate table.
  • Lambda then returns a short-lived, signed JSON Web Token (JWT) to the JavaScript application.
  • The application uses the JWT to get their stored credentials from DynamoDB through an API Gateway custom authorizer.

The architecture you build in this tutorial is outlined in the following diagram.

lambdasamltwo_1.png

First, a user visits your static website hosted on S3. They generate an ephemeral random code that is transmitted during redirection to ADFS, where they are prompted for their Active Directory credentials.
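
To make that first step concrete, the following is a rough browser-side sketch (not the sample site's actual code): it assumes the adfsUrl and relayingPartyId values from the website's configs.js described later in this post, that the AWS SDK for JavaScript is loaded with Amazon Cognito credentials configured, and that ADFS accepts the nested RelayState encoding produced by the hypothetical buildRelayState helper.

// Illustrative only: generate an ephemeral code and redirect the browser to ADFS.
// buildRelayState() encodes the relying party ID plus our value the way an
// IdP-initiated sign-on expects; this encoding is an assumption, not verified here.
function buildRelayState(rpId, value) {
    return 'RPID=' + encodeURIComponent(rpId) + '&RelayState=' + encodeURIComponent(value);
}

var shortCode = Math.random().toString(36).substring(2, 10);   // illustration only; prefer a cryptographically strong generator
sessionStorage.setItem('authCode', shortCode);                 // kept for the credential fetch later

AWS.config.credentials.get(function(err) {
    if (err) { return console.error(err); }
    var cognitoId = AWS.config.credentials.identityId;
    var relayState = buildRelayState(relayingPartyId, cognitoId + '&' + shortCode);
    window.location = adfsUrl + '?RelayState=' + encodeURIComponent(relayState);
});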

Upon successful authentication, the ADFS server redirects the SAMLResponse assertion, along with the code (as the RelayState) via POST to API Gateway.

The Lambda function parses the SAMLResponse. If the user is part of the appropriate Active Directory group (AWS-Production in this tutorial), it retrieves credentials from STS on behalf of the user.

The credentials are stored in a DynamoDB table called SAMLSessions, along with the short code. The user login is stored in a tracking table called SAMLUsers.

The Lambda function generates a JWT token, with a 30-second expiration time signed with KMS, then redirects the client back to the static website along with this token.

The client then makes a call to an API Gateway resource acting as a DynamoDB service proxy that retrieves the credentials via a DeleteItem call. To make this call, the client passes the JWT in the authorization header.

A custom authorizer runs to validate the token using the KMS key again as well as the original random code.

Now that the client has credentials, it can use these to access AWS resources.
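
For reference, once the DeleteItem response reaches the browser, handing the temporary credentials to the AWS SDK for JavaScript takes only a few lines; the field names below assume the STS credential structure was stored unchanged in DynamoDB.

// Browser-side sketch: configure the AWS SDK with the retrieved temporary credentials.
// creds is assumed to be the credential object returned by the SAMLSessions DeleteItem call.
AWS.config.update({
    region: 'us-east-1',   // your region
    credentials: new AWS.Credentials(creds.AccessKeyId, creds.SecretAccessKey, creds.SessionToken)
});
// Subsequent SDK calls (or calls through the generated API Gateway SDK) are signed with these credentials.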

Tutorial: Backend processing and audit tracking

Before you walk through this tutorial, you will need the source code from the samljs-serverless-sample GitHub repository. You should use the SAM template provided in order to streamline the process, but we'll also outline how you would create the resources manually. There is a readme in the repository with instructions for using the SAM template. Either way, you will still perform the manual steps of KMS key configuration, ADFS enablement of RelayState, and Amazon Cognito identity pool creation. The template automates the creation of the S3 website, Lambda functions, API Gateway resources, and DynamoDB tables.

We walk through the details of all the steps and configuration below for illustrative purposes, calling out the sections that can be omitted if you used the SAM template.

KMS key configuration

To sign JWT tokens, you need a data key generated by the KMS master key you create here; the encrypted copy of the data key is stored in DynamoDB, and only the decrypted plaintext copy, held in memory, is used for signing. You will need to complete this step even if you use the SAM template.

  1. In the IAM console, choose Encryption Keys, Create Key.
  2. For Alias, type sessionMaster.
  3. For Advanced Options, choose KMS, Next Step.
  4. For Key Administrative Permissions, select your administrative role or user account.
  5. For Key Usage Permissions, you can leave this blank as the IAM Role (next section) will have individual key actions configured. This allows you to perform administrative actions on the set of keys while the Lambda functions have rights to just create data keys for encryption/decryption and use them to sign JWTs.
  6. Take note of the Key ID, which is needed for the Lambda functions.

IAM role configuration

You will need an IAM role for executing your Lambda functions. If you are using the SAM template this can be skipped. The sample code in the GitHub repository under Scenario2 creates separate roles for each function, with limited permissions on individual resources when you use the SAM template. We recommend separate roles scoped to individual resources for production deployments. Your Lambda functions need the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1432927122000",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "kms:GenerateDataKey*",
                "kms:Decrypt"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Lambda function configuration

If you are not using the SAM template, create the following three Lambda functions from the GitHub repository in /Scenario2/lambda using the following names and environment variables. The Lambda functions are written in Node.js.

  • GenerateKey_awslabs_samldemo
  • ProcessSAML_awslabs_samldemo
  • SAMLCustomAuth_awslabs_samldemo

The functions above are built, packaged, and uploaded to Lambda. For two of the functions, this can be done from your workstation (the sample commands for each function assume OSX or Linux). The third will need to be built on an AWS EC2 instance running the current Lambda AMI.

GenerateKey_awslabs_samldemo

This function is only used one time to create keys in KMS for signing JWT tokens. The function calls GenerateDataKey and stores the encrypted CipherText blob as Base64 in DynamoDB. This is used by the other two functions for getting the PlainTextKey for signing with a Decrypt operation.

This function only requires a single file. It has the following environment variables:

  • KMS_KEY_ID: Unique identifier from KMS for your sessionMaster Key
  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or something unique to your organization)
  • RAND_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
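
To make the flow concrete before you package the function, here is a simplified sketch of the core GenerateDataKey and DynamoDB calls. It is not the repository code; the encryption context key name (entity) and the item attribute names are assumptions for illustration.

// Simplified sketch of GenerateKey_awslabs_samldemo (see the repository for the real implementation).
var AWS = require('aws-sdk');
var kms = new AWS.KMS();
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context) {
    var params = {
        KeyId: process.env.KMS_KEY_ID,                          // the sessionMaster key
        KeySpec: 'AES_256',
        EncryptionContext: { entity: process.env.ENC_CONTEXT }  // context key name assumed
    };
    kms.generateDataKey(params, function(err, data) {
        if (err) { return context.fail(err); }
        // Persist only the encrypted copy of the data key; the plaintext copy is discarded.
        docClient.put({
            TableName: process.env.SESSION_DDB_TABLE,
            Item: {
                identityhash: process.env.RAND_HASH,
                cipherText: data.CiphertextBlob.toString('base64')
            }
        }, function(putErr) {
            if (putErr) { return context.fail(putErr); }
            context.succeed('Encrypted data key stored');
        });
    });
};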

Navigate into /Scenario2/lambda/GenerateKey and run the following commands:

zip -r generateKey.zip .

aws lambda create-function --function-name GenerateKey_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler index.handler --timeout 10 --memory-size 512 --zip-file fileb://generateKey.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,RAND_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,KMS_KEY_ID=<KMS_KEY_ID>}

SAMLCustomAuth_awslabs_samldemo

This is an API Gateway custom authorizer that runs after the client has been redirected back to the website as part of the login workflow, when the client calls the GET method on the DynamoDB service proxy to retrieve its credentials. The function validates the KMS-based signature of the JWT created in the ProcessSAML_awslabs_samldemo function and also validates the random code that was generated at the beginning of the login workflow.

You must install the dependencies before zipping this function up. It has the following environment variables:

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
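
Before you package it, it may help to see the general shape of such a KMS-backed authorizer: fetch the stored ciphertext, decrypt it, verify the JWT with nJwt, and return an IAM policy for the method being called. The sketch below is not the repository code; the table attribute names and the encryption context key are assumptions, and the real function's validation of the random code is omitted here.

// Simplified sketch of SAMLCustomAuth_awslabs_samldemo (see the repository for the real implementation).
var AWS = require('aws-sdk');
var nJwt = require('njwt');
var kms = new AWS.KMS();
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context) {
    // Fetch the Base64 CipherText blob written by GenerateKey_awslabs_samldemo.
    docClient.get({
        TableName: process.env.SESSION_DDB_TABLE,
        Key: { identityhash: process.env.ID_HASH }
    }, function(err, data) {
        if (err || !data.Item) { return context.fail('Unauthorized'); }
        kms.decrypt({
            CiphertextBlob: new Buffer(data.Item.cipherText, 'base64'),
            EncryptionContext: { entity: process.env.ENC_CONTEXT }   // context key name assumed
        }, function(kmsErr, keyData) {
            if (kmsErr) { return context.fail('Unauthorized'); }
            try {
                // Throws if the token is expired or the signature does not match.
                var verified = nJwt.verify(event.authorizationToken, keyData.Plaintext);
                context.succeed({
                    principalId: verified.body.sub || 'user',
                    policyDocument: {
                        Version: '2012-10-17',
                        Statement: [{
                            Effect: 'Allow',
                            Action: 'execute-api:Invoke',
                            Resource: event.methodArn
                        }]
                    }
                });
            } catch (e) {
                context.fail('Unauthorized');
            }
        });
    });
};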

Navigate into /Scenario2/lambda/CustomAuth and run:

npm install

zip -r custom_auth.zip .

aws lambda create-function --function-name SAMLCustomAuth_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler CustomAuth.handler --timeout 10 --memory-size 512 --zip-file fileb://custom_auth.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}

ProcessSAML_awslabs_samldemo

This function is called when ADFS sends the SAMLResponse to API Gateway. The function parses the SAML assertion to select a role (based on a simple string search) and extract user information. It then uses this data to get short-term credentials from STS via AssumeRoleWithSAML and stores this information in a SAMLSessions table and tracks the user login via a SAMLUsers table. Both of these are DynamoDB tables but you could also store the user information in another AWS database type, as this is for auditing purposes. Finally, this function creates a JWT (signed with the KMS key) which is only valid for 30 seconds and is returned to the client as part of a 302 redirect from API Gateway.
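
As a rough illustration of the STS and JWT steps inside this function (the repository code also parses the SAML XML with libxmljs, writes the audit record, and builds the 302 redirect), consider the sketch below; the claim names and the helper's inputs are assumptions, and PRINCIPAL_ARN matches the environment variable configured during deployment later in this section.

// Simplified sketch of the credential/JWT portion of ProcessSAML_awslabs_samldemo.
var AWS = require('aws-sdk');
var nJwt = require('njwt');
var sts = new AWS.STS();

// samlResponseB64: the Base64 SAMLResponse exactly as posted by ADFS
// selectedRole:    the role ARN chosen from the assertion (e.g., ADFS-Production)
// plaintextKey:    the decrypted data key used for signing
function issueCredentials(samlResponseB64, selectedRole, plaintextKey, callback) {
    sts.assumeRoleWithSAML({
        PrincipalArn: process.env.PRINCIPAL_ARN,   // ARN of the ADFS IdP in IAM
        RoleArn: selectedRole,
        SAMLAssertion: samlResponseB64
    }, function(err, data) {
        if (err) { return callback(err); }
        // Short-lived token for the client; 30 seconds matches the flow described above.
        var jwt = nJwt.create({ iss: 'urn:samldemo', sub: data.Subject }, plaintextKey);
        jwt.setExpiration(new Date().getTime() + (30 * 1000));
        callback(null, { credentials: data.Credentials, token: jwt.compact() });
    });
}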

This function needs to be built on an EC2 server running Amazon Linux. This function leverages two main external libraries:

  • nJwt: Used for secure JWT creation for individual client sessions to get access to their records
  • libxmljs: Used for XML XPath queries of the decoded SAMLResponse from AD FS

Libxmljs uses native build tools, so you should build it on an EC2 instance running the same AMI as Lambda, with Node.js v4.3.2; otherwise, you might see errors. For the current Lambda AMI, see Lambda Execution Environment and Available Libraries.

After you have the correct AMI launched in EC2 and have SSH open to that host, install Node.js. Ensure that the Node.js version on EC2 is 4.3.2, to match Lambda. If your version is off, you can roll back with NVM.

After you have set up Node.js, run the following command:

sudo yum install -y make gcc*

Now, create a /saml folder on your EC2 server and copy up ProcessSAML.js and package.json from /Scenario2/lambda/ProcessSAML to the EC2 server. Here is a sample SCP command:

cd ProcessSAML/

ls

package.json    ProcessSAML.js

scp -i ~/path/yourpemfile.pem ./* ec2-user@<EC2 public DNS>:/home/ec2-user/saml/

Then you can SSH to your server, cd into the /saml directory, and run:

npm install

A successful build should look similar to the following:

lambdasamltwo_2.png

Finally, zip up the package and create the function using the following AWS CLI command and these environment variables. Configure the CLI with your credentials as needed.

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • PRINCIPAL_ARN: Full ARN of the AD FS IdP created in the IAM console
  • USER_DDB_TABLE: SAMLUsers
  • REDIRECT_URL: Endpoint URL of your static S3 website (or CloudFront distribution domain name if you did that optional step)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
zip -r saml.zip .

aws lambda create-function --function-name ProcessSAML_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler ProcessSAML.handler --timeout 10 --memory-size 512 --zip-file fileb://saml.zip --environment Variables={USER_DDB_TABLE=SAMLUsers,SESSION_DDB_TABLE=SAMLSessions,REDIRECT_URL=<your S3 bucket and test page path>,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,ENC_CONTEXT=ADFS,PRINCIPAL_ARN=<your ADFS IdP ARN>}

If you built the first two functions on your workstation and created the ProcessSAML_awslabs_samldemo function separately in the Lambda console before building on EC2, you can update the code after building on EC2 with the following command:

aws lambda update-function-code --function-name ProcessSAML_awslabs_samldemo --zip-file fileb://saml.zip

Role trust policy configuration

This scenario uses STS directly to assume a role. You will need to complete this step even if you use the SAM template. Modify the trust policy, as you did before when Amazon Cognito was assuming the role. In the GitHub repository sample code, ProcessSAML.js is preconfigured to filter and select a role with “Prod” in the name via the selectedRole variable.

This is an example of business logic you can alter in your organization later, such as a callout to an external mapping database for other rules matching. In this tutorial, it corresponds to the ADFS-Production role that was created.

  1. In the IAM console, choose Roles and open the ADFS-Production Role.
  2. Edit the Trust Permissions field and replace the content with the following:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": [
              "arn:aws:iam::ACCOUNTNUMBER:saml-provider/ADFS"
            ]
          },
          "Action": "sts:AssumeRoleWithSAML"
        }
      ]
    }

If you end up using another role (or add more complex filtering/selection logic), ensure that those roles have similar trust policy configurations. Also note that the sample policy above purposely uses an array for the federated provider matching the IdP ARN that you added. If your environment has multiple SAML providers, you could list them here and modify the code in ProcessSAML.js to process requests from different IdPs and grant or revoke credentials accordingly.

DynamoDB table creation

If you are not using the SAM template, create two DynamoDB tables:

  • SAMLSessions: Temporarily stores credentials from STS. Credentials are removed by an API Gateway Service Proxy to the DynamoDB DeleteItem call that simultaneously returns the credentials to the client.
  • SAMLUsers: This table is for tracking user information and the last time they authenticated in the system via ADFS.

The following AWS CLI commands create the tables, each indexed only with a primary hash key (called identityhash and CognitoID, respectively):

aws dynamodb create-table \
    --table-name SAMLSessions \
    --attribute-definitions \
        AttributeName=identityhash,AttributeType=S \
    --key-schema AttributeName=identityhash,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
aws dynamodb create-table \
    --table-name SAMLUsers \
    --attribute-definitions \
        AttributeName=CognitoID,AttributeType=S \
    --key-schema AttributeName=CognitoID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

After the tables are created, you should be able to run the GenerateKey_awslabs_samldemo Lambda function and see a CipherText key stored in SAMLSessions. This is only for convenience of this post, to demonstrate that you should persist CipherText keys in a data store and never persist plaintext keys that have been decrypted. You should also never log plaintext keys in your code.
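
If you prefer to confirm this from a script rather than from the console, a few lines against the AWS SDK for JavaScript will invoke the function and show what landed in the table; the file name and region below are placeholders.

// check-key.js – minimal verification sketch (run with: node check-key.js)
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });
var docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

lambda.invoke({ FunctionName: 'GenerateKey_awslabs_samldemo' }, function(err) {
    if (err) { return console.error(err); }
    docClient.scan({ TableName: 'SAMLSessions' }, function(scanErr, data) {
        if (scanErr) { return console.error(scanErr); }
        // You should see an item containing the Base64 CipherText blob (never a plaintext key).
        console.log(JSON.stringify(data.Items, null, 2));
    });
});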

API Gateway configuration

If you are not using the SAM template, you will need to create API Gateway resources. If you have created resources for Scenario 1 in Part I, then the naming of these resources may be similar. If that is the case, then simply create an API with a different name (SAMLAuth2 or similar) and follow these steps accordingly.

  1. In the API Gateway console for your API, choose Authorizers, Custom Authorizer.
  2. Select your region and enter SAMLCustomAuth_awslabs_samldemo for the Lambda function. Choose a friendly name like JWTParser and ensure that Identity token source is method.request.header.Authorization. This tells the custom authorizer to look for the JWT in the Authorization header of the HTTP request, which is specified in the JavaScript code on your S3 webpage. Save the changes.

    lambdasamltwo_3.png

Now it’s time to wire up the Lambda functions to API Gateway.

  1. In the API Gateway console, choose Resources, select your API, and then create a Child Resource called SAML. This includes a POST and a GET method. The POST method uses the ProcessSAML_awslabs_samldemo Lambda function and a 302 redirect, while the GET method uses the JWTParser custom authorizer with a service proxy to DynamoDB to retrieve credentials upon successful authorization.
  2. lambdasamltwo_4.png

  3. Create a POST method. For Integration Type, choose Lambda and add the ProcessSAML_awslabs_samldemo Lambda function. For Method Request, add headers called RelayState and SAMLResponse.

    lambdasamltwo_5.png

  4. Delete the Method Response code for 200 and add a 302. Create a response header called Location. In the Response Models section, for Content-Type, choose application/json and for Models, choose Empty.

    lambdasamltwo_6.png

  5. Delete the Integration Response section for 200 and add one for 302 that has a Method response status of 302. Edit the response header for Location to add a Mapping value of integration.response.body.location.

    lambdasamltwo_7.png

  6. Finally, in order for Lambda to capture the SAMLResponse and RelayState values, choose Integration Request.

  7. In the Body Mapping Template section, for Content-Type, enter application/x-www-form-urlencoded and add the following template:

    {
    "SAMLResponse" :"$input.params('SAMLResponse')",
    "RelayState" :"$input.params('RelayState')",
    "formparams" : $input.json('$')
    }

  8. Create a GET method with an Integration Type of Service Proxy. Select the region and DynamoDB as the AWS Service. Use POST for the HTTP method and DeleteItem for the Action. This is important as you leverage a DynamoDB feature to return the current records when you perform deletion. This simultaneously allows credentials in this system to not be stored long term and also allows clients to retrieve them. For Execution role, use the Lambda role from earlier or a new role that only has IAM scoped permissions for DeleteItem on the SAMLSessions table.

    lambdasamltwo_8.png

  9. Save this and open Method Request.

  10. For Authorization, select your custom authorizer JWTParser. Add in a header called COGNITO_ID and save the changes.

    lambdasamltwo_9.png

  11. In the Integration Request, add in a header name of Content-Type and a value for Mapped of 'application/x-amzn-json-1.0' (you need the single quotes surrounding the entry).

  12. Next, in the Body Mapping Template section, for Content-Type, enter application/json and add the following template:

    {
        "TableName": "SAMLSessions",
        "Key": {
            "identityhash": {
                "S": "$input.params('COGNITO_ID')"
            }
        },
        "ReturnValues": "ALL_OLD"
    }

Inspect this closely for a moment. When your client passes the JWT in an Authorization Header to this GET method, the JWTParser Custom Authorizer grants/denies executing a DeleteItem call on the SAMLSessions table.


If access is granted, there needs to be an item in the table whose primary key matches the reference to delete. The client JavaScript (seen in a moment) passes its CognitoID through as a header called COGNITO_ID, which is mapped above. DeleteItem executes and removes the credentials that were placed there via a call to STS by the ProcessSAML_awslabs_samldemo Lambda function. Because the above action specifies ALL_OLD under the ReturnValues mapping, DynamoDB returns these credentials at the same time.

lambdasamltwo_10.png
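
For completeness, here is roughly what the corresponding browser call looks like with the SDK that API Gateway generates later in this walkthrough. The method name samlGet is an assumption based on how the generated SDK derives names from the resource path, and jwt and cognitoId are assumed to have been kept from earlier steps.

// Browser-side sketch: exchange the JWT for the stored credentials through the /saml GET method.
var apigClient = apigClientFactory.newClient();   // from the generated apigClient.js

apigClient.samlGet({}, {}, {
    headers: {
        Authorization: jwt,     // validated by the JWTParser custom authorizer
        COGNITO_ID: cognitoId   // mapped to the DeleteItem key by the body mapping template
    }
}).then(function(result) {
    var creds = result.data;    // the ALL_OLD attributes returned by DeleteItem
    console.log('Successful DDB call', creds);
}).catch(function(err) {
    console.error(err);
});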

  1. Save the changes and open your /saml resource root.
  2. Choose Actions, Enable CORS.
  3. In the Access-Control-Allow-Headers section, add COGNITO_ID into the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  4. When completed, choose Actions, Deploy API. Use the Prod stage or another stage.
  5. In the Stage Editor, choose SDK Generation. For Platform, choose JavaScript and then choose Generate SDK. Save the folder someplace close. Take note of the Invoke URL value at the top, as you need this for ADFS configuration later.

Website configuration

If you are not using the SAM template, create an S3 bucket and configure it as a static website in the same way that you did for Part I.

If you are using the SAM template, the website bucket is created for you automatically; however, the steps below will still need to be completed:

In the source code repository, edit /Scenario2/website/configs.js.

  1. Ensure that the identityPool value matches your Amazon Cognito Pool ID and the region is correct.
  2. Leave adfsUrl the same if you’re testing on your lab server; otherwise, update with the AD FS DNS entries as appropriate.
  3. Update the relayingPartyId value as well if you used something different from the prerequisite blog post.

Next, download the minified version of the AWS SDK for JavaScript in the Browser (aws-sdk.min.js) and place it along with the other files in /Scenario2/website into the S3 bucket.

Copy the files from the API Gateway generated SDK from the last section to this bucket so that apigClient.js and the lib folder are both in the root directory. The imports for these scripts (which do things like sign API requests and configure headers for the JWT in the Authorization header) are already included in the index.html file. Consult the latest API Gateway documentation if the SDK generation process changes in the future.

ADFS configuration

Now that the AWS setup is complete, modify your ADFS setup to capture RelayState information about the client and to send the POST response to API Gateway for processing. You will need to complete this step even if you use the SAM template.

If you’re using Windows Server 2008 with ADFS 2.0, ensure that Update Rollup 2 is installed before enabling RelayState. Please see official Microsoft documentation for specific download information.

  1. After Update Rollup 2 is installed, modify %systemroot%\inetpub\adfs\ls\web.config. If you’re on a newer version of Windows Server running AD FS 3.0, modify %systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config.
  2. Find the section in the XML marked <Microsoft.identityServer.web> and add an entry for <useRelayStateForIdpInitiatedSignOn enabled="true">. If you have the proper ADFS rollup or version installed, this should allow the RelayState parameter to be accepted by the service provider.
  3. In the ADFS console, open Relying Party Trusts for Amazon Web Services and choose Endpoints.
  4. For Binding, choose POST and for Invoke URL, enter the URL to your API Gateway from the stage that you noted earlier.

At this point, you are ready to test out your webpage. Navigate to the S3 static website Endpoint URL and it should redirect you to the ADFS login screen. If the user login has been recent enough to have a valid SAML cookie, then you should see the login pass-through; otherwise, a login prompt appears. After the authentication has taken place, you should quickly end up back at your original webpage. Using the browser debugging tools, you see “Successful DDB call” followed by the results of a call to STS that were stored in DynamoDB.

lambdasamltwo_11.png

As in Scenario 1, the sample code under /scenario2/website/index.html has a button that allows you to “ping” an endpoint to test whether the federated credentials are working. If you have used the SAM template, this should already be set up and you can test it out (it will fail at first; keep reading to find out how to set the IAM permissions!). If not, go to API Gateway and create a new resource called /users at the same level as /saml in your API, with a GET method.

lambdasamltwo_12.png

For Integration type, choose Mock.

lambdasamltwo_13.png

In the Method Request, for Authorization, choose AWS_IAM. In the Integration Response, in the Body Mapping Template section, for Content-Type, choose application/json and add the following JSON:

{
    "status": "Success",
    "agent": "${context.identity.userAgent}"
}

lambdasamltwo_14.png

Before using this new Mock API as a test, configure CORS and re-generate the JavaScript SDK so that the browser knows about the new methods.

  1. On the /saml resource root, choose Actions, Enable CORS.
  2. In the Access-Control-Allow-Headers section, add COGNITO_ID at the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  3. Choose Actions, Deploy API. Use the stage that you configured earlier.
  4. In the Stage Editor, choose SDK Generation and select JavaScript as your platform. Choose Generate SDK.
  5. Upload the new apigClient.js and lib directory to the S3 bucket of your static website.

One last thing must be completed before testing so that the credentials can invoke this mock endpoint with AWS_IAM authorization: the ADFS-Production role needs execute-api:Invoke permissions for this API Gateway resource. (You will need to complete this step even if you use the SAM template.)

  1. In the IAM console, choose Roles, and open the ADFS-Production Role.

  2. For testing, you can attach the AmazonAPIGatewayInvokeFullAccess policy; however, for production, you should scope this down to the resource as documented in Control Access to API Gateway with IAM Permissions (a scripted example of such a scoped policy appears just below).

  3. After you have attached a policy with invocation rights and authenticated with AD FS to finish the redirect process, choose PING.

If everything has been set up successfully, you should see an alert with information about the user agent.
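
Relating to step 2 above, the sketch below shows one way to attach such a scoped policy from a script instead of the console; the account ID, API ID, stage, and resource path are placeholders for your own values.

// Sketch: scope ADFS-Production down to invoking just the /users GET method.
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

var policy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: 'execute-api:Invoke',
        Resource: 'arn:aws:execute-api:us-east-1:ACCOUNT_ID:API_ID/Prod/GET/users'
    }]
};

iam.putRolePolicy({
    RoleName: 'ADFS-Production',
    PolicyName: 'InvokeUsersPing',
    PolicyDocument: JSON.stringify(policy)
}, function(err) {
    if (err) { console.error(err); }
    else { console.log('Scoped invoke policy attached'); }
});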

Final Thoughts

We hope these scenarios and sample code help you to not only begin to build comprehensive enterprise applications on AWS but also to enhance your understanding of different AuthN and AuthZ mechanisms. Consider some ways that you might be able to evolve this solution to meet the needs of your own customers and innovate in this space. For example:

  • Completing the CloudFront configuration and leveraging SSL termination for site identification. See if this can be incorporated into the Lambda processing pipeline.
  • Attaching a scope-down IAM policy if the business rules are matched. For example, the default role could be more permissive for a group, but if the user is a contractor (username with -C appended) they get extra restrictions applied when assumeRoleWithSaml is called in the ProcessSAML_awslabs_samldemo Lambda function.
  • Changing the time duration before credentials expire on a per-role basis. Perhaps if the SAMLResponse parsing determines the user is an Administrator, they get a longer duration.
  • Passing through additional user claims in SAMLResponse for further logical decisions or auditing by adding more claim rules in the ADFS console. This could also be a mechanism to synchronize some Active Directory schema attributes with AWS services.
  • Granting different sets of credentials if a user has accounts with multiple SAML providers. While this tutorial was made with ADFS, you could also leverage it with other solutions such as Shibboleth and modify the ProcessSAML_awslabs_samldemo Lambda function to be aware of the different IdP ARN values. Perhaps your solution grants different IAM roles for the same user depending on if they initiated a login from Shibboleth rather than ADFS?

The Lambda functions can be altered to take advantage of these options which you can read more about here. For more information about ADFS claim rule language manipulation, see The Role of the Claim Rule Language on Microsoft TechNet.

We would love to hear feedback from our customers on these designs and see different secure application designs that you’re implementing on the AWS platform.

Stitch – Python Remote Administration Tool AKA RAT

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/ni6lXu8TvAg/

Stitch is a cross-platform Python Remote Administration Tool, commonly known as a RAT. This framework allows you to build custom payloads for Windows, Mac OSX and Linux as well. You are able to select whether the payload binds to a specific IP and port, listens for a connection on a port, option to send an […]

The post Stitch – Python…

Read the full post at darknet.org.uk

OWASP VBScan – vBulletin Vulnerability Scanner

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/TTIz-sCvWbk/

OWASP VBScan, short for vBulletin Vulnerability Scanner, is an open-source project written in the Perl programming language to detect vBulletin CMS vulnerabilities and analyse them. VBScan currently has the following features: compatibility with Windows, Linux & OSX, an up-to-date exploit database, full path disclosure, firewall detect & bypass, version check…

Read the full post at darknet.org.uk

Scripting Languages for AWS Lambda: Running PHP, Ruby, and Go

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/

Dimitrij Zub
Dimitrij Zub, Solutions Architect
Raphael Sack
Raphael Sack, Technical Trainer

In our daily work with partners and customers, we see a lot of amazing skills, expertise, and experience across many fields and programming languages. From languages that have been around for a while to languages on the cutting edge, many teams have developed a deep understanding of the concepts of each language; they want to apply these languages with and within the innovations coming from AWS, such as AWS Lambda.

Lambda provides native support for a wide array of languages, such as Scala, Java, Node.js, Python, and C/C++ Linux native applications. In this post, we outline how you can use Lambda with different scripting languages.

For each language, you perform the following tasks:

  • Prepare: Launch an instance from an AMI and log in via SSH
  • Compile and package the language for Lambda
  • Install: Create the Lambda package and test the code

The preparation and installation steps are similar between languages, but we provide step-by-step guides and examples for compiling and packaging PHP, Go, and Ruby.

Common steps to prepare

Lambda can run arbitrary executables, but you must first prepare those binaries so that they can be executed within the Lambda environment.

The following steps are only an overview on how to get PHP, Go, or Ruby up and running on Lambda; however, using this approach, you can add more specific libraries, extend the compilation scope, and leverage JSON to interconnect your Lambda function to Amazon API Gateway and other services.

After your binaries have been compiled and your basic folder structure is set up, you won't need to redo those steps for new projects or variations of your code. Simply write the code to accept input from STDIN and write results to STDOUT, and the Node.js wrapper takes care of bridging the runtimes for you.

For the sake of simplicity, we demonstrate the preparation steps for PHP only, but these steps are also applicable for the other environments described later.

In the Amazon EC2 console, choose Launch Instance. When you choose an AMI, use one of the AMIs in the Lambda Execution Environment and Available Libraries list for the same region in which you will run the PHP code, so that the instance you launch provides a matching compiler environment. For more information, see Step 1: Launch an Instance.

Pick t2.large as the EC2 instance type to have two cores and 8 GB of memory for faster PHP compilation times.

languages_1.png

Choose Review and Launch to use the defaults for storage and add the instance to a default, SSH only, security group generated by the wizard.

Choose Launch to continue; in the launch dialog, you can select an existing key-pair value for your login or create a new one. In this case, create a new key pair called “php” and download it.

languages_2.png

After downloading the keys, navigate to the download folder and run the following command:

chmod 400 php.pem

This is required because of SSH security standards. You can now connect to the instance using the EC2 public DNS. Get the value by selecting the instance in the console and looking it up under Public DNS in the lower right part of the screen.

ssh -i php.pem ec2-user@[PUBLIC DNS]

You’re done! With this instance up and running, you have the right AMI in the right region to be able to continue with all the other steps.

Getting ready for PHP

After you have logged in to your running AMI, you can start compiling and packaging your environment for Lambda. With PHP, you compile the PHP 7 environment from the source and make it ready to be packaged for the Lambda environment.

Setting up PHP on the instance

The next step is to prepare the instance to compile PHP 7, configure the PHP 7 compiler to output in a defined directory, and finally compile PHP 7 to the Lambda AMI.

Update the package manager by running the following command:

sudo yum update -y

Install the minimum necessary libraries to be able to compile PHP 7:

sudo yum install gcc gcc-c++ libxml2-devel -y 

With the dependencies installed, you need to download the PHP 7 sources, which are available from PHP Downloads.

For this post, we were running the EC2 instance in Ireland, so we selected http://ie1.php.net/get/php-7.0.7.tar.bz2/from/this/mirror as our mirror. Run the following command to download the sources to the instance and choose your own mirror for the appropriate region.

cd ~

wget http://ie1.php.net/distributions/php-7.0.7.tar.bz2 .

Extract the files using the following command:

tar -jxvf php-7.0.7.tar.bz2

This creates the php-7.0.7 folder in your home directory. Next, create a dedicated folder for the php-7 binaries, then change into the extracted source directory and configure the build by running the following commands.

mkdir /home/ec2-user/php-7-bin

cd ~/php-7.0.7

./configure --prefix=/home/ec2-user/php-7-bin/

This makes sure the PHP compilation is nicely packaged into the php binaries folder you created in your home directory. Keep in mind that you only compile the baseline PHP here, to reduce the number of dependencies required for your Lambda function.

You can add more dependencies and more compiler options to your PHP binaries using the options available in ./configure. Run ./configure -h for more information about what can be packaged into your PHP distribution to be used with Lambda, but also keep in mind that this will increase the overall package size.

Finally, run the following command to start the compilation:

make install

languages_3.png

https://xkcd.com/303/

After the compilation is complete, you can quickly confirm that PHP is functional by running the following command:

cd ~/php-7-bin/bin/

./php -v

PHP 7.0.7 (cli) (built: Jun 16 2016 09:14:04) ( NTS )

Copyright (c) 1997-2016 The PHP Group

Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies

Time to code

Using your favorite editor, create an entry point PHP file, which in this case reads input from a Linux pipe and provides its output to stdout. For this example, take a simple JSON document and count the number of top-level attributes. Name the file helloLambda.php (the name referenced by the wrapper below).

<?php

$data = stream_get_contents(STDIN);

$json = json_decode($data, true);

$result = json_encode(array('result' => count($json)));

echo $result."\n";

?>

Creating the Lambda package

With PHP compiled and ready to go, all you need to do now is to create your Lambda package with the Node.js wrapper as an entry point.

First, tar the php-7-bin folder where the binaries reside using the following command:

cd ~

tar -zcvf php-7-bin.tar.gz php-7-bin/

Download it to your local project folder where you can continue development, by logging out and running the following command from your local machine (Linux or OSX), or using tools like WinSCP on Windows:

scp -i php.pem ec2-user@[EC2_HOST]:~/php-7-bin.tar.gz .

With the package downloaded, create your Lambda project in a new folder, which you can call php-lambda for this specific example. Unpack all files into this folder, which should result in the following structure:

php-lambda 

+-- php-7-bin

The next step is to create a Node.js wrapper file. The file takes the input of the Lambda invocation, invokes the PHP binary with helloLambda.php as a parameter, and provides the input via a Linux pipe to PHP for processing. Call the file php.js and copy the following content:

process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

const spawn = require('child_process').spawn;

exports.handler = function(event, context) {

    //var php = spawn('php',['helloLambda.php']); //local debug only
    var php = spawn('php-7-bin/bin/php',['helloLambda.php']);
    var output = "";

    //send the input event json as string via STDIN to php process
    php.stdin.write(JSON.stringify(event));

    //close the php stream to unblock php process
    php.stdin.end();

    //dynamically collect php output
    php.stdout.on('data', function(data) {
          output+=data;
    });

    //react to potential errors
    php.stderr.on('data', function(data) {
            console.log("STDERR: "+data);
    });

    //finalize when php process is done.
    php.on('close', function(code) {
            context.succeed(JSON.parse(output));
    });
}

//local debug only
//exports.handler(JSON.parse('{"hello":"world"}'));
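
If you want to sanity-check the wrapper before uploading it, you can drive exports.handler locally with a stub context object, in the spirit of the commented-out local debug lines above. The file name test-local.js and the sample event are made up for illustration; run it from the php-lambda folder so the relative php-7-bin path resolves, and uncomment the local-debug spawn line in php.js if the Linux-built PHP binary will not run on your workstation.

// test-local.js – run with: node test-local.js
var wrapper = require('./php.js');

var fakeContext = {
    succeed: function(result) { console.log('Lambda result:', result); },
    fail: function(err) { console.error('Lambda error:', err); }
};

wrapper.handler({ hello: 'world', from: 'local test' }, fakeContext);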

With all the files finalized, the folder structure should look like the following:

php-lambda

+-- php-7-bin

-- helloLambda.php

-- php.js

The final step before the deployment is to zip the package into an archive that can be uploaded to Lambda; call the package php.zip. Feel free to remove unnecessary files, such as phpdebug, from the php-7-bin/bin folder to reduce the size of the archive.

Go, Lambda, go!

The following steps are an overview of how to compile and execute Go applications on Lambda. As with the PHP section, you are free to enhance and build upon the Lambda function with other AWS services and your application infrastructure. Though this example allows you to use your own Linux machine with a fitting distribution to work locally, it might still be useful to understand the Lambda AMIs for test and automation.

To further enhance your environment, you may want to create an automatic compilation pipeline and even deployment of the Go application to Lambda. Consider using versioning and aliases, as they help in managing new versions and dev/test/production code.

Setting up Go on the instance

The next step is to set up the Go binaries on the instance, so that you can compile the upcoming application.

First, make sure your packages are up to date (always):

sudo yum update -y

Next, visit the official Go site, check for the latest version, and download it to EC2 or to your local machine if using Linux:

cd ~

wget https://storage.googleapis.com/golang/go1.6.2.linux-amd64.tar.gz .

Extract the files using the following command:

tar -xvf go1.6.2.linux-amd64.tar.gz

This creates a folder named “go” in your home directory.

Time to code

For this example, you create a very simple application that counts the number of objects in the provided JSON element. Using your favorite editor, create a file named "HelloLambda.go" with the following code, directly on the machine to which you downloaded the Go package; this may be the EC2 instance you started at the beginning, or your local environment, in which case you are not stuck with vi.

package main

import (
    "fmt"
    "os"
    "encoding/json"
)

func main() {
    var dat map[string]interface{}
    
    fmt.Printf( "Welcome to Lambda Go, now Go Go Go!n" )
    if len( os.Args ) < 2 {
        fmt.Println( "Missing args" )
        return
    }

    err := json.Unmarshal([]byte(os.Args[1]), &dat)

    if err == nil {
        fmt.Println( len( dat ) )
    } else {
        fmt.Println(err)
    }
}

Before compiling, configure an environment variable to tell the Go compiler where all the files are located:

export GOROOT=~/go/

You are now set to compile a nifty new application!

~/go/bin/go build ./HelloLambda.go

Start your application for the very first time:

./HelloLambda '{ "we" : "love", "using" : "Lambda" }'

You should see output similar to:

Welcome to Lambda Go, now Go Go Go!

2

Creating the Lambda package

You have already set up your machine to compile Go applications, written the code, and compiled it successfully; all that is left is to package it up and deploy it to Lambda.

If you used an EC2 instance, copy the binary from the compilation instance and prepare it for packaging. To copy out the binary, use the following command from your local machine (Linux or OSX), or using tools such as WinSCP on Windows.

scp -i GoLambdaGo.pem ec2-user@[EC2_HOST]:~/goLambdaGo .

With the binary ready, create the Lambda project in a new folder, which you can call go-lambda.

The next step is to create a Node.js wrapper file to invoke the Go application; call it go.js. The file takes the inputs of the Lambda invocations and invokes the Go binary.

Here’s the content for another example of a Node.js wrapper:

const exec = require('child_process').exec;
exports.handler = function(event, context) {
    const child = exec('./goLambdaGo \'' + JSON.stringify(event) + '\'', (error) => {
        // Resolve with result of process
        context.done(error, 'Process complete!');
    });

    // Log process stdout and stderr
    child.stdout.on('data', console.log);
    child.stderr.on('data', console.error);

}

With all the files finalized and ready, your folder structure should look like the following:

go-lambda

-- go.js

-- goLambdaGo

The final step before deployment is to zip the package into an archive that can be uploaded to Lambda; call the package go.zip.

On a Linux or OSX machine, run the following command:

zip -r go.zip ./goLambdaGo ./go.js

A gem in Lambda

For convenience, you can use the same previously used instance, but this time to compile Ruby for use with Lambda. You can also create a new instance using the same instructions.

Setting up Ruby on the instance

The next step is to set up the Ruby binaries and dependencies on the EC2 instance or local Linux environment, so that you can package the upcoming application.

First, make sure your packages are up to date (always):

sudo yum update -y

For this post, you use Traveling Ruby, a project that helps in creating “portable”, self-contained Ruby packages. You can download the latest version from Traveling Ruby linux-x86:

cd ~

wget http://d6r77u77i8pq3.cloudfront.net/releases/traveling-ruby-20150715-2.2.2-linux-x86_64.tar.gz .

Extract the files to a new folder using the following command:

mkdir LambdaRuby

tar -xvf traveling-ruby-20150715-2.2.2-linux-x86_64.tar.gz -C LambdaRuby

This creates the “LambdaRuby” folder in your home directory.

Time to code

For this demonstration, you create a very simple application that counts the number of objects in a provided JSON element. Using your favorite editor, create a file named "lambdaRuby.rb" with the following code:

#!./bin/ruby

require 'json'

# You can check your Ruby version from within the script with: puts(RUBY_VERSION)

if ARGV.length > 0
    puts JSON.parse( ARGV[0] ).length
else
    puts "0"
end

Now, make the script executable and start your application for the very first time, using the following commands:

chmod +x lambdaRuby.rb

./lambdaRuby.rb '{ "we" : "love", "using" : "Lambda" }'

You should see the amount of fields in the JSON as output (2).

Creating the Lambda package

You have downloaded the Ruby gem, written the code, and tested it successfully… all that is left is to package it up and deploy it to Lambda. Because Ruby is an interpreter-based language, you create a Node.js wrapper and package it with the Ruby script and all the Ruby files.

The next step is to create a Node.js wrapper file to invoke your Ruby application; call it ruby.js. The file takes the input of the Lambda invocation and invokes your Ruby application. Here's the content for a sample Node.js wrapper:

const exec = require('child_process').exec;

exports.handler = function(event, context) {
    const child = exec('./lambdaRuby.rb \'' + JSON.stringify(event) + '\'', (result) => {
        // Resolve with result of process
        context.done(result);
    });

    // Log process stdout and stderr
    child.stdout.on('data', console.log);
    child.stderr.on('data', console.error);
}

With all the files finalized and ready, your folder structure should look like this:

LambdaRuby

+-- bin

+-- bin.real

+-- info

-- lambdaRuby.rb

+-- lib

-- ruby.js

The final step before the deployment is to zip the package into an archive to be uploaded to Lambda; call the package ruby.zip.

On a Linux or OSX machine, run the following command:

zip -r ruby.zip ./

Copy your zip file from the instance so you can upload it. To copy out the archive, use the following command from your local machine (Linux or OSX), or using tools such as WinSCP on Windows.

scp -i RubyLambda.pem ec2-user@[EC2_HOST]:~/LambdaRuby/ruby.zip .

Common steps to install

With the package done, you are ready to deploy the PHP, Go, or Ruby runtime into Lambda.

Log in to the AWS Management Console and navigate to Lambda; make sure that the region matches the one which you selected the AMI for in the preparation step.

For simplicity, I’ve used PHP as an example for the deployment; however, the steps below are the same for Go and Ruby.

Creating the Lambda function

Choose Create a Lambda function, Skip. Select the following fields and upload your previously created archive.

languages_4.png

The most important areas are:

  • Name: The name to give your Lambda function
  • Runtime: Node.js
  • Lambda function code: Select the zip file created in the PHP, Go, or Ruby section, such as php.zip, go.zip, or ruby.zip
  • Handler: php.handler (as in the code, the entry function is called handler and the file is php.js; if you have used the file names from the Go and Ruby sections, use the format [js file name without .js].handler, i.e., go.handler)
  • Role: Choose Basic Role if you have not yet created one, and create a role for your Lambda function execution

Choose Next, Create function to continue to testing.

Testing the Lambda function

To test the Lambda function, choose Test in the upper right corner, which displays a sample event with three top-level attributes.

languages_5.png

Feel free to add more, or simply choose Save and test to see that your function has executed properly.

languages_6.png
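
Beyond the console test, you can also invoke the deployed function from a short script with the AWS SDK for JavaScript; the function name, region, and sample payload below are placeholders for whatever you configured above.

// invoke-test.js – run with: node invoke-test.js
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.invoke({
    FunctionName: 'php-lambda-demo',   // the name you gave your function
    Payload: JSON.stringify({ key1: 'value1', key2: 'value2', key3: 'value3' })
}, function(err, data) {
    if (err) { return console.error(err); }
    console.log('Response payload:', data.Payload.toString());
});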

Conclusion

In this post, we outlined three different ways to create scripting language runtimes for Lambda: compiling PHP against the Lambda environment so you can run scripts, compiling the application itself as with Go, and using prepackaged binaries as much as possible with Ruby. We hope you enjoyed the ideas, found the hidden gems, and are now ready to create some pretty hefty projects in your favorite language, enjoying serverless, Amazon Kinesis, and API Gateway along the way.

If you have questions or suggestions, please comment below.

Androguard – Reverse Engineering & Malware Analysis For Android

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/6-3ScpU6zF8/

Androguard is a toolkit built in Python which provides reverse engineering and malware analysis for Android. It's built to examine Dex/Odex (Dalvik virtual machine) files (.dex) (disassembly, decompilation), APK (Android application) files (.apk), Android's binary XML (.xml), and Android Resources (.arsc). Androguard is available for Linux/OSX/Windows…

Read the full post at darknet.org.uk

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/building-end-to-end-continuous-delivery-and-deployment-pipelines-in-aws-and-teamcity/

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity's AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity's continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!

1. Continuous integration server setup using TeamCity

Choose the Launch Stack button to launch an AWS CloudFormation stack that sets up a TeamCity server. If you're not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials. This stack provides an automated way to set up a TeamCity server based on the instructions here. You can download the template used for this setup from here.

The CloudFormation template does the following:

  1. Installs and configures the TeamCity server and its dependencies in Linux.
  2. Installs the AWS CodePipeline plugin for TeamCity.
  3. Installs a sample application with build configurations.
  4. Installs PHP meta-runners required to build the sample application.
  5. Redirects TeamCity port 8111 to 80.

Choose the AWS region where the TeamCity server will be hosted. For this demo, choose US East (N. Virginia).

Select a region

On the Select Template page, choose Next.

On the Specify Details page, do the following:

  1. In Stack name, enter a name for the stack. The name must be unique in the region in which you are creating the stack.
  2. In InstanceType, choose the instance type that best fits your requirements. The default value is t2.medium.

Note: The default instance type exceeds what’s included in the AWS Free Tier. If you use t2.medium, there will be charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

  3. In KeyName, choose the name of your Amazon EC2 key pair.
  4. In SSHLocation, enter the IP address range that can be used to connect through SSH to the EC2 instance. SSH and HTTP access is limited to this IP address range.

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.

Specify Details

Choose Next.

Although it’s optional, on the Options page, type TeamCityServer for the instance name. This is the name used in the CloudFormation template for the stack. It’s a best practice to name your instance, because it makes it easier to identify or modify resources later on.

Choose Next.

On the Review page, choose Create. It will take several minutes for AWS CloudFormation to create the resources for you.

Review

When the stack has been created, you will see a CREATE_COMPLETE message on the Overview tab in the Status column.
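
If you prefer the command line, you can also check the stack status with the AWS CLI; an optional sketch (replace your-stack-name with the name you entered on the Specify Details page):

$ aws cloudformation describe-stacks --stack-name your-stack-name --query 'Stacks[0].StackStatus' --output text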

Events

You have now successfully created a TeamCity server. To access the server, on the EC2 Instance page, choose Public IP for the TeamCityServer instance.

Public DNS

On the TeamCity First Start page, choose Proceed.

TeamCity First Start

Although an internal database based on the HSQLDB database engine can be used for evaluation, TeamCity strongly recommends that you use an external database as a back-end TeamCity database in a production environment. An external database provides better performance and reliability. For more information, see the TeamCity documentation.

On the Database connection setup page, choose Proceed.

Database connection setup

The TeamCity server will start, which can take several minutes.

TeamCity is starting

Review and Accept the TeamCity License Agreement, and then choose Continue.

Next, create an Administrator account. Type a user name and password, and then choose Create Account.

You can navigate to the demo project from Projects in the top-left corner.

Projects

Note: You can create a project from a repository URL (the option used in this demo), or you can connect to your managed Git repositories, such as GitHub or BitBucket. The demo app used in this example can be found here.

We have already created a sample project configuration. Under Build, choose Edit Settings, and then review the settings.

Demo App

Choose Build Step: PHP – PHPUnit.

Build Step

The fields on the Build Step page are already configured.

Build Step

Choose Run to start the build.

Run Test

To review the tests that are run as part of the build, choose Tests.

Build

Build

You can view any build errors by choosing Build log from the same drop-down list.

Now that we have a successful build, we will use AWS CodeDeploy to set up a continuous deployment pipeline.

2. Continuous deployment using AWS CodeDeploy

Choose the Launch Stack button to launch an AWS CloudFormation stack that will use AWS CodeDeploy to set up a sample deployment pipeline. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

You can download the master template used for this setup from here. The template nests two CloudFormation templates to execute all dependent stacks cohesively.

  1. Template 1 creates a fleet of up to three EC2 instances (with a base operating system of Windows or Linux), associates an instance profile, and installs the AWS CodeDeploy agent. The CloudFormation template can be downloaded from here.
  2. Template 2 creates an AWS CodeDeploy deployment group and then installs a sample application. The CloudFormation template can be downloaded from here.

Choose the same AWS region you used when you created the TeamCity server (US East (N. Virginia)).

Note: The templates contain Amazon Machine Image (AMI) mappings for us-east-1, us-west-2, eu-west-1, and ap-southeast-2 only.

On the Select Template page, choose Next.


On the Specify Details page, in Stack name, type a name for the stack. In the Parameters section, do the following:

  1. In AppName, you can use the default, or you can type a name of your choice. The name must be between 2 and 15 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.
  2. In DeploymentGroupName, you can use the default, or you can type a name of your choice. The name must be between 2 and 25 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.


  3. In InstanceType, choose the instance type that best fits the requirements of your application.
  4. In InstanceCount, type the number of EC2 instances (up to three) that will be part of the deployment group.
  5. For Operating System, choose Linux or Windows.
  6. Leave TagKey and TagValue at their defaults. AWS CodeDeploy will use this tag key and value to locate the instances during deployments. For information about Amazon EC2 instance tags, see Working with Tags Using the Console.
  7. In S3Bucket and S3Key, type the bucket name and S3 key where the application is located. The default points to a sample application that will be deployed to instances in the deployment group. Based on what you selected in the OperatingSystem field, use the following values.
    Linux:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Linux.zip
    Windows:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Windows.zip
  8. In KeyName, choose the name of your Amazon EC2 key pair.
  9. In SSHLocation, enter the IP address range that can be used to connect through SSH/RDP to the EC2 instance.


Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.

Follow the prompts, and then choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. Review the other settings, and then choose Create.


It will take several minutes for CloudFormation to create all of the resources on your behalf. The nested stacks will be launched sequentially. You can view progress messages on the Events tab in the AWS CloudFormation console.
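
The same progress can be followed from the AWS CLI if you prefer; an optional sketch (your-stack-name is a placeholder for the stack name you chose):

$ aws cloudformation describe-stack-events --stack-name your-stack-name --query 'StackEvents[].[Timestamp,ResourceStatus,LogicalResourceId]' --output table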


You can see the newly created application and deployment groups in the AWS CodeDeploy console.


To verify that your application was deployed successfully, navigate to the DNS address of one of the instances.
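
You can also confirm the deployment from the AWS CLI; a hedged sketch, substituting the application and deployment group names shown in the AWS CodeDeploy console (list-deployments returns the deployment IDs you can pass to get-deployment):

$ aws deploy list-deployments --application-name your-application-name --deployment-group-name your-deployment-group-name
$ aws deploy get-deployment --deployment-id your-deployment-id --query 'deploymentInfo.status'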


Now that we have successfully created a deployment pipeline, let’s integrate it with AWS CodePipeline.

3. Building a delivery pipeline using AWS CodePipeline

We will now create a delivery pipeline in AWS CodePipeline with the TeamCity AWS CodePipeline plugin.

  1. Build a new pipeline with Source and Deploy stages using AWS CodePipeline.
  2. Create a custom action for the TeamCity Build stage.
  3. Create an AWS CodePipeline action trigger in TeamCity.
  4. Create a Build stage in the delivery pipeline for TeamCity.
  5. Publish the build artifact for deployment.

Step 1: Build a new pipeline with Source and Deploy stages using AWS CodePipeline.

In this step, we will create an Amazon S3 bucket to use as the artifact store for this pipeline.

  1. Install and configure the AWS CLI.
  2. Create an Amazon S3 bucket that will host the build artifact. Replace account-number with your AWS account number in the following steps.
    $ aws s3 mb s3://demo-app-build-account-number
  3. Enable bucket versioning
    $ aws s3api put-bucket-versioning --bucket demo-app-build-account-number --versioning-configuration Status=Enabled
  4. Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.
  • OSX/Linux:
    $ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip | aws s3 cp - s3://demo-app-build-account-number/Sample_Linux_App.zip
  • Windows:
    $ wget -q https://s3.amazonaws.com/teamcity-demo-app/Sample_Windows_App.zip
    $ aws s3 cp ./Sample_Windows_App.zip s3://demo-app-build-account-number

Note: You can use AWS CloudFormation to perform these steps in an automated way. When you choose the Launch Stack button, this template will be used. Use the following commands to extract the Amazon S3 bucket name, enable versioning on the bucket, and copy over the sample artifact.

$ export BUCKET_NAME="$(aws cloudformation describe-stacks --stack-name "S3BucketStack" --output text --query 'Stacks[0].Outputs[?OutputKey==`S3BucketName`].OutputValue')"
$ aws s3api put-bucket-versioning --bucket $BUCKET_NAME --versioning-configuration Status=Enabled && wget https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip && aws s3 cp ./Sample_Linux_App.zip s3://$BUCKET_NAME
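
Either way, you can verify that the sample artifact landed in the bucket before wiring it into the pipeline; for example (substitute the bucket name you used):

$ aws s3 ls s3://demo-app-build-account-number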

You can create a pipeline by using a CloudFormation stack or the AWS CodePipeline console.

Option 1: Use AWS CloudFormation to create a pipeline

We’re going to create a two-stage pipeline that uses a versioned Amazon S3 bucket and AWS CodeDeploy to release a sample application. (You can use an AWS CodeCommit repository or a GitHub repository as the source provider instead of Amazon S3.)

Choose the Launch Stack button to launch an AWS CloudFormation stack that sets up a new delivery pipeline using the application and deployment group created in an earlier step. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

Choose the US East (N. Virginia) region, and then choose Next.

Leave the default options, and then choose Next.


On the Options page, choose Next.


Select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. This will create the delivery pipeline in AWS CodePipeline.

Option 2: Use the AWS CodePipeline console to create a pipeline

On the Create pipeline page, in Pipeline name, type a name for your pipeline, and then choose Next step.

Depending on where your source code is located, you can choose Amazon S3, AWS CodeCommit, or GitHub as your Source provider. The pipeline will be triggered automatically upon every check-in to your GitHub or AWS CodeCommit repository or when an artifact is published into the S3 bucket. In this example, we will be accessing the product binaries from an Amazon S3 bucket.

For Amazon S3 location, enter s3://demo-app-build-account-number/Sample_Linux_App.zip (or Sample_Windows_App.zip), and then choose Next step.

Note: AWS CodePipeline requires a versioned S3 bucket for source artifacts. Enable versioning for the S3 bucket where the source artifacts will be located.
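
To confirm that versioning is enabled on the artifact bucket, you can run, for example:

$ aws s3api get-bucket-versioning --bucket demo-app-build-account-number

If the output shows "Status": "Enabled", the bucket is ready to serve as a CodePipeline source.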

On the Build page, choose No Build. We will update the build provider information later on.

For Deployment provider, choose CodeDeploy. For Application name and Deployment group, choose the application and deployment group we created in the deployment pipeline step, and then choose Next step.

An IAM role will provide the permissions required for AWS CodePipeline to perform the build actions and service calls.  If you already have a role you want to use with the pipeline, choose it on the AWS Service Role page. Otherwise, type a name for your role, and then choose Create role.  Review the predefined permissions, and then choose Allow. Then choose Next step.

 

For information about AWS CodePipeline access permissions, see the AWS CodePipeline Access Permissions Reference.


Review your pipeline, and then choose Create pipeline.


This will trigger AWS CodePipeline to execute the Source and Beta steps. The source artifact will be deployed to the AWS CodeDeploy deployment groups.
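
You can watch these stages from the AWS CLI as well; a sketch, assuming you named the pipeline delivery-pipeline as in this walkthrough:

$ aws codepipeline get-pipeline-state --name delivery-pipeline --query 'stageStates[].[stageName,latestExecution.status]' --output table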


Now you can access the same DNS address of the AWS CodeDeploy instance to see the updated deployment. You will see the background color has changed to green and the page text has been updated.


We have now successfully created a delivery pipeline with two stages and integrated the deployment with AWS CodeDeploy. Now let’s integrate the Build stage with TeamCity.

Step 2: Create a custom action for TeamCity Build stage

AWS CodePipeline includes a number of actions that help you configure build, test, and deployment resources for your automated release process. TeamCity is not included in the default actions, so we will create a custom action and then include it in our delivery pipeline. TeamCity’s CodePipeline plugin will also create a job worker that will poll AWS CodePipeline for job requests for this custom action, execute the job, and return the status result to AWS CodePipeline.

TeamCity’s custom action type (Build/Test categories) can be integrated with AWS CodePipeline. It’s similar to Jenkins and Solano CI custom actions.

The TeamCity AWS CodePipeline plugin is already installed on the TeamCity server we set up earlier. To learn more about installing TeamCity plugins, see the plugin installation documentation. We will now create a custom action to integrate TeamCity with AWS CodePipeline using a custom-action JSON file.

Download this file locally: https://github.com/JetBrains/teamcity-aws-codepipeline-plugin/blob/master/custom-action.json

Open a terminal session (Linux, OS X, Unix) or command prompt (Windows) on a computer where you have installed the AWS CLI. For information about setting up the AWS CLI, see here.

Use the AWS CLI to run the aws codepipeline create-custom-action-type command, specifying the JSON file you just downloaded (saved locally, for example, as teamcity-custom-action.json).

For example, to create a build custom action:

$ aws codepipeline create-custom-action-type --cli-input-json file://teamcity-custom-action.json

This should result in an output similar to this:

{
    "actionType": {
        "inputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "actionConfigurationProperties": [
            {
                "description": "The expected URL format is http[s]://host[:port]",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "TeamCityServerURL"
            },
            {
                "description": "Corresponding TeamCity build configuration external ID",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "BuildConfigurationID"
            },
            {
                "description": "Must be unique, match the corresponding field in the TeamCity build trigger settings, satisfy regular expression pattern: [a-zA-Z0-9_-]+] and have length <= 20",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": true,
                "name": "ActionID"
            }
        ],
        "outputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "id": {
            "category": "Build",
            "owner": "Custom",
            "version": "1",
            "provider": "TeamCity"
        },
        "settings": {
            "entityUrlTemplate": "{Config:TeamCityServerURL}/viewType.html?buildTypeId={Config:BuildConfigurationID}",
            "executionUrlTemplate": "{Config:TeamCityServerURL}/viewLog.html?buildId={ExternalExecutionId}&tab=buildResultsDiv"
        }
    }
}
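
To double-check that the custom action type is now registered in your account and region, you can list the custom action types; for example:

$ aws codepipeline list-action-types --action-owner-filter Custom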

Before you add the custom action to your delivery pipeline, make the following changes to the TeamCity build server. You can access the server by opening the Public IP of the TeamCityServer instance from the EC2 Instance page.


In TeamCity, choose Projects. Under Build Configuration Settings, choose Version Control Settings. You need to remove the version control trigger here so that the TeamCity build server will be triggered during the Source stage in AWS CodePipeline. Choose Detach.


Step 3: Create a new AWS CodePipeline action trigger in TeamCity

Now add a new AWS CodePipeline trigger in your build configuration. Choose Triggers, and then choose Add new trigger.


From the drop-down menu, choose AWS CodePipeline Action.


 

In the trigger settings, choose the AWS region in which you created your delivery pipeline. Enter your AWS access key credentials, and for Action ID, type a unique name. You will need this ID when you add a TeamCity Build stage to the pipeline.


Step 4: Create a new Build stage in the delivery pipeline for TeamCity

Add a stage to the pipeline and name it Build.


From the drop-down menu, choose Build. In Action name, type a name for the action. In Build provider, choose TeamCity, and then choose Add action.

Select TeamCity, click Add action


For TeamCity Action Configuration, use the following:

TeamCityServerURL:  http://<Public DNS address of the TeamCity build server>[:port]


BuildConfigurationID: In your TeamCity project, choose Build. You’ll find this ID (AwsDemoPhpSimpleApp_Build) under Build Configuration Settings.


ActionID: In your TeamCity project, choose Build. You’ll find this ID under Build Configuration Settings. Choose Triggers, and then choose AWS CodePipeline Action.


Next, choose input and output artifacts for the Build stage, and then choose Add action.


We will now publish a new artifact to the Amazon S3 artifact bucket we created earlier, so we can see the deployment of a new app and its progress through the delivery pipeline. The demo app used in this artifact can be found here for Linux or here for Windows.

Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.

OSX/Linux:

$ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/PhpArtifact.zip | aws s3 cp - s3://demo-app-build-account-number/PhpArtifact.zip

Windows:

$ wget -q https://s3.amazonaws.com/teamcity-demo-app/WindowsArtifact.zip
$ aws s3 cp ./WindowsArtifact.zip s3://demo-app-build-account-number

From the AWS CodePipeline dashboard, under delivery-pipeline, choose Edit.

Edit Source stage by choosing the edit icon on the right.

Amazon S3 location:

Linux: s3://demo-app-build-account-number/PhpArtifact.zip

Windows: s3://demo-app-build-account-number/WindowsArtifact.zip

Under Output artifacts, make sure My App is displayed for Output artifact #1. This will be the input artifact for the Build stage.

The output artifact of the Build stage should be the input artifact of the Beta deployment stage (in this case, MyAppBuild).

Choose Update, and then choose Save pipeline changes. On the next page, choose Save and continue.

Step 5: Publish the build artifact for deployment

Step (a)

In TeamCity, on the Build Steps page, for Runner type, choose Command Line, and then add the following custom script to copy the source artifact to the TeamCity build checkout directory.

Note: This step is required only if your AWS CodePipeline source provider is either AWS CodeCommit or Amazon S3. If your source provider is GitHub, this step is redundant, because the artifact is copied over automatically by the TeamCity AWS CodePipeline plugin.

In Step name, enter a name for the Command Line runner to easily distinguish the context of the step.

Syntax:

$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %teamcity.build.checkoutDir%
$ unzip *.zip -d %teamcity.build.checkoutDir%
$ rm -rf %teamcity.build.checkoutDir%/*.zip

For Custom script, use the following commands:

cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %teamcity.build.checkoutDir%
unzip *.zip -d %teamcity.build.checkoutDir%
rm -rf %teamcity.build.checkoutDir%/*.zip


Step (b):

For Runner type, choose Command Line, and then add the following custom script to copy the build artifact to the output folder.

For Step name, enter a name for the Command Line runner.

Syntax:

$ mkdir -p %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/
$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/

For Custom script, use the following commands:

$ mkdir -p %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/
$ cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/

In Build Steps, choose Reorder build steps to ensure that the step that copies the source artifact is executed before the PHP – PHPUnit step.

Drag and drop Copy Source Artifact To Build Checkout Directory to make it the first build step, and then choose Apply.

Navigate to the AWS CodePipeline console. Choose the delivery pipeline, and then choose Release change. When prompted, choose Release.

Choose Release on the next prompt.


The most recent change will run through the pipeline again. It might take a few moments before the status of the run is displayed in the pipeline view.
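
Release change also has a CLI equivalent, which is handy for scripting releases; a sketch, again assuming the pipeline is named delivery-pipeline:

$ aws codepipeline start-pipeline-execution --name delivery-pipeline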

Here is what you’d see after AWS CodePipeline runs through all of the stages in the pipeline:

Let’s access one of the instances to see the new application deployment on the EC2 Instance page.

If your base operating system is Windows, accessing the public DNS address of one of the AWS CodeDeploy instances will result in the following page.

Windows: http://public-dns/

If your base operating system is Linux, when we access the public DNS address of one of the AWS CodeDeploy instances, we will see the following test page, which is the sample application.

Linux: http://public-dns/www/index.php

Congratulations! You’ve created an end-to-end deployment and delivery pipeline, from source code, to build, to deployment, in a fully automated way.

Summary

In this post, you learned how to build an end-to-end delivery and deployment pipeline on AWS. Specifically, you learned how to build an end-to-end, fully automated, continuous integration, continuous deployment, and delivery pipeline for your application, at scale, using AWS deployment and management services. You also learned how AWS CodePipeline can be easily extended through the use of custom triggers to integrate other services like TeamCity.

If you have questions or suggestions, please leave a comment below.

Yahoo Daily Fantasy: Everyone’s Invited—and We Mean “Everyone”

Post Syndicated from davglass original https://yahooeng.tumblr.com/post/129855575131


Photo of a Yahoo accessibility specialist assisting a colleague with keyboard navigation. The Fantasy Sport logo is superimposed.
When we’re building products at Yahoo we get really excited about our work. No surprise. We envision that millions of people are going to love our products and be absolutely delighted when using them.

With our new Yahoo Sports Daily Fantasy game, we wanted to include everyone.

We support all major modern browsers on desktop and mobile as well as native apps. However, that, in and of itself, won’t ensure that the billion individuals around the world who use assistive technology will be able to manage and play our fantasy games. One billion. That’s a lot of everyone.

Daily Fantasy baked in accessibility. Baked in. Important point. In order to ensure that everyone is able to compete in our games at the same level, accessibility can’t be an add-on.

Check out our pages. Title and ARIA attributes. Structured headers. Brilliant labels. TabIndex and other attributes that are convenience features for many of us and a necessity for a great experience for others—especially our assistive technology users. There are a lot of them and if we work to make our pages and apps accessible, well, we figure, there can be a lot more of them using Daily Fantasy.

Think about it: whether you’re a sighted user and just need to hover over an icon to get the full description of what it indicates—or a totally blind user who would otherwise miss that valuable content—it makes sense to work on making our game as enjoyable and as easy to use as possible for everyone.

So, the technical bits. What specific things did we do to ensure good accessibility on Daily Fantasy?

A properly accessible site starts on a foundation of good, semantic markup. We work to ensure that content is presented in the markup in the order that makes the most sense, then worry about how to style it to look as we desire. The markup we choose is also important: while <div> and <span> are handy wrappers, we try to make sure the context is appropriate. Should this player info be a <dl>? Should this alert be a <p>?

One of the biggest impacts to screen readers is the appropriate use of header tags and well-written labels. With these a user can quickly navigate to the appropriate part of the page based on the headers presented—allowing them to skip some of the navigation stuff that sighted users take for granted—and know exactly what they can do when, for example, they need to subtract or add a player to their roster. When content changes, we make use of ARIA attributes. With a single-page web app (that does not do a page refresh as you navigate) we make use of ARIA’s role=“alert” to give a cue to users what change has occurred. Similarly, we’ve tried to ensure some components, such as our tab selectors and sliders, are compatible and present information that is as helpful as possible. With our scrolling table headers, we had to use ARIA to “ignore” them, as it’d be redundant for screen readers as the natural <th> elements were intentionally left in place but visibly hidden.

Although we have done some testing with OSX and VoiceOver, our primary testing platform is NVDA on Windows using Chrome. NVDA’s support has been good – and, it’s free and open source. Even if you’re on OSX, you can install a free Windows VM for testing thanks to a program Microsoft has set up (thank you!). These free tools make it so anyone is able to ensure a great experience for all users:

https://dev.modern.ie/tools/vms/mac/
https://www.virtualbox.org/wiki/Downloads
http://www.nvaccess.org/download/
http://www.google.com/chrome/

Accessibility should not be considered a competitive advantage. It’s something everyone should strive for and something we should all be supporting. If you’re interested in participating in the conversation, give us a tweet, reblog, join in the forum conversations or drop us a line! We share your love of Daily Fantasy games and want to make sure everyone’s invited.

If you have a suggestion on what could improve our product, please let us know! For Daily Fantasy we personally lurk in some of the more popular forums and have gotten some really great feedback from their users. It’s not uncommon to read a comment and have a fix in to address it within hours.

Did I mention that we are excited about our work and delighting users—everyone?

– Gary, Darren and Brian

Why Greet Apple’s Swift 2.0 With Open Arms?

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2015/06/15/apple-is-not-our-friend.html

Apple announced last week that its Swift programming language — a
currently fully proprietary software successor to Objective C — will
probably be partially released under an OSI-approved license eventually.
Apple explicitly stated though that such released software will not be
copylefted. (Apple’s pathological hatred of copyleft is reasonably well
documented.) Apple’s announcement remained completely silent on patents,
and we should expect the chosen non-copyleft license
will not contain a patent grant.
(I’ve explained at
great length in the past why software patents are a particularly dangerous
threat to programming language infrastructure
.)

Apple’s dogged pursuit for non-copyleft replacements for copylefted
software is far from new. For example, Apple has worked to create
replacements for Samba so they need not ship Samba in OSX. But, their
anti-copyleft witch hunt goes back much further. It began
when Richard
Stallman himself famously led the world’s first GPL enforcement effort
against NeXT
, and Objective-C was liberated. For a time, NeXT and
Apple worked upstream with GCC to make Objective-C better for the
community. But, that whole time, Apple was carefully plotting its escape
from the copyleft world. Fortuitously, Apple eventually discovered a
technically brilliant (but sadly non-copylefted) research programming
language and compiler system called LLVM. Since then, Apple has sunk
millions of dollars into making LLVM better. On the surface, that seems
like a win for software freedom, until you look at the bigger picture:
their goal is to end copyleft compilers. Their goal is to pick and choose
when and how programming language software is liberated. Swift is not a
shining example of Apple joining us in software freedom; rather, it’s a
recent example of Apple’s long-term strategy to manipulate open source
— giving our community occasional software freedom on Apple’s own
terms. Apple gives us no bread but says let them eat cake
instead.

Apple’s got PR talent. They understand that merely announcing the
possibility of liberating proprietary software gets press. They know that
few people will follow through and determine how it went. Meanwhile, the
standing story becomes: Wait, didn’t Apple open source Swift
anyway?. Already, that false soundbite’s grip strengthens, even though
the answer remains a resounding No!. However, I suspect that
Apple will probably meet most
of their
public pledges
. We’ll likely see pieces of Swift 2.0 thrown over the
wall. But the best stuff will be kept proprietary. That’s already happening
with LLVM, anyway; Apple already ships a no-source-available fork of
LLVM.

Thus, Apple’s announcement incident hasn’t happened in a void. Apple
didn’t just discover open source after years of neutrality on the topic.
Apple’s move is calculated, which
led various
industry pundits like O’Grady and Weinberg to ask hard questions (some of
which are similar to mine)
. Yet, Apple’s hype is so good, that
it did
convince one trade association leader
.

To me, Apple’s not-yet-executed move to liberate some of the Swift 2.0
code seems a tactical stunt to win over developers who currently prefer the
relatively more open nature of the Android/Linux platform. While nearly
all the Android userspace applications are proprietary, and GPL violations on
Android devices abound, at least the copyleft license of Linux itself
provides the opportunity to keep the core operating system of Android
liberated. No matter how much Swift code is released, such will never be
true with Apple.

I’m often pointing out
in my recent
talks
how complex and treacherous the Open Source and Free Software
political climate became in the last decade. Here’s a great example: Apple
is a wily opponent, utilizing Open Source (the cooption of Free Software) to
manipulate the press and hoodwink the would-be spokespeople for Linux to
support them. Many of us software freedom advocates have predicted for
years that Free Software unfriendly companies like Apple would liberate
more and more code under non-copyleft licenses in an effort to create
walled gardens of seeming software freedom. I don’t revel in my past
accuracy of such predictions; rather, I feel simply the hefty weight of
Cassandra’s curse.

You’re Living in the Past, Dude!

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/08/05/living-in-the-past.html

At the 2000 Usenix
Technical Conference
(which was the primary “generalist”
conference for Free Software developers in those days), I met Miguel De
Icaza for the third time in my life. In those days, he’d just started
Helix Code (anyone else remember what Ximian used to be called?) and was
still president of the GNOME Foundation. To give you some context:
Bonobo was a centerpiece of new and active GNOME development then.

Out of curiosity and a little excitement about GNOME, I asked Miguel if
he could show me how to get the GNOME 1.2 running on my laptop. Miguel
agreed to help, quickly taking control of the keyboard and frantically
typing and editing my sources.list.

Debian potato was the just-becoming-stable release in those days, and
of course, I was still running potato (this was before
my experiment
with running things from testing began
).

After a few minutes hacking on my keyboard, Miguel realized that I
wasn’t running woody, Debian’s development release. Miguel looked at
me, and said: You aren’t running woody; I can’t make GNOME run on
this thing. There’s nothing I can do for you. You’re living in the
past, dude!. (Those who know Miguel IRL can imagine easily how he’d
sound saying this.)

So, I’ve told that story many times for the last eleven years. I
usually tell it for laughs, as it seems an equal-opportunity humorous
anecdote. It pokes some fun at Miguel, at me, at Debian for its release
cycle, and also at GNOME (which has, since its inception, tried
to never live in the past, dude).

Fact is, though, I rather like living in the past, at least
with regard to my computer setup. By way of desktop GUIs, I
used twm well into the
late 1990s, and used fvwm well into
the early 2000s. I switched to sawfish (then sawmill) during the
relatively brief period when GNOME used it as its default window
manager. When Metacity became the default, I never switched because I’d
configured sawfish so heavily.

In fact, the only actual parts of GNOME 2 that I ever used on a daily
basis have been (a) a small unobtrusive panel, (b) dbus (and its related
services), and (c) the Network Manager applet. When GNOME 3 was
released, I had no plans to switch to it, and frankly I still don’t.

I’m not embarrassed that I consistently live in the past; it’s
sort of the point. GNOME 3 isn’t for me; it’s for people who want their
desktop to operate in new and interesting ways. Indeed, it’s (in many
ways) for the people who are tempted to run OSX because its desktop is
different than the usual, traditional, “desktop metaphor”
experience that had been standard since the mid-1990s.

GNOME 3 just wasn’t designed with old-school Unix hackers in mind.
Those of us who don’t believe a computer is any good until we see a
command line aren’t going to be the early adopters who embrace GNOME 3.
For my part, I’ll actually try to avoid it as long as possible, continue
to run my little GNOME 2 panel and sawfish, until slowly, GNOME 3 will
seep into my workflow the way the GNOME 2 panel and sawfish did
when they were current, state-of-the-art GNOME
technologies.

I hope that other old-school geeks will see this distinction: we’re
past the era when every Free Software project is targeted at us hackers
specifically. Failing to notice this will cause us to ignore the deeper
problem software freedom faces. GNOME Foundation’s Executive Director
(and my good friend), Karen
Sandler
, pointed out in
her OSCON
keynote
something that’s bothered her and me for years: the majority of
computers at OSCON are Apple hardware running OSX. (In fact, I even
noticed Simon Phipps has one
now!) That’s the world we’re living in now. Users who
actually know about “Open Source” are now regularly
enticed to give up software freedom for shiny things.

Yes, as you just read, I can snicker as quickly as any
old-school command-line geek (just as
Linus
Torvalds did earlier this week
) at the pointlessness of wobbly
windows, desktop cubes, and zoom effects. I could also easily give a
treatise on how I can get work done faster, better, and smarter because
I have the technology of years ago that makes every keystroke
matter.

Notwithstanding that, I’d even love to have the same versatility with
GNOME 3 that I have with sawfish. And, if it turns out GNOME 3’s
embedded Javascript engine will give me the same hackability I prefer
with sawfish, I’ll adopt GNOME 3 happily. But, no matter what, I’ll
always be living in the past, because like every other human, I hate
changing anything, unless it’s strictly necessary or it’s my own
creation and derivation. Humans are like that: no matter who you are,
if it wasn’t your idea, you’re always slow to adopt something new and
change old habits.

Nevertheless, there’s actually nothing wrong with living in the
past — I quite like it myself. However, I’d suggest that care
be taken to not admonish those who make a go at creating the future.
(At this risk of making a conclusion that sounds like a time travel
joke,) don’t forget that their future will eventually
become that very past where I and others would prefer to
live.

Mac OSX 10.7 to include full disk encryption

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/Jy_Sl5uXF2g/mac-osx-107-to-include-full-disk.html

Apple’s recent developer preview announcement for 10.7 notes that it will include:”the all new FileVault, that provides high performance full disk encryption for local and external drives, and the ability to wipe data from your Mac instantaneously”This means that both Windows (BitLocker) and MacOS (FileVault) will have free, OS integrated full disk encryption.


Proprietary Licenses Are Even Worse Than They Look

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/04/07/proprietary-licenses.html

There are lots of evil things that proprietary software companies might
do. Companies put their own profit above the rights and freedoms of
their users, and to that end, much can be done that subjugates
users. Even as someone who avoids proprietary software, I still read
many proprietary license agreements (mainly to see how bad they are).
I’ve certainly become numb to the constant barrage of horrible
restrictions they place on users. But, sometimes, proprietary licenses
go so far that I’m taken aback by their gratuitous cruelty.

Apple’s licenses are probably the easiest example of proprietary
licensing terms that are well beyond reasonableness. Of course, Apple’s
licenses do the usual things like forbidding users from copying,
modifying, sharing, and reverse engineering the software. But even
worse, Apple also forbids users from running Apple software on any
hardware that is not produced by Apple.

The decoupling of one’s hardware vendor from one’s software vendor was
a great innovation brought about by the PC revolution, in which,
ironically, Apple played a role. Computing history has shown us that when
your software vendor also controls your hardware, you can easily be
“locked in” in ways that make mundane proprietary software
licenses seem almost nonthreatening.

Film image from Tron of the Master Control Program (MCP)

Indeed, Apple has such a good hype machine that
they even
have convinced some users this restrictive policy makes computing
better
. In this worldview, the paternalistic vendor will use its
proprietary controls over as many pieces of the technology as possible
to keep the infantile users from doing something that’s “just bad
for them”. The tyrannical
MCP
of Tron comes quickly to my mind.

I’m amazed that so many otherwise Free Software supporters are quite
happy using OSX and buying Apple products, given these kinds of utterly
unacceptable policies. The scariest part, though, is that this practice
isn’t confined to Apple. I’ve been recently reminded that other
companies, such
as IBM, do exactly the same thing
. As a Free Software advocate, I’m
critical of any company that uses their control of a proprietary
software license to demand that users run that software only on the
original company’s hardware as well. The production and distribution of
mundane proprietary software is bad enough. It’s unfortunate that
companies like Apple and IBM are going the extra mile to treat users
even worse.

Oh Nine Fifteen

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/oh-nine-fifteen.html

Last week I’ve released a
test version
for the upcoming 0.9.15 release of PulseAudio. It’s going to be a major one,
so here’s a little overview of what’s new from the user’s perspective.

Flat Volumes

Based on code originally contributed by Marc-André Lureau we now
support Flat Volumes. The idea behind flat volumes has been
inspired by how Windows Vista handles volume control: instead of
maintaining one volume control per application stream plus one device
volume we instead fix the device volume automatically to the “loudest”
application stream volume. Sounds confusing? Actually, it’s quite the contrary: it feels pretty natural and
easy to use, and it brings us a big step forward in reducing the
number of volume sliders in the entire audio pipeline from the application to what you hear.

The flat volumes logic only applies to devices where we know the
actual multiplication factor of the hardware volume slider. That’s
most devices supported by the ALSA kernel drivers except for a few
older devices and some cheap USB hardware that exports invalid
dB information.

On-the-fly Reconfiguration of Devices (aka “S/PDIF Support”)

PulseAudio will now automatically probe all possible combinations
of configurations for how to use your sound card for playback and
capturing, and then allow on-the-fly switching of the
configuration. What does that mean? Basically you may now switch
between “Analog Stereo”, “Digital S/PDIF Stereo”, “Analog Surround
5.1” (… and so on) on-the-fly without having to reconfigure PA on
the configuration file level or even having to stop your streams. This
fixes a couple of issues PA had previously, including proper SPDIF
support, and per-device configuration of the channel map of
devices.

Unfortunately there is no UI for this yet, and hence you need to
use pactl/pacmd on the command line to switch between the
profiles. Typing list-cards in pacmd will tell you
which profiles your card supports.
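
For example, a minimal sketch of switching profiles from the command line (the card index and profile name below are only illustrative; take the real ones from the list-cards output):

$ pacmd list-cards
$ pacmd set-card-profile 0 output:iec958-stereo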

In a later PA version this functionality will be extended to also
allow input connector switching (i.e. microphone vs. line-in) and
output connector switching (i.e. internal speakers vs. line-out)
on-the-fly.

Native support for 24bit samples

PA now supports 24bit packed samples as well as 24bit stored in
the LSBs of 32bit integers natively. Previously these formats were
always converted into 32bit MSB samples.

Airport Express Support

Colin Guthrie contributed native Airport Express support. This will
make the RAOP
audio output of ApEx routers appear like local sound devices
(unfortunately sound devices with a very long latency), i.e. any
application connecting to PulseAudio can output audio to ApEx devices
in a similar way to how iTunes can do it on MacOSX.

Before you ask: it is unlikely that we will ever make PulseAudio be
able to act as an ApEx compatible device that takes connections from
iTunes (i.e. becoming a RAOP server instead of just an RAOP client).
Apple has an unfriendly attitude of dongling their devices to their
applications: normally iTunes has to cryptographically authenticate
itself to the device and the device to iTunes. iTunes’ key has been
recovered by the infamous Jon Lech
Johansen
, but the device key is still unknown. Without that key it
is not realistically possible to disguise PA as an ApEx.

Other stuff

There have been some extensive changes to natively support
Bluetooth audio devices well by directly accessing BlueZ. This code
was originally contributed by the GSoC student João Paulo Rechi
Vita. Initially, 0.9.15 was intended to become the version where BT audio
just works. Unfortunately the kernel is not really up to that yet, and
I am not sure everything will be in place so that 0.9.15 will ship
with well working BT support.

There have been a lot of internal changes and API additions. Most of
these however are not visible to the user.


I wonder …

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/send-file.html

… whether the guys behind this know about this?

It’s a pleasure to see as many projects as possible making use of Avahi.
OTOH I believe that all solutions should speak the same protocol. Using
Apple’s somewhat standardized link-local iChat/XMPP protocol (which is what Telekinesis does) seems to be the
best option to me: because you get MacOSX interoperability for free and
many IM clients (including many on Windows) already contain support for this as
well.


FOMS/LCA Recap

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/foms-lca-recap.html

Finally, here’s my linux.conf.au 2007 and FOMS 2007
recap. Maybe a little bit late, but better late then never.

FOMS was a very well organized conference with a packed schedule
and a lot of high-profile attendees. To my surprise PulseAudio has been accepted by the
attendees without any opposition (at least none was expressed
aloud). After a few “discussions” on a few mailing lists (including
GNOME MLs) and some personal emails I got, I had thought that more
people were in opposition of the idea of having a userspace sound
daemon for the desktop. Apparently, I was overly pessimistic. Good
news, that!

During the FOMS conference we discussed the problems audio on Linux
currently has. One of the major issues still is that we’re lacking a
cross-platform PCM audio API everyone agrees on. ALSA is Linux-specific
and complicated to use. The only real contender is PortAudio. However,
PortAudio has its share of problems and hasn’t reach wide adoption
yet. Right now most larger software projects implement an audio
abstraction layer of some kind, and mostly in a very dirty, simplistic
and limited fashion. MPlayer does it, Xine does it, Flash does
it. Everyone does it, and it sucks. (Note: this is only a very short
overview why audio on Linux sucks right now. For a longer one, please
have a look on the first 15mins of my PulseAudio talk at LCA, linked
below.)

Several people were asking why not to make the PulseAudio API the
new “standard” PCM API for Linux. Due to several reasons that would be a
bad idea. First of all, the PulseAudio API cannot be used on anything
else but PulseAudio. While PulseAudio has been ported to Win32, Vista
already has a userspace desktop sound server, hence running PulseAudio
on top of that doesn’t make much sense. Thus the API is not exactly
cross-platform. Secondly, I – as the guy who designed it – am not
happy with the current PulseAudio API. While it is very powerful it is
also very difficult to use and easy to misuse, mostly due to its fully
asynchronous nature. In addition it is also not the exactly smallest
API around.

So, what could be done about this? We agreed on a – maybe –
controversional solution: defining yet another abstracted PCM audio
API. Yes, fixing the problem that we have too many conflicting,
competing sound systems by defining yet another API sounds like a
paradox, but I do believe this is the right path to follow. Why?
Because none of the currently available solutions is suitable for all
application areas we have on Linux. Either the current APIs are not
portable, or they are horribly difficult to use properly, or have a
strange license, or are too simple in their functionality. MacOSX
managed to establish a single audio API (CoreAudio) that makes almost
everyone happy on that system – and we should be able to do the same for
Linux. Secondly, none of the current APIs has been designed with
network sound servers in mind. However, proper networking support
reflects back into the API, and in a non-trivial way. An API which
works fine in networked environment needs to eliminate roundtrips
where possible, be open for time interpolation and have a flexible
buffering (besides other minor things). Thirdly none of the current
APIs offers enough functionality to properly support all the needs of
modern desktop sound systems, such as per-stream volumes, stream names
and notifications about external state changes.

During FOMS and LCA, Mikko Leppanen (from Nokia), Jean-Marc Valin
(from Xiph) and I sat down and designed a draft API for the
functionality we would like to see in this API. For the time being we
dubbed it libsydney, after the city where we started this
project. I plan to make this the only supported audio API for
PulseAudio, eventually. Thus, if you will code against PulseAudio you
will get cross-platform support for free. In addition, because
PulseAudio is now being integrated into the major distributions (at
least Ubuntu and Fedora), this library will be made available on most
systems through the backdoor.

So, what will this new API offer? Firstly, the buffering model is
much more powerful than of any current sound API. The buffering model
mostly follows PulseAudio’s internal buffering model which
(theoretically) can offer zero-latency streaming and has been
pioneered by Jim Gettys’ AF sound server. It allows you to seek around
in the playback buffer very flexibly. This is very useful to allow
very fast reaction to the user’s playback control commands while still
allowing large buffers, which are good to deal with high network
lag. In addition it is very handy for the programmer, such as when
implementing streaming clients where packets may arrive
out-of-order. The API will emulate this buffering model on top of
traditional audio devices, and when used on top of PulseAudio it will use
its native implementation. The API will also clearly define which
sound formats are guaranteed to be available, thus making it a lot
easier to code without thinking of different hardware supporting
different formats all the time. Of course, the API will be easier to
use than PulseAudio’s current API. It will be very portable, scaling
from FPU-less architectures to pro-audio machines with a massive
number of synchronised channels. There are several modes available to
deal with XRUNs semi-automatically, one of them guaranteeing that the
time axis stays linear and monotonic in all events.

The list of features of this new API is much longer, however,
enough of these grand plans! We didn’t write any real code for this
yet. To make sure that this project is not another one of those which
are announced grandiosely without ever producing any code I will stop
listing features here now. We will eventually publish a first draft of
our C API for public discussion. Stay tuned.

Side-by-side with libsydney I discussed an abstract API
for desktop event sounds with Mikko (i.e. those annoying “bing” sounds
when you click a button and the like). Dubbed libcanberra
(named after the city which one of the developers visited after
Sydney), this will hopefully be for the PulseAudio sample cache API
what libsydney is for the PulseAudio streaming API: a total
replacement.

As a by-product of the libsydney discussion Jean-Marc
coded a
fast C resampling library
supporting both floating point and fixed
point and being licensed under BSD. (In contrast to
libsamplerate which is GPL and floating-point-only, but which
probably has better quality). PulseAudio will make use of this new
library, as will libsydney. And I sincerly hope that ALSA,
GStreamer and other projects replace their crappy home-grown
resamplers with this one!

For PulseAudio I was looking for a CODEC which we could use to
encode audio if we have to transfer it over the network. Such a CODEC
would need to have low CPU requirements and allow low-latency
operation, while providing hifi audio. Compression ratio is not such a
high requirement. Unfortunately, as it seems no such CODEC exists,
especially not a “Free” one. However, the Xiph people recommended to
hack up a special version of FLAC for this task. FLAC is fast, has
(obviously) good quality and if hacked up could provide low-latency
encoding. However, FLAC doesn’t compress that well. Current PulseAudio
thin-client installations require 170kB network bandwidth for each
client if hifi audio is used. Encoding this in FLAC this could cut
this in half. Not perfect, but better than nothing.

So, that was FOMS! FOMS is a definitely highly recommended
conference. If you have the chance to attend next year, don’t miss it!
I’ve never been to a more productive, packed conference in my life!

At LCA I met fellow Avahi coder Trent Lloyd for the first time. Our
talk about Avahi went very well. During my flights to and back from
.au I hacked up avahi-ui
which I also announced during that talk. Also, in related news,
tedp started to work on an implementation of NAT-PMP
(aka “reverse firewall piercing”; both client and server) for
inclusion in Avahi. This will hopefully make the upcoming Wide-Area
DNS support in Avahi much more useful.

linux.conf.au was a very exciting conference. As a speaker
you’re treated like a rock star, with stuff like the speakers dinner,
the speakers adventure (climbing on top of Sydney’s AMP tower) and
the penguin dinner. Heck, the organizers even picked me up at the
airport, something I really didn’t expect when I landed in Sydney,
which however is quite nice after a 27h flight.

Two talks I particularly enjoyed at LCA:

nouveau – reverse engineered nvidia drivers (Ogg Theora)
burning cpu and battery on the gnome desktop (Ogg Theora)

And just for the sake of completeness, here are the links to my presentations:

The PulseAudio Sound Server (Ogg Theora; Slides)
Using Avahi the “Right Way” (Ogg Theora; Slides)

Ok, that’s it for now. Thanks go to Silvia Pfeiffer, the rest of
the FOMS team and the Seven Team for organizing these two amazing
conferences!