Tag Archives: osx

ESET Tries to Scare People Away From Using Torrents

Post Syndicated from Andy original https://torrentfreak.com/eset-tries-to-scare-people-away-from-using-torrents-170805/

Any company in the security game can be expected to play up threats among its customer base in order to get sales.

Sellers of CCTV equipment, for example, would have us believe that criminals don’t want to be photographed and will often go elsewhere in the face of that. Car alarm companies warn us that since X thousand cars are stolen every minute, an expensive immobilizer is an anti-theft must.

Of course, they’re absolutely right to point these things out. People want to know about these offline risks since they affect our quality of life. The same can be said of those that occur in the online world too.

We ARE all at risk of horrible malware that will trash our computers and steal our banking information so we should all be running adequate protection. That being said, how many times do our anti-virus programs actually trap a piece of nasty-ware in a year? Once? Twice? Ten times? Almost never?

The truth is we all need to be informed but it should be done in a measured way. That’s why an article just published by security firm ESET on the subject of torrents strikes a couple of bad chords, particularly with people who like torrents. It’s titled “Why you should view torrents as a threat” and predictably proceeds to outline why.

“Despite their popularity among users, torrents are very risky ‘business’,” it begins.

“Apart from the obvious legal trouble you could face for violating the copyright of musicians, filmmakers or software developers, there are security issues linked to downloading them that could put you or your computer in the crosshairs of the black hats.”

Aside from the use of the phrase “very risky” (‘some risk’ is a better description), there’s probably very little to complain about in this opening shot. However, things soon go downhill.

“Merely downloading the newest version of BitTorrent clients – software necessary for any user who wants to download or seed files from this ‘ecosystem’ – could infect your machine and irreversibly damage your files,” ESET writes.

Following that scary statement, some readers will have already vowed never to use a torrent again and moved on without reading any more, but the details are really important.

To support its claim, ESET points to two incidents in 2016 (which, to its great credit, the company actually discovered) involving the Transmission torrent client. Both involved deliberate third-party infection and, in the latter, hackers attacked Transmission’s servers and embedded malware in its OSX client before distribution to the public.

No doubt these were both miserable incidents (to which the Transmission team quickly responded) but to characterize this as a torrent client problem seems somewhat unfair.

People intent on spreading viruses and malware do not discriminate and will happily infect ANY piece of computer software they can. Sadly, many non-technical people reading the ESET post won’t read beyond the claim that installing torrent clients can “infect your machine and irreversibly damage your files.”

That’s a huge disservice to the hundreds of millions of torrent client installations that have taken place over a decade and a half and were absolutely trouble free. On a similar basis, we could argue that installing Windows is the main initial problem for people getting viruses from the Internet. It’s true but it’s also not the full picture.

Finally, the piece goes on to detail other incidents over the years where torrents have been found to contain malware. The cases highlighted by ESET are real and pretty unpleasant for victims, but the important thing to note here is that torrent users are no different from any other online users, no matter how they use the Internet.

People who download files from the Internet, from ALL untrusted sources, are putting themselves at risk of getting a virus or other malware. Whether that content is obtained from a website or a P2P network, the risks are ever-present and only a foolish person would do so without decent security software (such as ESET’s) protecting them.

The take home point here is to be aware of security risks and put them into perspective. It’s hard to put a percentage on these things but of the hundreds of millions of torrent and torrent client downloads that have taken place since their inception 15 years ago, the overwhelming majority have been absolutely fine.

Security situations do arise and we need to be aware of them, but presenting things in a way that spreads unnecessary concern in a particular sector isn’t necessary to sell products.

The AV-TEST Institute registers around 390,000 new malicious programs every day, the overwhelming majority of which have nothing to do with torrents; that is plenty for any anti-virus firm to deal with.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tumbleweed at Vuze as Torrent Client Development Grinds to a Halt

Post Syndicated from Andy original https://torrentfreak.com/tumbleweed-at-vuze-as-torrent-client-development-grinds-to-a-halt-170710/

Back in the summer of 2003 when torrenting was still in its infancy, a new torrent client hit the web promising big things.

Taking the Latin name of the blue poison dart frog and deploying a logo depicting its image, the Azureus client aimed to carve out a niche in what would become a market of several hundred million users.

Written in Java and available on Windows, Linux, OSX, and Android, Azureus (latterly ‘Vuze’) always managed to divide the community. Heralded by many as a feature-rich powerhouse that left no stone unturned, others saw the client as bloated when compared to the more streamlined uTorrent.

All that being said, Vuze knew its place in the market and on the bells-and-whistles front, it always delivered. Its features included swarm-merging, built-in search, DVD-burning capabilities, and device integration. It felt like Vuze was always offering something new.

Indeed, for the past several years and like clockwork, every month new additions and fixes have been deployed to Vuze. Since at least 2012 and up to early 2017, not a single month passed without Vuze being tuned up or improved in some manner via beta or full versions. Now, however, all of that seems to have ground to a halt.

The last full release of Vuze (v5.7.5.0) containing plenty of tweaks and fixes was released on February 28 this year. It followed the previous full release by roughly three months, a pattern its developers have kept up for some time with earlier versions. As expected, the Vuze 5.7.5.1 beta versions followed but on April 10, everything stopped.

It’s now three whole months since the last Vuze beta release, which may not sound like a long time unless one considers the history. Vuze has been actively developed for 14 years and its devblog archives show communications from its developers every single month, going back at least as far as July 2012. Since April, however – nothing.

Back in May, a user on Vuze forums noted that none of Vuze’s featured content (such as TED Talks) could be downloaded, while another reported that the client’s anti-virus definitions weren’t updating. Given past scheduling, a new version of the client should have been released about a month ago. Nothing appeared.

To illustrate, this is a screenshot of the Vuze source code repository, which shows the number of code changes committed since 2012. The drastic drop-off in April 2017 (12 commits) versus dozens to even hundreds in preceding months is punctuated by zero commits for the past three months.

Of course, even avid developers have offline lives, and it’s certainly possible that an unusual set of outside circumstances have conspired to give the impression that development has stopped. However, posting a note to the Vuze blog or Vuze forum shouldn’t be too difficult, so people are naturally worried about the future.

TorrentFreak has reached out to the respected developer identified by Vuze forum users as the most likely to respond to questions. At the time of publication, we had received no response.

As mentioned earlier, torrent users have a love/hate relationship with Vuze and Azureus but there is no mistaking this client’s massive contribution to the torrent landscape. Millions will be hoping that the current radio silence is nothing sinister but until that confirmation is received, the concerns will continue.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New – Managed Device Authentication for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-device-authentication-for-amazon-workspaces/

Amazon WorkSpaces allows you to access a virtual desktop in the cloud from the web and from a wide variety of desktop and mobile devices. This flexibility makes WorkSpaces ideal for environments where users have the ability to use their existing devices (often known as BYOD, or Bring Your Own Device). In these environments, organizations sometimes need the ability to manage the devices which can access WorkSpaces. For example, they may have to regulate access based on the client device operating system, version, or patch level in order to help meet compliance or security policy requirements.

Managed Device Authentication
Today we are launching device authentication for WorkSpaces. You can now use digital certificates to manage client access from Apple OSX and Microsoft Windows. You can also choose to allow or block access from iOS, Android, Chrome OS, web, and zero client devices. You can implement policies to control which device types you want to allow and which ones you want to block, with control all the way down to the patch level. Access policies are set for each WorkSpaces directory. After you have set the policies, requests to connect to WorkSpaces from a client device are assessed and either blocked or allowed. In order to make use of this feature, you will need to distribute certificates to your client devices using Microsoft System Center Configuration Manager or a mobile device management (MDM) tool.
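If you prefer to script these settings rather than use the console, the WorkSpaces API exposes the same controls. The following is a minimal, hypothetical sketch using the AWS SDK for JavaScript; the directory ID is a placeholder and the exact property names and allowed values are assumptions you should confirm against the current ModifyWorkspaceAccessProperties API reference.

// Hypothetical sketch: restrict WorkSpaces access by device type for one directory.
// Property names and values below are assumptions; check the WorkSpaces API reference.
const AWS = require('aws-sdk');
const workspaces = new AWS.WorkSpaces({ region: 'us-east-1' });

workspaces.modifyWorkspaceAccessProperties({
  ResourceId: 'd-90673xxxxx',          // your WorkSpaces directory ID (placeholder)
  WorkspaceAccessProperties: {
    DeviceTypeWindows: 'TRUST',        // require a trusted client certificate
    DeviceTypeOsx: 'TRUST',
    DeviceTypeIos: 'ALLOW',
    DeviceTypeAndroid: 'ALLOW',
    DeviceTypeWeb: 'DENY',
    DeviceTypeZeroClient: 'ALLOW'
  }
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Access properties updated');
});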

Here’s how you set your access control options from the WorkSpaces Console:

Here’s what happens if a client is not authorized to connect:

 

Available Today
This feature is now available in all Regions where WorkSpaces is available.

Jeff;

 

SAML for Your Serverless JavaScript Application: Part II

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/saml-for-your-serverless-javascript-application-part-ii/

Contributors: Richard Threlkeld, Gene Ting, Stefano Buliani

The full code for both scenarios, including SAM templates, can be found at the samljs-serverless-sample GitHub repository. We highly recommend you use the SAM templates in the GitHub repository to create the resources; optionally, you can create them manually.


This is the second part of a two-part series on using SAML providers in your application and receiving short-term credentials to access AWS services. These credentials can be limited with IAM roles so the users of the applications can perform actions like fetching data from databases or uploading files based on their level of authorization. For example, you may want to build a JavaScript application that allows a user to authenticate against Active Directory Federation Services (ADFS). The user can be granted scoped AWS credentials to invoke an API to display information in the application or write to an Amazon DynamoDB table.

Part I of this series walked through a client-side flow of retrieving SAML claims and passing them to Amazon Cognito to retrieve credentials. This blog post will take you through a more advanced scenario where logic can be moved to the backend for a more comprehensive and flexible solution.

Prerequisites

As in Part I of this series, you need ADFS running in your environment. The following configurations are used for reference:

  1. ADFS federated with the AWS console. For a walkthrough with an AWS CloudFormation template, see Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0.
  2. Verify that you can authenticate with user example\bob for both the ADFS-Dev and ADFS-Production groups via the sign-in page.
  3. Create an Amazon Cognito identity pool.

Scenario Overview

The scenario in the last blog post may be sufficient for many organizations but, due to size restrictions, some browsers may drop part or all of a query string when sending a large number of claims in the SAMLResponse. Additionally, for auditing and logging reasons, you may wish to relay SAML assertions via POST only and perform parsing in the backend before sending credentials to the client. This scenario allows you to perform custom business logic and validation as well as putting tracking controls in place.

In this post, we want to show you how these requirements can be achieved in a Serverless application. We also show how different challenges (like XML parsing and JWT exchange) can be addressed in a Serverless application design. Feel free to mix and match, or swap pieces around to suit your needs.

This scenario uses the following services and features:

  • Cognito for unique ID generation and default role mapping
  • S3 for static website hosting
  • API Gateway for receiving the SAMLResponse POST from ADFS
  • Lambda for processing the SAML assertion using a native XML parser
  • DynamoDB conditional writes for session tracking exceptions
  • STS for credentials via Lambda
  • KMS for signing JWT tokens
  • API Gateway custom authorizers for controlling per-session access to credentials, using JWT tokens that were signed with KMS keys
  • JavaScript-generated SDK from API Gateway using a service proxy to DynamoDB
  • RelayState in the SAMLRequest to ADFS to transmit the CognitoID and a short code from the client to your AWS backend

At a high level, this solution is similar to that of Scenario 1; however, most of the work is done in the infrastructure rather than on the client.

  • ADFS still uses a POST binding to redirect the SAMLResponse to API Gateway; however, the Lambda function does not immediately redirect.
  • The Lambda function decodes and uses an XML parser to read the properties of the SAML assertion.
  • If the user’s assertion shows that they belong to a certain group matching a specified string (“Prod” in the sample), then you assign a role that they can assume (“ADFS-Production”).
  • Lambda then gets the credentials on behalf of the user and stores them in DynamoDB as well as logging an audit record in a separate table.
  • Lambda then returns a short-lived, signed JSON Web Token (JWT) to the JavaScript application.
  • The application uses the JWT to get their stored credentials from DynamoDB through an API Gateway custom authorizer.

The architecture you build in this tutorial is outlined in the following diagram.

lambdasamltwo_1.png

First, a user visits your static website hosted on S3. They generate an ephemeral random code that is transmitted during redirection to ADFS, where they are prompted for their Active Directory credentials.

Upon successful authentication, the ADFS server redirects the SAMLResponse assertion, along with the code (as the RelayState) via POST to API Gateway.

The Lambda function parses the SAMLResponse. If the user is part of the appropriate Active Directory group (AWS-Production in this tutorial), it retrieves credentials from STS on behalf of the user.

The credentials are stored in a DynamoDB table called SAMLSessions, along with the short code. The user login is stored in a tracking table called SAMLUsers.

The Lambda function generates a JWT token, with a 30-second expiration time signed with KMS, then redirects the client back to the static website along with this token.

The client then makes a call to an API Gateway resource acting as a DynamoDB service proxy that retrieves the credentials via a DeleteItem call. To make this call, the client passes the JWT in the authorization header.

A custom authorizer runs to validate the token using the KMS key again as well as the original random code.

Now that the client has credentials, it can use these to access AWS resources.
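To make the token step above more concrete, here is a minimal, hypothetical sketch of how a Lambda function could produce the 30-second JWT with the nJwt library, using a plaintext key obtained by decrypting the stored CipherText blob with KMS. The claim name, the encryption context key, and the error handling are simplified assumptions; the real implementation is in the samljs-serverless-sample repository.

// Hypothetical sketch only: sign a short-lived session JWT with a KMS-decrypted key.
const AWS = require('aws-sdk');
const nJwt = require('njwt');
const kms = new AWS.KMS();

function signSessionToken(encryptedKeyBase64, identityHash, callback) {
  const params = {
    CiphertextBlob: new Buffer(encryptedKeyBase64, 'base64'),
    EncryptionContext: { context: process.env.ENC_CONTEXT } // assumption: must match GenerateDataKey
  };
  kms.decrypt(params, (err, data) => {
    if (err) return callback(err);
    const jwt = nJwt.create({ sub: identityHash }, data.Plaintext); // data.Plaintext is the signing key
    jwt.setExpiration(new Date().getTime() + 30 * 1000);            // 30-second lifetime
    callback(null, jwt.compact());
  });
}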

Tutorial: Backend processing and audit tracking

Before you walk through this tutorial you will need the source code from the samljs-serverless-sample GitHub repository. You should use the SAM template provided in order to streamline the process, but we’ll outline how you would manually create the resources too. There is a readme in the repository with instructions for using the SAM template. Either way, you will still perform the manual steps of KMS key configuration, ADFS enablement of RelayState, and Amazon Cognito Identity Pool creation. The template automates the creation of the S3 website, Lambda functions, API Gateway resources, and DynamoDB tables.

We walk through the details of all the steps and configuration below for illustrative purposes in this tutorial, calling out the sections that can be omitted if you used the SAM template.

KMS key configuration

To sign JWT tokens, you need a data key generated by KMS: only its encrypted form is persisted, while the plaintext form is used in memory for signing. You will need to complete this step even if you use the SAM template.

  1. In the IAM console, choose Encryption Keys, Create Key.
  2. For Alias, type sessionMaster.
  3. For Advanced Options, choose KMS, Next Step.
  4. For Key Administrative Permissions, select your administrative role or user account.
  5. For Key Usage Permissions, you can leave this blank as the IAM Role (next section) will have individual key actions configured. This allows you to perform administrative actions on the set of keys while the Lambda functions have rights to just create data keys for encryption/decryption and use them to sign JWTs.
  6. Take note of the Key ID, which is needed for the Lambda functions.

IAM role configuration

You will need an IAM role for executing your Lambda functions. If you are using the SAM template this can be skipped. The sample code in the GitHub repository under Scenario2 creates separate roles for each function, with limited permissions on individual resources when you use the SAM template. We recommend separate roles scoped to individual resources for production deployments. Your Lambda functions need the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1432927122000",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "kms:GenerateDataKey*",
                "kms:Decrypt"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Lambda function configuration

If you are not using the SAM template, create the following three Lambda functions from the GitHub repository in /Scenario2/lambda using the following names and environment variables. The Lambda functions are written in Node.js.

  • GenerateKey_awslabs_samldemo
  • ProcessSAML_awslabs_samldemo
  • SAMLCustomAuth_awslabs_samldemo

The functions above are built, packaged, and uploaded to Lambda. For two of the functions, this can be done from your workstation (the sample commands for each function assume OSX or Linux). The third will need to be built on an AWS EC2 instance running the current Lambda AMI.

GenerateKey_awslabs_samldemo

This function is only used one time to create keys in KMS for signing JWT tokens. The function calls GenerateDataKey and stores the encrypted CipherText blob as Base64 in DynamoDB. This is used by the other two functions for getting the PlainTextKey for signing with a Decrypt operation.

This function only requires a single file. It has the following environment variables:

  • KMS_KEY_ID: Unique identifier from KMS for your sessionMaster Key
  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or something unique to your organization)
  • RAND_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
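The full implementation is in /Scenario2/lambda/GenerateKey; purely as a hypothetical illustration of the idea, the core of such a function might look roughly like the following sketch (the DynamoDB attribute names are assumptions).

// Hypothetical sketch: create a data key with KMS and persist only the encrypted blob.
const AWS = require('aws-sdk');
const kms = new AWS.KMS();
const ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context) {
  kms.generateDataKey({
    KeyId: process.env.KMS_KEY_ID,
    KeySpec: 'AES_256',
    EncryptionContext: { context: process.env.ENC_CONTEXT } // assumption
  }, (err, data) => {
    if (err) return context.fail(err);
    // Store ONLY the encrypted key material; never persist or log data.Plaintext.
    ddb.put({
      TableName: process.env.SESSION_DDB_TABLE,
      Item: {
        identityhash: process.env.RAND_HASH,                 // assumption: key record ID
        cipherText: data.CiphertextBlob.toString('base64')
      }
    }, (putErr) => putErr ? context.fail(putErr) : context.succeed('key stored'));
  });
};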

Navigate into /Scenario2/lambda/GenerateKey and run the following commands:

zip -r generateKey.zip .

aws lambda create-function --function-name GenerateKey_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler index.handler --timeout 10 --memory-size 512 --zip-file fileb://generateKey.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,RAND_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,KMS_KEY_ID=<YOUR_KMS_KEY_ID>}

SAMLCustomAuth_awslabs_samldemo

This is an API Gateway custom authorizer called after the client has been redirected to the website as part of the login workflow. This function calls a GET against the service proxy to DynamoDB, retrieving credentials. The function uses the KMS key signing validation of the JWT created in the ProcessSAML_awslabs_samldemo function and also validates the random code that was generated at the beginning of the login workflow.
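Conceptually, the authorizer verifies the token and returns an IAM policy that allows or denies the API call. The following is a simplified, hypothetical sketch of that idea only; the real function in the repository also checks the original random code and obtains the signing key via DynamoDB and KMS rather than an environment variable.

// Hypothetical sketch of a TOKEN custom authorizer that validates the session JWT.
const nJwt = require('njwt');

function buildPolicy(effect, principalId, resource) {
  return {
    principalId: principalId,
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{ Action: 'execute-api:Invoke', Effect: effect, Resource: resource }]
    }
  };
}

exports.handler = function(event, context) {
  const token = event.authorizationToken;     // JWT from the Authorization header
  const signingKey = process.env.SIGNING_KEY; // placeholder: really a KMS Decrypt of the stored blob
  nJwt.verify(token, signingKey, (err, verified) => {
    if (err) return context.fail('Unauthorized'); // API Gateway maps this to a 401
    context.succeed(buildPolicy('Allow', verified.body.sub, event.methodArn));
  });
};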

You must install the dependencies before zipping this function up. It has the following environment variables:

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Navigate into /Scenario2/lambda/CustomAuth and run:

npm install

zip -r custom_auth.zip .

aws lambda create-function --function-name SAMLCustomAuth_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler CustomAuth.handler --timeout 10 --memory-size 512 --zip-file fileb://custom_auth.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}

ProcessSAML_awslabs_samldemo

This function is called when ADFS sends the SAMLResponse to API Gateway. The function parses the SAML assertion to select a role (based on a simple string search) and extract user information. It then uses this data to get short-term credentials from STS via AssumeRoleWithSAML and stores this information in a SAMLSessions table and tracks the user login via a SAMLUsers table. Both of these are DynamoDB tables but you could also store the user information in another AWS database type, as this is for auditing purposes. Finally, this function creates a JWT (signed with the KMS key) which is only valid for 30 seconds and is returned to the client as part of a 302 redirect from API Gateway.
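As a rough, hypothetical sketch of the core steps (the XPath expression, namespace URI, and role-selection logic below are simplified assumptions; see ProcessSAML.js in the repository for the real code):

// Hypothetical sketch: decode the SAMLResponse, find the Role attribute with libxmljs,
// and exchange the assertion for short-term credentials with STS.
const AWS = require('aws-sdk');
const libxmljs = require('libxmljs');
const sts = new AWS.STS();

function credentialsFromAssertion(samlResponseBase64, callback) {
  const xml = new Buffer(samlResponseBase64, 'base64').toString('utf8');
  const doc = libxmljs.parseXml(xml);

  // Role attribute values are pairs of role ARN and SAML provider ARN.
  const ns = { saml: 'urn:oasis:names:tc:SAML:2.0:assertion' };
  const values = doc.find(
    "//saml:Attribute[@Name='https://aws.amazon.com/SAML/Attributes/Role']/saml:AttributeValue",
    ns
  ).map(node => node.text());

  // Mirror the simple "Prod" string match described above.
  const selectedRole = values.filter(v => v.indexOf('Prod') !== -1)[0];
  if (!selectedRole) return callback(new Error('No matching role in assertion'));
  const roleArn = selectedRole.split(',').filter(a => a.indexOf(':role/') !== -1)[0];

  sts.assumeRoleWithSAML({
    RoleArn: roleArn,
    PrincipalArn: process.env.PRINCIPAL_ARN,
    SAMLAssertion: samlResponseBase64
  }, (err, data) => callback(err, data && data.Credentials));
}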

This function needs to be built on an EC2 server running Amazon Linux. This function leverages two main external libraries:

  • nJwt: Used for secure JWT creation for individual client sessions to get access to their records
  • libxmljs: Used for XML XPath queries of the decoded SAMLResponse from AD FS

Libxmljs uses native build tools and you should run this on EC2 running the same AMI as Lambda and with Node.js v4.3.2; otherwise, you might see errors. For more information about current Lambda AMI information, see Lambda Execution Environment and Available Libraries.

After you have the correct AMI launched in EC2 and have SSH open to that host, install Node.js. Ensure that the Node.js version on EC2 is 4.3.2, to match Lambda. If your version is off, you can roll back with NVM.

After you have set up Node.js, run the following command:

yum install -y make gcc*

Now, create a /saml folder on your EC2 server and copy up ProcessSAML.js and package.json from /Scenario2/lambda/ProcessSAML to the EC2 server. Here is a sample SCP command:

cd ProcessSAML/

ls

package.json    ProcessSAML.js

scp -i ~/path/yourpemfile.pem ./* ec2-user@[EC2_HOST]:/home/ec2-user/saml/

Then you can SSH to your server, cd into the /saml directory, and run:

npm install

A successful build should look similar to the following:

lambdasamltwo_2.png

Finally, zip up the package and create the function using the following AWS CLI command and these environment variables. Configure the CLI with your credentials as needed.

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • PRINCIPAL_ARN: Full ARN of the AD FS IdP created in the IAM console
  • USER_DDB_TABLE: SAMLUsers
  • REDIRECT_URL: Endpoint URL of your static S3 website (or CloudFront distribution domain name if you did that optional step)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

zip -r saml.zip .

aws lambda create-function --function-name ProcessSAML_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler ProcessSAML.handler --timeout 10 --memory-size 512 --zip-file fileb://saml.zip --environment Variables={USER_DDB_TABLE=SAMLUsers,SESSION_DDB_TABLE=SAMLSessions,REDIRECT_URL=<your S3 bucket and test page path>,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,ENC_CONTEXT=ADFS,PRINCIPAL_ARN=<your ADFS IdP ARN>}

If you built the first two functions on your workstation and created the ProcessSAML_awslabs_samldemo function separately in the Lambda console before building on EC2, you can update the code after building with the following command:

aws lambda update-function-code --function-name ProcessSAML_awslabs_samldemo --zip-file fileb://saml.zip

Role trust policy configuration

This scenario uses STS directly to assume a role. You will need to complete this step even if you use the SAM template. Modify the trust policy, as you did before when Amazon Cognito was assuming the role. In the GitHub repository sample code, ProcessSAML.js is preconfigured to filter and select a role with “Prod” in the name via the selectedRole variable.

This is an example of business logic you can alter in your organization later, such as a callout to an external mapping database for other rules matching. In this tutorial, it corresponds to the ADFS-Production role that was created.

  1. In the IAM console, choose Roles and open the ADFS-Production Role.
  2. Edit the Trust Permissions field and replace the content with the following:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": [
              "arn:aws:iam::ACCOUNTNUMBER:saml-provider/ADFS"
    ]
          },
          "Action": "sts:AssumeRoleWithSAML"
        }
      ]
    }

If you end up using another role (or add more complex filtering/selection logic), ensure that those roles have similar trust policy configurations. Also note that the sample policy above purposely uses an array for the federated provider matching the IdP ARN that you added. If your environment has multiple SAML providers, you could list them here and modify the code in ProcessSAML.js to process requests from different IdPs and grant or revoke credentials accordingly.

DynamoDB table creation

If you are not using the SAM template, create two DynamoDB tables:

  • SAMLSessions: Temporarily stores credentials from STS. Credentials are removed by an API Gateway Service Proxy to the DynamoDB DeleteItem call that simultaneously returns the credentials to the client.
  • SAMLUsers: This table is for tracking user information and the last time they authenticated in the system via ADFS.

The following AWS CLI commands create the tables (each indexed only with a primary hash key, called identityhash and CognitoID respectively):

aws dynamodb create-table \
    --table-name SAMLSessions \
    --attribute-definitions \
        AttributeName=identityhash,AttributeType=S \
    --key-schema AttributeName=identityhash,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
aws dynamodb create-table \
    --table-name SAMLUsers \
    --attribute-definitions \
        AttributeName=CognitoID,AttributeType=S \
    --key-schema AttributeName=CognitoID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

After the tables are created, you should be able to run the GenerateKey_awslabs_samldemo Lambda function and see a CipherText key stored in SAMLSessions. This is only for convenience of this post, to demonstrate that you should persist CipherText keys in a data store and never persist plaintext keys that have been decrypted. You should also never log plaintext keys in your code.

API Gateway configuration

If you are not using the SAM template, you will need to create API Gateway resources. If you have created resources for Scenario 1 in Part I, then the naming of these resources may be similar. If that is the case, then simply create an API with a different name (SAMLAuth2 or similar) and follow these steps accordingly.

  1. In the API Gateway console for your API, choose Authorizers, Custom Authorizer.
  2. Select your region and enter SAMLCustomAuth_awslabs_samldemo for the Lambda function. Choose a friendly name like JWTParser and ensure that Identity token source is method.request.header.Authorization. This tells the custom authorizer to look for the JWT in the Authorization header of the HTTP request, which is specified in the JavaScript code on your S3 webpage. Save the changes.

    lambdasamltwo_3.png

Now it’s time to wire up the Lambda functions to API Gateway.

  1. In the API Gateway console, choose Resources, select your API, and then create a Child Resource called SAML. This includes a POST and a GET method. The POST method uses the ProcessSAML_awslabs_samldemo Lambda function and a 302 redirect, while the GET method uses the JWTParser custom authorizer with a service proxy to DynamoDB to retrieve credentials upon successful authorization.
  2. lambdasamltwo_4.png

  3. Create a POST method. For Integration Type, choose Lambda and add the ProcessSAML_awslabs_samldemo Lambda function. For Method Request, add headers called RelayState and SAMLResponse.

    lambdasamltwo_5.png

  4. Delete the Method Response code for 200 and add a 302. Create a response header called Location. In the Response Models section, for Content-Type, choose application/json and for Models, choose Empty.

    lambdasamltwo_6.png

  5. Delete the Integration Response section for 200 and add one for 302 that has a Method response status of 302. Edit the response header for Location to add a Mapping value of integration.response.body.location.

    lambdasamltwo_7.png

  6. Finally, in order for Lambda to capture the SAMLResponse and RelayState values, choose Integration Request.

  7. In the Body Mapping Template section, for Content-Type, enter application/x-www-form-urlencoded and add the following template:

    {
    "SAMLResponse" :"$input.params('SAMLResponse')",
    "RelayState" :"$input.params('RelayState')",
    "formparams" : $input.json('$')
    }

  8. Create a GET method with an Integration Type of Service Proxy. Select the region and DynamoDB as the AWS Service. Use POST for the HTTP method and DeleteItem for the Action. This is important as you leverage a DynamoDB feature to return the current records when you perform deletion. This simultaneously allows credentials in this system to not be stored long term and also allows clients to retrieve them. For Execution role, use the Lambda role from earlier or a new role that only has IAM scoped permissions for DeleteItem on the SAMLSessions table.

    lambdasamltwo_8.png

  9. Save this and open Method Request.

  10. For Authorization, select your custom authorizer JWTParser. Add in a header called COGNITO_ID and save the changes.

    lambdasamltwo_9.png

  11. In the Integration Request, add in a header name of Content-Type and a value for Mapped of 'application/x-amzn-json-1.0' (you need the single quotes surrounding the entry).

  12. Next, in the Body Mapping Template section, for Content-Type, enter application/json and add the following template:

    {
        "TableName": "SAMLSessions",
        "Key": {
            "identityhash": {
                "S": "$input.params('COGNITO_ID')"
            }
        },
        "ReturnValues": "ALL_OLD"
    }

Inspect this closely for a moment. When your client passes the JWT in an Authorization Header to this GET method, the JWTParser Custom Authorizer grants/denies executing a DeleteItem call on the SAMLSessions table.


If access is granted, there must be an item in the table to delete, referenced by its primary key. The client JavaScript (seen in a moment) passes its CognitoID through as a header called COGNITO_ID that is mapped above. DeleteItem executes to remove the credentials that were placed there via a call to STS by the ProcessSAML_awslabs_samldemo Lambda function. Because the above action specifies ALL_OLD under the ReturnValues mapping, DynamoDB returns these credentials at the same time.

lambdasamltwo_10.png

  1. Save the changes and open your /saml resource root.
  2. Choose Actions, Enable CORS.
  3. In the Access-Control-Allow-Headers section, add COGNITO_ID into the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  4. When completed, choose Actions, Deploy API. Use the Prod stage or another stage.
  5. In the Stage Editor, choose SDK Generation. For Platform, choose JavaScript and then choose Generate SDK. Save the folder someplace close. Take note of the Invoke URL value at the top, as you need this for ADFS configuration later.

Website configuration

If you are not using the SAM template, create an S3 bucket and configure it as a static website in the same way that you did for Part I.

If you are using the SAM template, this is created for you automatically; however, the steps below still need to be completed:

In the source code repository, edit /Scenario2/website/configs.js.

  1. Ensure that the identityPool value matches your Amazon Cognito Pool ID and the region is correct.
  2. Leave adfsUrl the same if you’re testing on your lab server; otherwise, update with the AD FS DNS entries as appropriate.
  3. Update the relayingPartyId value as well if you used something different from the prerequisite blog post.

Next, download the minified version of the AWS SDK for JavaScript in the Browser (aws-sdk.min.js) and place it along with the other files in /Scenario2/website into the S3 bucket.

Copy the files from the API Gateway generated SDK in the last section to this bucket so that apigClient.js and the lib folder are both in the root directory. The imports for these scripts (which do things like sign API requests and configure headers for the JWT in the Authorization header) are already included in the index.html file. Consult the latest API Gateway documentation if the SDK generation process changes in the future.
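For reference, the credential pickup boils down to a single authorized GET. The sample site uses the generated SDK, but a stripped-down, hypothetical equivalent (the endpoint URL, placeholder values, and response shape below are assumptions for illustration only) looks roughly like this:

// Hypothetical browser-side sketch of the credential pickup call.
var sessionJwt = '...'; // the short-lived JWT returned by ProcessSAML (placeholder)
var cognitoId = '...';  // the Cognito identity ID used as the DynamoDB key (placeholder)

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://your-api-id.execute-api.us-east-1.amazonaws.com/Prod/saml'); // placeholder URL
xhr.setRequestHeader('Authorization', sessionJwt); // validated by the JWTParser authorizer
xhr.setRequestHeader('COGNITO_ID', cognitoId);     // key used by the DeleteItem service proxy
xhr.onload = function () {
  // With ReturnValues ALL_OLD, the deleted item (the stored STS credentials) comes back under Attributes.
  var item = JSON.parse(xhr.responseText).Attributes;
  console.log('Successful DDB call', item);
};
xhr.send();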

ADFS configuration

Now that the AWS setup is complete, modify your ADFS setup to capture RelayState information about the client and to send the POST response to API Gateway for processing. You will need to complete this step even if you use the SAM template.

If you’re using Windows Server 2008 with ADFS 2.0, ensure that Update Rollup 2 is installed before enabling RelayState. Please see official Microsoft documentation for specific download information.

  1. After Update Rollup 2 is installed, modify %systemroot%\inetpub\adfs\ls\web.config. If you’re on a newer version of Windows Server running AD FS 3.0, modify %systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config.
  2. Find the section in the XML marked <Microsoft.identityServer.web> and add an entry for <useRelayStateForIdpInitiatedSignOn enabled="true" />. If you have the proper ADFS rollup or version installed, this should allow the RelayState parameter to be accepted by the service provider.
  3. In the ADFS console, open Relying Party Trusts for Amazon Web Services and choose Endpoints.
  4. For Binding, choose POST and for Invoke URL, enter the URL to your API Gateway from the stage that you noted earlier.

At this point, you are ready to test out your webpage. Navigate to the S3 static website Endpoint URL and it should redirect you to the ADFS login screen. If the user login has been recent enough to have a valid SAML cookie, then you should see the login pass-through; otherwise, a login prompt appears. After the authentication has taken place, you should quickly end up back at your original webpage. Using the browser debugging tools, you see “Successful DDB call” followed by the results of a call to STS that were stored in DynamoDB.

lambdasamltwo_11.png

As in Scenario 1, the sample code under /scenario2/website/index.html has a button that allows you to “ping” an endpoint to test if the federated credentials are working. If you have used the SAM template this should already be working and you can test it out (it will fail at first – keep reading to find out how to set the IAM permissions!). If not, go to API Gateway and create a new resource called /users at the same level as /saml in your API, with a GET method.

lambdasamltwo_12.png

For Integration type, choose Mock.

lambdasamltwo_13.png

In the Method Request, for Authorization, choose AWS_IAM. In the Integration Response, in the Body Mapping Template section, for Content-Type, choose application/json and add the following JSON:

{
    "status": "Success",
    "agent": "${context.identity.userAgent}"
}

lambdasamltwo_14.png

Before using this new Mock API as a test, configure CORS and re-generate the JavaScript SDK so that the browser knows about the new methods.

  1. On the /saml resource root, choose Actions, Enable CORS.
  2. In the Access-Control-Allow-Headers section, add COGNITO_ID into the end (inside the quotes and separated from other headers by a comma) and then choose Enable CORS and replace existing CORS headers.
  3. Choose Actions, Deploy API. Use the stage that you configured earlier.
  4. In the Stage Editor, choose SDK Generation and select JavaScript as your platform. Choose Generate SDK.
  5. Upload the new apigClient.js and lib directory to the S3 bucket of your static website.

One last thing must be completed before testing (you will need to complete this step even if you use the SAM template) so that the credentials can invoke this mock endpoint with AWS_IAM authorization: the ADFS-Production role needs execute-api:Invoke permissions for this API Gateway resource.

  1. In the IAM console, choose Roles, and open the ADFS-Production Role.

  2. For testing, you can attach the AmazonAPIGatewayInvokeFullAccess policy; however, for production, you should scope this down to the resource as documented in Control Access to API Gateway with IAM Permissions.

  3. After you have attached a policy with invocation rights and authenticated with AD FS to finish the redirect process, choose PING.

If everything has been set up successfully you should see an alert with information about the user agent.

Final Thoughts

We hope these scenarios and sample code help you to not only begin to build comprehensive enterprise applications on AWS but also to enhance your understanding of different AuthN and AuthZ mechanisms. Consider some ways that you might be able to evolve this solution to meet the needs of your own customers and innovate in this space. For example:

  • Completing the CloudFront configuration and leveraging SSL termination for site identification. See if this can be incorporated into the Lambda processing pipeline.
  • Attaching a scope-down IAM policy if the business rules are matched. For example, the default role could be more permissive for a group, but if the user is a contractor (username with -C appended) they get extra restrictions applied when assumeRoleWithSaml is called in the ProcessSAML_awslabs_samldemo Lambda function; see the sketch after this list.
  • Changing the time duration before credentials expire on a per-role basis. Perhaps if the SAMLResponse parsing determines the user is an Administrator, they get a longer duration.
  • Passing through additional user claims in SAMLResponse for further logical decisions or auditing by adding more claim rules in the ADFS console. This could also be a mechanism to synchronize some Active Directory schema attributes with AWS services.
  • Granting different sets of credentials if a user has accounts with multiple SAML providers. While this tutorial was made with ADFS, you could also leverage it with other solutions such as Shibboleth and modify the ProcessSAML_awslabs_samldemo Lambda function to be aware of the different IdP ARN values. Perhaps your solution grants different IAM roles for the same user depending on if they initiated a login from Shibboleth rather than ADFS?
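For the contractor scope-down idea above, a hypothetical sketch could pass an inline session policy to AssumeRoleWithSAML so the returned credentials are the intersection of the role policy and the extra restriction. The policy content and the "-C" username check are illustrative assumptions only.

// Hypothetical sketch: tighter credentials for contractor accounts via an inline Policy.
const AWS = require('aws-sdk');
const sts = new AWS.STS();

function assumeWithOptionalScopeDown(username, roleArn, principalArn, assertion, callback) {
  const params = { RoleArn: roleArn, PrincipalArn: principalArn, SAMLAssertion: assertion };
  if (/-C$/.test(username)) {                    // contractor naming convention (assumption)
    params.Policy = JSON.stringify({
      Version: '2012-10-17',
      Statement: [{ Effect: 'Allow', Action: ['dynamodb:GetItem'], Resource: '*' }]
    });
  }
  sts.assumeRoleWithSAML(params, callback);
}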

The Lambda functions can be altered to take advantage of these options which you can read more about here. For more information about ADFS claim rule language manipulation, see The Role of the Claim Rule Language on Microsoft TechNet.

We would love to hear feedback from our customers on these designs and see different secure application designs that you’re implementing on the AWS platform.

Stitch – Python Remote Administration Tool AKA RAT

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/ni6lXu8TvAg/

Stitch is a cross-platform Python Remote Administration Tool, commonly known as a RAT. This framework allows you to build custom payloads for Windows, Mac OSX and Linux as well. You are able to select whether the payload binds to a specific IP and port, listens for a connection on a port, option to send an […]

The post Stitch – Python…

Read the full post at darknet.org.uk

OWASP VBScan – vBulletin Vulnerability Scanner

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/TTIz-sCvWbk/

OWASP VBScan, short for vBulletin Vulnerability Scanner, is an open-source project in the Perl programming language to detect VBulletin CMS vulnerabilities and analyse them. Features VBScan currently has include: compatibility with Windows, Linux & OSX, an up-to-date exploit database, full path disclosure, firewall detect & bypass, version check…

Read the full post at darknet.org.uk

Scripting Languages for AWS Lambda: Running PHP, Ruby, and Go

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/

Dimitrij Zub, Solutions Architect
Raphael Sack, Technical Trainer

In our daily work with partners and customers, we see a lot of amazing skills, expertise, and experience across many fields and programming languages. From languages that have been around for a while to languages on the cutting edge, many teams have developed a deep understanding of the concepts of each language; they want to apply these languages with and within the innovations coming from AWS, such as AWS Lambda.

Lambda provides native support for a wide array of languages, such as Scala, Java, Node.js, Python, and C/C++ Linux native applications. In this post, we outline how you can use Lambda with different scripting languages.

For each language, you perform the following tasks:

  • Prepare: Launch an instance from an AMI and log in via SSH
  • Compile and package the language for Lambda
  • Install: Create the Lambda package and test the code

The preparation and installation steps are similar between languages, but we provide step-by-step guides and examples for compiling and packaging PHP, Go, and Ruby.

Common steps to prepare

Lambda can run arbitrary executables, so the first task is to prepare binaries that can be executed within the Lambda environment.

The following steps are only an overview on how to get PHP, Go, or Ruby up and running on Lambda; however, using this approach, you can add more specific libraries, extend the compilation scope, and leverage JSON to interconnect your Lambda function to Amazon API Gateway and other services.

After your binaries have been compiled and your basic folder structure is set up, you won’t need to redo those steps for new projects or variations of your code. Simply write the code to accept input from STDIN and return output to STDOUT, and the Node.js wrapper takes care of bridging the runtimes for you.

For the sake of simplicity, we demonstrate the preparation steps for PHP only, but these steps are also applicable for the other environments described later.

In the Amazon EC2 console, choose Launch instance. When you choose an AMI, use one of the AMIs in the Lambda Execution Environment and Available Libraries list for the same region in which you will run the PHP code; this gives you an EC2 instance to use as a compiler. For more information, see Step 1: Launch an Instance.

Pick t2.large as the EC2 instance type to have two cores and 8 GB of memory for faster PHP compilation times.

languages_1.png

Choose Review and Launch to use the defaults for storage and add the instance to a default, SSH only, security group generated by the wizard.

Choose Launch to continue; in the launch dialog, you can select an existing key-pair value for your login or create a new one. In this case, create a new key pair called “php” and download it.

languages_2.png

After downloading the keys, navigate to the download folder and run the following command:

chmod 400 php.pem

This is required because of SSH security standards. You can now connect to the instance using the EC2 public DNS. Get the value by selecting the instance in the console and looking it up under Public DNS in the lower right part of the screen.

ssh -i php.pem ec2-user@[PUBLIC DNS]

You’re done! With this instance up and running, you have the right AMI in the right region to be able to continue with all the other steps.

Getting ready for PHP

After you have logged in to your running AMI, you can start compiling and packaging your environment for Lambda. With PHP, you compile the PHP 7 environment from the source and make it ready to be packaged for the Lambda environment.

Setting up PHP on the instance

The next step is to prepare the instance to compile PHP 7, configure the PHP 7 compiler to output in a defined directory, and finally compile PHP 7 to the Lambda AMI.

Update the package manager by running the following command:

sudo yum update -y

Install the minimum necessary libraries to be able to compile PHP 7:

sudo yum install gcc gcc-c++ libxml2-devel -y 

With the dependencies installed, you need to download the PHP 7 sources, available from PHP Downloads.

For this post, we were running the EC2 instance in Ireland, so we selected http://ie1.php.net/get/php-7.0.7.tar.bz2/from/this/mirror as our mirror. Run the following command to download the sources to the instance and choose your own mirror for the appropriate region.

cd ~

wget http://ie1.php.net/distributions/php-7.0.7.tar.bz2

Extract the files using the following command:

tar -jxvf php-7.0.7.tar.bz2

This creates the php-7.0.7 folder in your home directory. Next, create a dedicated folder for the php-7 binaries and configure the build from inside the extracted source folder by running the following commands.

mkdir /home/ec2-user/php-7-bin

cd ~/php-7.0.7

./configure --prefix=/home/ec2-user/php-7-bin/

This makes sure the PHP compilation is nicely packaged into the php binaries folder you created in your home directory. Keep in mind that you only compile the baseline PHP here to reduce the number of dependencies required for your Lambda function.

You can add more dependencies and more compiler options to your PHP binaries using the options available in ./configure. Run ./configure -h for more information about what can be packaged into your PHP distribution to be used with Lambda, but also keep in mind that this will increase the overall size of the binaries package.

Finally, run the following command to start the compilation:

make install

languages_3.png

https://xkcd.com/303/

After the compilation is complete, you can quickly confirm that PHP is functional by running the following command:

cd ~/php-7-bin/bin/

./php -v

PHP 7.0.7 (cli) (built: Jun 16 2016 09:14:04) ( NTS )

Copyright (c) 1997-2016 The PHP Group

Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies

Time to code

Using your favorite editor, create an entry-point PHP file, which in this case reads input from a Linux pipe and provides its output to stdout. For this example, it takes a simple JSON document and counts the number of top-level attributes. Name the file helloLambda.php, which is the name the Node.js wrapper created later expects.

<?php

$data = stream_get_contents(STDIN);

$json = json_decode($data, true);

$result = json_encode(array('result' => count($json)));

echo $result."\n";

?>

Creating the Lambda package

With PHP compiled and ready to go, all you need to do now is to create your Lambda package with the Node.js wrapper as an entry point.

First, tar the php-7-bin folder where the binaries reside using the following command:

cd ~

tar -zcvf php-7-bin.tar.gz php-7-bin/

Download it to your local project folder where you can continue development, by logging out and running the following command from your local machine (Linux or OSX), or using tools like WinSCP on Windows:

scp -i php.pem ec2-user@[EC2_HOST]:~/php-7-bin.tar.gz .

With the package downloaded, create your Lambda project in a new folder, which you can call php-lambda for this specific example. Unpack all files into this folder, which should result in the following structure:

php-lambda 

+-- php-7-bin

The next step is to create a Node.js wrapper file. The file takes the inputs of the Lambda invocations, invokes the PHP binary with helloLambda.php as a parameter, and provides the inputs via Linux pipe to PHP for processing. Call the file php.js and copy the following content:

process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

const spawn = require('child_process').spawn;

exports.handler = function(event, context) {

    //var php = spawn('php',['helloLambda.php']); //local debug only
    var php = spawn('php-7-bin/bin/php',['helloLambda.php']);
    var output = "";

    //send the input event json as string via STDIN to php process
    php.stdin.write(JSON.stringify(event));

    //close the php stream to unblock php process
    php.stdin.end();

    //dynamically collect php output
    php.stdout.on('data', function(data) {
          output+=data;
    });

    //react to potential errors
    php.stderr.on('data', function(data) {
            console.log("STDERR: "+data);
    });

    //finalize when php process is done.
    php.on('close', function(code) {
            context.succeed(JSON.parse(output));
    });
}

//local debug only
//exports.handler(JSON.parse('{"hello":"world"}'));

With all the files finalized, the folder structure should look like the following:

php-lambda

+-- php-7-bin

-- helloLambda.php

-- php.js

The final step before the deployment is to zip the package into an archive which can be uploaded to Lambda. Call the package LambdaPHP.zip. Feel free to remove unnecessary files, such as phpdebug, from the php-7-bin/bin folder to reduce the size of the archive.

Go, Lambda, go!

The following steps are an overview of how to compile and execute Go applications on Lambda. As with the PHP section, you are free to enhance and build upon the Lambda function with other AWS services and your application infrastructure. Though this example allows you to use your own Linux machine with a fitting distribution to work locally, it might still be useful to understand the Lambda AMIs for test and automation.

To further enhance your environment, you may want to create an automatic compilation pipeline and even deployment of the Go application to Lambda. Consider using versioning and aliases, as they help in managing new versions and dev/test/production code.

Setting up Go on the instance

The next step is to set up the Go binaries on the instance, so that you can compile the upcoming application.

First, make sure your packages are up to date (always):

sudo yum update -y

Next, visit the official Go site, check for the latest version, and download it to EC2 or to your local machine if using Linux:

cd ~

wget https://storage.googleapis.com/golang/go1.6.2.linux-amd64.tar.gz

Extract the files using the following command:

tar -xvf go1.6.2.linux-amd64.tar.gz

This creates a folder named “go” in your home directory.

Time to code

For this example, you create a very simple application that counts the number of objects in the provided JSON element. Using your favorite editor, create a file named “HelloLambda.go” with the following code directly on the machine to which you have downloaded the Go package, which may be the EC2 instance you started in the beginning or your local environment, in which case you are not stuck with vi.

package main

import (
    "fmt"
    "os"
    "encoding/json"
)

func main() {
    var dat map[string]interface{}
    
    fmt.Printf( "Welcome to Lambda Go, now Go Go Go!n" )
    if len( os.Args ) < 2 {
        fmt.Println( "Missing args" )
        return
    }

    err := json.Unmarshal([]byte(os.Args[1]), &dat)

    if err == nil {
        fmt.Println( len( dat ) )
    } else {
        fmt.Println(err)
    }
}

Before compiling, configure an environment variable to tell the Go compiler where all the files are located:

export GOROOT=~/go/

You are now set to compile a nifty new application! Name the output binary goLambdaGo, as the wrapper and copy steps below expect that name:

~/go/bin/go build -o goLambdaGo ./HelloLambda.go

Start your application for the very first time:

./goLambdaGo '{ "we" : "love", "using" : "Lambda" }'

You should see output similar to:

Welcome to Lambda Go, now Go Go Go!

2

Creating the Lambda package

You have already set up your machine to compile Go applications, written the code, and compiled it successfully; all that is left is to package it up and deploy it to Lambda.

If you used an EC2 instance, copy the binary from the compilation instance and prepare it for packaging. To copy out the binary, use the following command from your local machine (Linux or OSX), or using tools such as WinSCP on Windows.

scp -i GoLambdaGo.pem ec2-user@[EC2_HOST]:~/goLambdaGo .

With the binary ready, create the Lambda project in a new folder, which you can call go-lambda.

The next step is to create a Node.js wrapper file to invoke the Go application; call it go.js. The file takes the inputs of the Lambda invocations and invokes the Go binary.

Here’s the content for another example of a Node.js wrapper:

const exec = require('child_process').exec;
exports.handler = function(event, context) {
    const child = exec('./goLambdaGo ' + '\'' + JSON.stringify(event) + '\'', (error) => {
        // Resolve with result of process
        context.done(error, 'Process complete!');
    });

    // Log process stdout and stderr
    child.stdout.on('data', console.log);
    child.stderr.on('data', console.error);

}

With all the files finalized and ready, your folder structure should look like the following:

go-lambda

-- go.js

-- goLambdaGo

The final step before deployment is to zip the package into an archive that can be uploaded to Lambda; call the package go.zip.

On a Linux or OSX machine, run the following command:

zip -r go.zip ./goLambdaGo ./go.js

A gem in Lambda

For convenience, you can use the same previously used instance, but this time to compile Ruby for use with Lambda. You can also create a new instance using the same instructions.

Setting up Ruby on the instance

The next step is to set up the Ruby binaries and dependencies on the EC2 instance or local Linux environment, so that you can package the upcoming application.

First, make sure your packages are up to date (always):

sudo yum update -y

For this post, you use Traveling Ruby, a project that helps in creating “portable”, self-contained Ruby packages. You can download the latest version from Traveling Ruby linux-x86:

cd ~

wget http://d6r77u77i8pq3.cloudfront.net/releases/traveling-ruby-20150715-2.2.2-linux-x86_64.tar.gz

Extract the files to a new folder using the following command:

mkdir LambdaRuby

tar -xvf traveling-ruby-20150715-2.2.2-linux-x86_64.tar.gz -C LambdaRuby

This creates the “LambdaRuby” folder in your home directory.

Time to code

For this demonstration, you create a very simple application that counts the number of objects in a provided JSON element. Using your favorite editor, create a file named “lambdaRuby.rb” inside the LambdaRuby folder with the following code:

#!./bin/ruby

require 'json'

# You can use this to check your Ruby version from within the script: puts(RUBY_VERSION)

if ARGV.length > 0
    puts JSON.parse( ARGV[0] ).length
else
    puts "0"
end

Now, make the script executable and start your application for the very first time, using the following commands (run from inside the LambdaRuby folder):

chmod +x lambdaRuby.rb

./lambdaRuby.rb '{ "we" : "love", "using" : "Lambda" }'

You should see the amount of fields in the JSON as output (2).

Creating the Lambda package

You have downloaded the Ruby gem, written the code, and tested it successfully… all that is left is to package it up and deploy it to Lambda. Because Ruby is an interpreter-based language, you create a Node.js wrapper and package it with the Ruby script and all the Ruby files.

The next step is to create a Node.js wrapper file to invoke your Ruby application; call it ruby.js. The file takes the inputs of the Lambda invocations and invokes your Ruby application. Here’s the content for a sample Node.js wrapper:

const exec = require('child_process').exec;

exports.handler = function(event, context) {
    const child = exec('./lambdaRuby.rb ' + '\'' + JSON.stringify(event) + '\'', (error) => {
        // exec passes an Error (or null on success) as the first argument;
        // handing it to context.done marks the invocation as failed or successful
        context.done(error, 'Process complete!');
    });

    // Log process stdout and stderr
    child.stdout.on('data', console.log);
    child.stderr.on('data', console.error);
}

With all the files finalized and ready, your folder structure should look like this:

LambdaRuby

+– bin

+– bin.real

+– info

— lambdaRuby.rb

+– lib

— ruby.js

The final step before the deployment is to zip the package into an archive to be uploaded to Lambda. Call the package ruby.zip.

On a Linux or OSX machine, run the following command:

zip -r ruby.zip ./

Copy your zip file from the instance so you can upload it. To copy out the archive, use the following command from your local machine (Linux or OSX), or use a tool such as WinSCP on Windows.

scp -i RubyLambda.pem ec2-user@<public-DNS-of-your-EC2-instance>:~/LambdaRuby/ruby.zip .

Common steps to install

With the package done, you are ready to deploy the PHP, Go, or Ruby runtime into Lambda.

Log in to the AWS Management Console and navigate to Lambda; make sure that the region matches the one for which you selected the AMI in the preparation step.

For simplicity, I’ve used PHP as an example for the deployment; however, the steps below are the same for Go and Ruby.

Creating the Lambda function

Choose Create a Lambda function, and then choose Skip to bypass the blueprint selection. Fill in the following fields and upload your previously created archive.

The most important areas are:

  • Name: The name to give your Lambda function
  • Runtime: Node.js
  • Lambda function code: Select the zip file created in the PHP, Go, or Ruby section, such as php.zip, go.zip, or ruby.zip
  • Handler: php.handler (as in the code, the entry function is called handler and the file is php.js). If you used the file names from the Go and Ruby sections, use the format [js file name without .js].handler, for example, go.handler or ruby.handler
  • Role: Choose Basic Role if you have not yet created one, and create a role for your Lambda function execution

Choose Next, Create function to continue to testing.
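
If you prefer to script this step, the console configuration above maps roughly to a single AWS CLI call. The following sketch uses placeholder values (the function name, the Node.js runtime identifier, and the role ARN all depend on your account, region, and the runtimes currently available), shown here for the PHP package:

$ aws lambda create-function \
    --function-name phpLambda \
    --runtime nodejs \
    --handler php.handler \
    --zip-file fileb://php.zip \
    --role arn:aws:iam::123456789012:role/lambda-basic-execution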

Testing the Lambda function

To test the Lambda function, choose Test in the upper right corner, which displays a sample event with three top-level attributes.

Feel free to add more, or simply choose Save and test to see that your function has executed properly.
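
For reference, the default sample event has three top-level attributes, so the Go and Ruby examples above should log 3 (the Go example also prints its welcome line first). Assuming the default key names, the test event looks similar to this:

{
    "key3": "value3",
    "key2": "value2",
    "key1": "value1"
}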

Conclusion

In this post, we outlined three different ways to create scripting language runtimes for Lambda: compiling PHP against the Lambda runtime so that scripts can be executed, compiling a native binary as with Go, and packaging a portable interpreter and its dependencies as with Ruby. We hope you enjoyed the ideas, found the hidden gems, and are now ready to create some pretty hefty projects in your favorite language, enjoying serverless, Amazon Kinesis, and API Gateway along the way.

If you have questions or suggestions, please comment below.

Androguard – Reverse Engineering & Malware Analysis For Android

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/6-3ScpU6zF8/

Androguard is a toolkit built in Python which provides reverse engineering and malware analysis for Android. It’s built to examine Dex/Odex (Dalvik virtual machine) files (.dex) (disassembly, decompilation), APK (Android application) files (.apk), Android’s binary XML (.xml), and Android Resources (.arsc). Androguard is available for Linux/OSX/Windows…

Read the full post at darknet.org.uk

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/building-end-to-end-continuous-delivery-and-deployment-pipelines-in-aws-and-teamcity/

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!

1. Continuous integration server setup using TeamCity

Choose Launch Stack to launch an AWS CloudFormation stack that sets up a TeamCity server. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials. This stack provides an automated way to set up a TeamCity server based on the instructions here. You can download the template used for this setup from here.

The CloudFormation template does the following:

  1. Installs and configures the TeamCity server and its dependencies in Linux.
  2. Installs the AWS CodePipeline plugin for TeamCity.
  3. Installs a sample application with build configurations.
  4. Installs PHP meta-runners required to build the sample application.
  5. Redirects TeamCity port 8111 to 80.

Choose the AWS region where the TeamCity server will be hosted. For this demo, choose US East (N. Virginia).

Select a region

On the Select Template page, choose Next.

On the Specify Details page, do the following:

  1. In Stack name, enter a name for the stack. The name must be unique in the region in which you are creating the stack.
  2. In InstanceType, choose the instance type that best fits your requirements. The default value is t2.medium.

Note: The default instance type exceeds what’s included in the AWS Free Tier. If you use t2.medium, there will be charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

  1. In KeyName, choose the name of your Amazon EC2 key pair.
  2. In SSHLocation, enter the IP address range that can be used to connect through SSH to the EC2 instance. SSH and HTTP access is limited to this IP address range.

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 for a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.

Specify Details

Choose Next.

Although it’s optional, on the Options page, type TeamCityServer for the instance name. This is the name used in the CloudFormation template for the stack. It’s a best practice to name your instance, because it makes it easier to identify or modify resources later on.

Choose Next.

On the Review page, choose Create. It will take several minutes for AWS CloudFormation to create the resources for you.

Review

When the stack has been created, you will see a CREATE_COMPLETE message on the Overview tab in the Status column.

Events

You have now successfully created a TeamCity server. To access the server, on the EC2 Instance page, choose Public IP for the TeamCityServer instance.

Public DNS

On the TeamCity First Start page, choose Proceed.

TeamCity First Start

Although an internal database based on the HSQLDB database engine can be used for evaluation, TeamCity strongly recommends that you use an external database as a back-end TeamCity database in a production environment. An external database provides better performance and reliability. For more information, see the TeamCity documentation.

On the Database connection setup page, choose Proceed.

Database connection setup

The TeamCity server will start, which can take several minutes.

TeamCity is starting

Review and Accept the TeamCity License Agreement, and then choose Continue.

Next, create an Administrator account. Type a user name and password, and then choose Create Account.

You can navigate to the demo project from Projects in the top-left corner.

Projects

Note: You can create a project from a repository URL (the option used in this demo), or you can connect to your managed Git repositories, such as GitHub or BitBucket. The demo app used in this example can be found here.

We have already created a sample project configuration. Under Build, choose Edit Settings, and then review the settings.

Demo App

Choose Build Step: PHP – PHPUnit.

Build Step

The fields on the Build Step page are already configured.

Build Step

Choose Run to start the build.

Run Test

To review the tests that are run as part of the build, choose Tests.

Build

Build

You can view any build errors by choosing Build log from the same drop-down list.

Now that we have a successful build, we will use AWS CodeDeploy to set up a continuous deployment pipeline.

2. Continuous deployment using AWS CodeDeploy

Choose Launch Stack to launch an AWS CloudFormation stack that will use AWS CodeDeploy to set up a sample deployment pipeline. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

You can download the master template used for this setup from here. The template nests two CloudFormation templates to execute all dependent stacks cohesively.

  1. Template 1 creates a fleet of up to three EC2 instances (with a base operating system of Windows or Linux), associates an instance profile, and installs the AWS CodeDeploy agent. The CloudFormation template can be downloaded from here.
  2. Template 2 creates an AWS CodeDeploy deployment group and then installs a sample application. The CloudFormation template can be downloaded from here.

Choose the same AWS region you used when you created the TeamCity server (US East (N. Virginia)).

Note: The templates contain Amazon Machine Image (AMI) mappings for us-east-1, us-west-2, eu-west-1, and ap-southeast-2 only.

On the Select Template page, choose Next.

On the Specify Details page, in Stack name, type a name for the stack. In the Parameters section, do the following:

  1. In AppName, you can use the default, or you can type a name of your choice. The name must be between 2 and 15 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.
  2. In DeploymentGroupName, you can use the default, or you can type a name of your choice. The name must be between 2 and 25 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.

  3. In InstanceType, choose the instance type that best fits the requirements of your application.
  4. In InstanceCount, type the number of EC2 instances (up to three) that will be part of the deployment group.
  5. For Operating System, choose Linux or Windows.
  6. Leave TagKey and TagValue at their defaults. AWS CodeDeploy will use this tag key and value to locate the instances during deployments. For information about Amazon EC2 instance tags, see Working with Tags Using the Console.
  7. In S3Bucket and S3Key, type the bucket name and S3 key where the application is located. The default points to a sample application that will be deployed to instances in the deployment group. Based on what you selected in the OperatingSystem field, use the following values.
    Linux:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Linux.zip
    Windows:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Windows.zip
  8. In KeyName, choose the name of your Amazon EC2 key pair.
  9. In SSHLocation, enter the IP address range that can be used to connect through SSH/RDP to the EC2 instance.

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 for a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.

Follow the prompts, and then choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. Review the other settings, and then choose Create.

It will take several minutes for CloudFormation to create all of the resources on your behalf. The nested stacks will be launched sequentially. You can view progress messages on the Events tab in the AWS CloudFormation console.

You can see the newly created application and deployment groups in the AWS CodeDeploy console.

To verify that your application was deployed successfully, navigate to the DNS address of one of the instances.

Now that we have successfully created a deployment pipeline, let’s integrate it with AWS CodePipeline.

3. Building a delivery pipeline using AWS CodePipeline

We will now create a delivery pipeline in AWS CodePipeline with the TeamCity AWS CodePipeline plugin.

  1. Build a new pipeline with Source and Deploy stages using AWS CodePipeline.
  2. Create a custom action for the TeamCity Build stage.
  3. Create an AWS CodePipeline action trigger in TeamCity.
  4. Create a Build stage in the delivery pipeline for TeamCity.
  5. Publish the build artifact for deployment.

Step 1: Build a new pipeline with Source and Deploy stages using AWS CodePipeline.

In this step, we will create an Amazon S3 bucket to use as the artifact store for this pipeline.

  1. Install and configure the AWS CLI.
  2. Create an Amazon S3 bucket that will host the build artifact. Replace account-number with your AWS account number in the following steps.
    $ aws s3 mb s3://demo-app-build-account-number
  3. Enable bucket versioning:
    $ aws s3api put-bucket-versioning --bucket demo-app-build-account-number --versioning-configuration Status=Enabled
  4. Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.
  • OSX/Linux:
    $ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip | aws s3 cp - s3://demo-app-build-account-number/Sample_Linux_App.zip
  • Windows:
    $ wget -q https://s3.amazonaws.com/teamcity-demo-app/Sample_Windows_App.zip
    $ aws s3 cp ./Sample_Windows_App.zip s3://demo-app-build-account-number/Sample_Windows_App.zip

Note: You can use AWS CloudFormation to perform these steps in an automated way. When you choose Launch Stack, this template will be used. Use the following commands to extract the Amazon S3 bucket name, enable versioning on the bucket, and copy over the sample artifact.

$ export BUCKET_NAME="$(aws cloudformation describe-stacks --stack-name "S3BucketStack" --output text --query 'Stacks[0].Outputs[?OutputKey==`S3BucketName`].OutputValue')"
$ aws s3api put-bucket-versioning --bucket $BUCKET_NAME --versioning-configuration Status=Enabled && wget https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip && aws s3 cp ./Sample_Linux_App.zip s3://$BUCKET_NAME

You can create a pipeline by using a CloudFormation stack or the AWS CodePipeline console.

Option 1: Use AWS CloudFormation to create a pipeline

We’re going to create a two-stage pipeline that uses a versioned Amazon S3 bucket and AWS CodeDeploy to release a sample application. (You can use an AWS CodeCommit repository or a GitHub repository as the source provider instead of Amazon S3.)

Choose Launch Stack to launch an AWS CloudFormation stack that sets up a new delivery pipeline using the application and deployment group created in an earlier step. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

Choose the US East (N. Virginia) region, and then choose Next.

Leave the default options, and then choose Next.

On the Options page, choose Next.

Select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. This will create the delivery pipeline in AWS CodePipeline.

Option 2: Use the AWS CodePipeline console to create a pipeline

On the Create pipeline page, in Pipeline name, type a name for your pipeline, and then choose Next step.

Depending on where your source code is located, you can choose Amazon S3, AWS CodeCommit, or GitHub as your Source provider. The pipeline will be triggered automatically upon every check-in to your GitHub or AWS CodeCommit repository or when an artifact is published into the S3 bucket. In this example, we will be accessing the product binaries from an Amazon S3 bucket.

Choose Next step.

Amazon S3 location: s3://demo-app-build-account-number/Sample_Linux_App.zip (or) Sample_Windows_App.zip

Note: AWS CodePipeline requires a versioned S3 bucket for source artifacts. Enable versioning for the S3 bucket where the source artifacts will be located.

On the Build page, choose No Build. We will update the build provider information later on.

For Deployment provider, choose CodeDeploy. For Application name and Deployment group, choose the application and deployment group we created in the deployment pipeline step, and then choose Next step.

An IAM role will provide the permissions required for AWS CodePipeline to perform the build actions and service calls.  If you already have a role you want to use with the pipeline, choose it on the AWS Service Role page. Otherwise, type a name for your role, and then choose Create role.  Review the predefined permissions, and then choose Allow. Then choose Next step.

 

For information about AWS CodePipeline access permissions, see the AWS CodePipeline Access Permissions Reference.

Review your pipeline, and then choose Create pipeline.

This will trigger AWS CodePipeline to execute the Source and Beta steps. The source artifact will be deployed to the AWS CodeDeploy deployment groups.

Now you can access the same DNS address of the AWS CodeDeploy instance to see the updated deployment. You will see the background color has changed to green and the page text has been updated.

We have now successfully created a delivery pipeline with two stages and integrated the deployment with AWS CodeDeploy. Now let’s integrate the Build stage with TeamCity.

Step 2: Create a custom action for TeamCity Build stage

AWS CodePipeline includes a number of actions that help you configure build, test, and deployment resources for your automated release process. TeamCity is not included in the default actions, so we will create a custom action and then include it in our delivery pipeline. TeamCity’s CodePipeline plugin will also create a job worker that will poll AWS CodePipeline for job requests for this custom action, execute the job, and return the status result to AWS CodePipeline.

TeamCity’s custom action type (in the Build and Test categories) can be integrated with AWS CodePipeline, and is similar to the Jenkins and Solano CI custom actions.

The TeamCity AWS CodePipeline plugin is already installed on the TeamCity server we set up earlier. To learn more about installing TeamCity plugins, see install the plugin. We will now create a custom action to integrate TeamCity with AWS CodePipeline using a custom-action JSON file.

Download this file locally: https://github.com/JetBrains/teamcity-aws-codepipeline-plugin/blob/master/custom-action.json

Open a terminal session (Linux, OS X, Unix) or command prompt (Windows) on a computer where you have installed the AWS CLI. For information about setting up the AWS CLI, see here.

Use the AWS CLI to run the aws codepipeline create-custom-action-type command, specifying the name of the JSON file you just downloaded.

For example, to create a build custom action:

$ aws codepipeline create-custom-action-type --cli-input-json file://teamcity-custom-action.json

This should result in an output similar to this:

{
    "actionType": {
        "inputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "actionConfigurationProperties": [
            {
                "description": "The expected URL format is http[s]://host[:port]",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "TeamCityServerURL"
            },
            {
                "description": "Corresponding TeamCity build configuration external ID",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "BuildConfigurationID"
            },
            {
                "description": "Must be unique, match the corresponding field in the TeamCity build trigger settings, satisfy regular expression pattern: [a-zA-Z0-9_-]+] and have length <= 20",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": true,
                "name": "ActionID"
            }
        ],
        "outputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "id": {
            "category": "Build",
            "owner": "Custom",
            "version": "1",
            "provider": "TeamCity"
        },
        "settings": {
            "entityUrlTemplate": "{Config:TeamCityServerURL}/viewType.html?buildTypeId={Config:BuildConfigurationID}",
            "executionUrlTemplate": "{Config:TeamCityServerURL}/viewLog.html?buildId={ExternalExecutionId}&tab=buildResultsDiv"
        }
    }
}

Before you add the custom action to your delivery pipeline, make the following changes to the TeamCity build server. You can access the server by opening the Public IP of the TeamCityServer instance from the EC2 Instance page.

In TeamCity, choose Projects. Under Build Configuration Settings, choose Version Control Settings. You need to remove the version control trigger here so that the TeamCity build server will be triggered during the Source stage in AWS CodePipeline. Choose Detach.

Step 3: Create a new AWS CodePipeline action trigger in TeamCity

Now add a new AWS CodePipeline trigger in your build configuration. Choose Triggers, and then choose Add new trigger.

From the drop-down menu, choose AWS CodePipeline Action.

In the AWS CodePipeline console, choose the region in which you created your delivery pipeline. Enter your access key credentials, and for Action ID, type a unique name. You will need this ID when you add a TeamCity Build stage to the pipeline.

Step 4: Create a new Build stage in the delivery pipeline for TeamCity

Add a stage to the pipeline and name it Build.

From the drop-down menu, choose Build. In Action name, type a name for the action. In Build provider, choose TeamCity, and then choose Add action.

For TeamCity Action Configuration, use the following:

TeamCityServerURL:  http://<Public DNS address of the TeamCity build server>[:port]

BuildConfigurationID: In your TeamCity project, choose Build. You’ll find this ID (AwsDemoPhpSimpleApp_Build) under Build Configuration Settings.

ActionID: In your TeamCity project, choose Build. You’ll find this ID under Build Configuration Settings. Choose Triggers, and then choose AWS CodePipeline Action.

Next, choose input and output artifacts for the Build stage, and then choose Add action.

We will now publish a new artifact to the Amazon S3 artifact bucket we created earlier, so we can see the deployment of a new app and its progress through the delivery pipeline. The demo app used in this artifact can be found here for Linux or here for Windows.

Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.

OSX/Linux:

$ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/PhpArtifact.zip | aws s3 cp - s3://demo-app-build-account-number/PhpArtifact.zip

Windows:

$ wget -q https://s3.amazonaws.com/teamcity-demo-app/WindowsArtifact.zip
$ aws s3 cp ./WindowsArtifact.zip s3://demo-app-build-account-number/WindowsArtifact.zip

From the AWS CodePipeline dashboard, under delivery-pipeline, choose Edit.

Edit Source stage by choosing the edit icon on the right.

Amazon S3 location:

Linux: s3://demo-app-build-account-number/PhpArtifact.zip

Windows: s3://demo-app-build-account-number/WindowsArtifact.zip

Under Output artifacts, make sure My App is displayed for Output artifact #1. This will be the input artifact for the Build stage.

The output artifact of the Build stage should be the input artifact of the Beta deployment stage (in this case, MyAppBuild).

Choose Update, and then choose Save pipeline changes. On the next page, choose Save and continue.

Step 5: Publish the build artifact for deployment

Step (a)

In TeamCity, on the Build Steps page, for Runner type, choose Command Line, and then add the following custom script to copy the source artifact to the TeamCity build checkout directory.

Note: This step is required only if your AWS CodePipeline source provider is either AWS CodeCommit or Amazon S3. If your source provider is GitHub, this step is redundant, because the artifact is copied over automatically by the TeamCity AWS CodePipeline plugin.

In Step name, enter a name for the Command Line runner to easily distinguish the context of the step.

Syntax:

$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %teamcity.build.checkoutDir%
$ unzip *.zip -d %teamcity.build.checkoutDir%
$ rm -rf %teamcity.build.checkoutDir%/*.zip

For Custom script, use the following commands:

cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %teamcity.build.checkoutDir%
unzip *.zip -d %teamcity.build.checkoutDir%
rm -rf %teamcity.build.checkoutDir%/*.zip

Step (b):

For Runner type, choose Command Line runner type, and then add the following custom script to copy the build artifact to the output folder.

For Step name, enter a name for the Command Line runner.

Syntax:

$ mkdir -p %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/
$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/

For Custom script, use the following commands:

$ mkdir -p %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/
$ cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/

In Build Steps, choose Reorder build steps to ensure that the step that copies the source artifact is executed before the PHP – PHPUnit step.

Drag and drop Copy Source Artifact To Build Checkout Directory to make it the first build step, and then choose Apply.

Navigate to the AWS CodePipeline console. Choose the delivery pipeline, and then choose Release change.

Choose Release on the next prompt.

The most recent change will run through the pipeline again. It might take a few moments before the status of the run is displayed in the pipeline view.

Here is what you’d see after AWS CodePipeline runs through all of the stages in the pipeline:

Let’s access one of the instances to see the new application deployment on the EC2 Instance page.

If your base operating system is Windows, accessing the public DNS address of one of the AWS CodeDeploy instances will result in the following page.

Windows: http://public-dns/

If your base operating system is Linux, when we access the public DNS address of one of the AWS CodeDeploy instances, we will see the following test page, which is the sample application.

Linux: http://public-dns/www/index.php

Congratulations! You’ve created an end-to-end deployment and delivery pipeline (from source code, to build, to deployment) in a fully automated way.

Summary

In this post, you learned how to build an end-to-end delivery and deployment pipeline on AWS. Specifically, you learned how to build an end-to-end, fully automated, continuous integration, continuous deployment, and delivery pipeline for your application, at scale, using AWS deployment and management services. You also learned how AWS CodePipeline can be easily extended through the use of custom triggers to integrate other services like TeamCity.

If you have questions or suggestions, please leave a comment below.

Attackers Infect Transmission Torrent Client With OS X Malware

Post Syndicated from Andy original https://torrentfreak.com/attackers-infect-transmission-torrent-client-with-os-x-malware-160831/

Last month, researchers at IT security company ESET reported on a new type of OS X malware. Called OSX/Keydnap, the malicious software was designed to steal the content of OS X’s keychain while inserting a backdoor.

In a detailed blog post on the topic, ESET said that it was unclear how machines became infected with OSX/Keydnap but speculated that it might be through attack vectors such as spam, downloads from untrusted websites, “or something else”. Other things were certain, however.

“What we know is that a downloader component is distributed in a .zip file. The archive file contains a Mach-O executable file with an extension that looks benign, such as .txt or .jpg. However, the file extension actually contains a space character at the end, which means double-clicking the file in Finder will launch it in Terminal and not Preview or TextEdit,” the researchers wrote.

Now, several weeks later, it transpires that some BitTorrent users have been exposed to OSX/Keydnap. While some might presume that could have taken place through a suspect download from an infected torrent (also possible), this particular attack actually took place from a trusted source.

ESET reports that a recompiled version of the open source BitTorrent client Transmission was offered for download on the software’s official website. Somehow, that software had already become infected with OSX/Keydnap. Once informed, the Transmission team were quick to act.

“Literally minutes after being notified by ESET, the Transmission team removed the malicious file from their web server and launched an investigation to identify how this happened,” the researchers explain.

“According to the signature, the application bundle was signed on August 28th, 2016, but it seems to have been distributed only the next day.”

Following an investigation by the Transmission team, we now know how the infected version came to be offered to the public and for how long. It all began with some kind of intrusion by an as-yet-unnamed attacker.

“It appears that on or about August 28, 2016, unauthorized access was gained to our website server,” the team said in a statement.

“The official Mac version of Transmission 2.92 was replaced with an unauthorized version that contained the OSX/Keydnap malware. The infected file was available for download somewhere between a few hours and less than a day.”

The team says that the infected file was removed from the server immediately upon its discovery, potentially just a few hours after it was placed there. While any period is too long, the length of time the download was made available to the public should help to limit the impact of the malware.

For anyone concerned that they still might have been infected during that period, ESET offers the following advice to check for infection.

“We advise anyone who downloaded Transmission v2.92 between August 28th and August 29th, 2016, inclusively, to verify if their system is compromised by testing the presence of any of the following file or directory,” the company writes.

[Screenshot: the list of files and directories whose presence indicates a Keydnap infection]

“If any of them exists, it means the malicious Transmission application was executed and that Keydnap is most likely running. Also note that the malicious disk image was named Transmission2.92.dmg while the legitimate one is Transmission-2.92.dmg (notice the hyphen).”

The Transmission team has also published a detailed guide for anyone concerned about infection. They’re also taking additional steps to limit the chances of this happening again.

“To help prevent future incidents, we have migrated the website and all binary files from our current servers to GitHub. Other services, which are currently unavailable, will be migrated to new servers in the coming days,” the team says.

“As an added precaution, we will be hosting the binaries and the website (including checksums) in two separate repositories.”

Uninfected versions of Transmission can be downloaded here. Versions for operating systems other than OS X are not affected.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Encrypt all the things!

Post Syndicated from Mark Henderson original http://blog.serverfault.com/2016/02/09/encrypt-all-the-things/

Let’s talk about encryption. Specifically, HTTPS encryption. If you’ve been following any of the U.S. election debates, encryption is a topic that the politicians want to talk about – but not in the way that most of us would like. And it’s not just exclusive to the U.S. – the U.K. is proposing banning encrypted services, and Australia is similar. If you’re really into it, you can get information about most countries’ cryptography laws.

But one thing is very clear – if your traffic is not encrypted, it’s almost certainly being watched and monitored by someone in a government somewhere – this is the well publicised reason behind governments opposing widespread encryption. The NSA’s PRISM program is the most well known, which is also contributed to by the British and Australian intelligence agencies.

Which is why when the EFF announced their Let’s Encrypt project (in conjunction with Mozilla, Cisco, Akamai and others), we thought it sounded like a great idea.

The premise is simple:

  • Provide free encryption certificates
  • Make renewing certificates and installing them on your systems easy
  • Keep certificates secure by installing them properly and following issuance best practices
  • Be transparent. Issued and revoked certificates are publically auditable
  • Be open. Make a platform and a standard that anyone can use and build on.
  • Benefit the internet through cooperation – don’t let one body control access to the service

Let’s Encrypt explain this elegantly themselves:

The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention.

The process goes a bit like this:

  1. Get your web server up and running, as per normal, on HTTP.
  2. Install the appropriate Let’s Encrypt tool for your platform. Currently there is ACME protocol support for:
    • Apache (Let’s Encrypt)
    • Nginx (Let’s Encrypt — experimental)
    • HAProxy (janeczku)
    • IIS (ACMESharp)
  3. Run the tool. It will generate a Certificate Signing Request for your domain, submit it to Let’s Encrypt, and then give you options for validating the ownership of your domain. The easiest method of validating ownership is one that the tool can do automatically, which is creating a file with a pre-determined, random file name, that the Let’s Encrypt server can then validate.
  4. The tool then receives the valid certificate from the Let’s Encrypt Certificate Authority, installs it onto your systems, and configures your web server to use the certificate.
  5. You need to renew the certificate in fewer than 90 days – so you then need to set up a scheduled task (cron job for Linux, scheduled task for Windows) to execute the renewal command for your platform (see your tool’s documentation for this).

And that’s it. No copy/pasting your CSR into poorly built web interfaces, or waiting for the email to confirm the certificate to come through, or hand-building PEM files with certificate chains. No faxing documents to numbers in foreign countries. No panicking at the last minute because you forgot to renew your certificate. Free, unencumbered, automatically renewed, SSL certificates for life.
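
As a concrete sketch, assuming the Apache tool on a Debian-based server and a domain you control (example.com is a placeholder, as is the /opt/letsencrypt install path, and the exact renew subcommand depends on your client version), initial issuance plus automated renewal could look something like this:

# Obtain and install a certificate for the Apache vhost serving example.com
# (each -d flag becomes a Subject Alternative Name on the certificate)
$ ./letsencrypt-auto --apache -d example.com -d www.example.com

# Crontab entry: attempt renewal twice a day; the certificate is only
# re-issued once it is close to its 90-day expiry
0 */12 * * * /opt/letsencrypt/letsencrypt-auto renew --quiet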

Who Let’s Encrypt is for

People running their own web servers:

  • You could be a small business running Windows SBS server
  • You could be a startup offering a Software as a Service platform
  • You could be a local hackerspace running a forum
  • You could be a high school student with a website about making clocks

People with a registered, publically accessible domain name:

  • Let’s Encrypt requires some form of domain name validation, whether it be a file it can probe over HTTP to verify your ownership of the domain name, or a DNS record it can verify
  • Certificate Authorities no longer issue certificates for “made-up” internal domain names or reserved IP addresses

Who Let’s Encrypt is not for

Anyone on shared web hosting

  • Let’s Encrypt requires the input of the server operator. If you are not running your own web server, then this isn’t for you.

Anyone who wants to keep the existence of their certificates a secret

  • Every certificate issued by Let’s Encrypt is publically auditable, which means that if you don’t want anyone to know that you have a server on a given domain, then don’t use Let’s Encrypt
  • If you have sensitive server names (such as finance.corp.example.com), even though it’s firewalled, you might not want to use Let’s Encrypt

Anyone who needs a wildcard certificate

  • Let’s Encrypt does not issue wildcard certificates. They don’t need to – they offer unlimited certificates, and you can even specify multiple Subject Alternative Names on your certificate signing request
  • However, you may still need a wildcard if:
    • You have a lot of domains and can’t use SNI (I’m looking at you, Android 2.x, of which there is still a non-trivial number of users)
    • You have systems that require a wildcard certificate (some unified communications systems do this)

Anyone who needs a long-lived certificate

  • Let’s Encrypt certificates are only valid for 90 days, and must be renewed prior to then. If you need a long-lived certificate, then Let’s Encrypt is not for you

Anyone who wants Extended Validation

  • Let’s Encrypt only validates that you have control over a given domain. It does not validate your identity or business or anything of that nature. As such you cannot get the green security bar that displays in the browser for places like banks or PayPal.

Anyone who needs their certificate to be trusted by really old things

  • If you have devices from 1997 that only trust 1997’s list of CAs, then you’re going to have a bad time
  • However, this is likely the least of your troubles
  • Let’s Encrypt is trusted by:
    • Android version 2.3.6 and above, released 2011-09-02
    • FireFox version 2.0 and above, released 2006-10-24
    • Internet Explorer on Windows Vista or above (For Windows XP, see this issue), released 2007-01-30
    • Google Chrome on Windows Vista or above (For Windows XP, see this issue), released 2008-08-02
    • Safari on OSX v4.0 or above (Mac OSX 10.4 or newer), released 2005-04-29
    • Safari on iOS v3.1 or above, released 2010-02-02

However, these are mostly edge cases, and if you’re reading this blog post, then you will know if they apply to you or not.

So let’s get out there and encrypt!

The elephant in the room

“But hang on!”, I hear the eagle-eyed reader say. “Stack Overflow is not using SSL/TLS!” you say. And you would be partly correct.

We do offer SSL on all our main sites. Go ahead, try it:

https://stackoverflow.com/
https://serverfault.com/
https://meta.stackexchange.com/

However, we have some slightly more complicated issues at hand. For details about our issues, see the great blog post by Nick Craver. It’s from 2013 and we have fixed many of the issues that we were facing back then, but there is still some way to go.

However, all our signup and login pages are delivered over HTTPS, and you can switch to HTTPS manually if you would prefer – for most sites.

Let’s get started

So how do you get started? If you have a debian-based Apache server, then grab the Let’s Encrypt tool and go!

If you’re on a different platform, then check the list of pre-build clients above, or take a look at a recent comparison of the most common *nix scripts.

 

Addendum: Michael Hampton pointed out to me that Fedora ships the Let’s Encrypt package as a part of their distribution, and that it is also available in EPEL if you’re on RedHat, CentOS or another distribution that can make use of EPEL packages.

Yahoo Daily Fantasy: Everyone’s Invited—and We Mean “Everyone”

Post Syndicated from davglass original https://yahooeng.tumblr.com/post/129855575131

imbrianj:

Photo of a Yahoo accessibility specialist assisting a colleague with keyboard navigation. The Fantasy Sport logo is superimposed.
When we’re building products at Yahoo we get really excited about our work. No surprise. We envision that millions of people are going to love our products and be absolutely delighted when using them.

With our new Yahoo Sports Daily Fantasy game, we wanted to include everyone.

We support all major modern browsers on desktop and mobile as well as native apps. However, that, in and of itself, won’t ensure that the billion individuals around the world who use assistive technology will be able to manage and play our fantasy games. One billion. That’s a lot of everyone.

Daily Fantasy baked in accessibility. Baked in. Important point. In order to ensure that everyone is able to compete in our games at the same level, accessibility can’t be an add-on.

Check out our pages. Title and ARIA attributes. Structured headers. Brilliant labels. TabIndex and other attributes that are convenience features for many of us and a necessity for a great experience for others—especially our assistive technology users. There are a lot of them and if we work to make our pages and apps accessible, well, we figure, there can be a lot more of them using Daily Fantasy.

Think about it: whether you’re a sighted user and just need to hover over an icon to get the full description of what it indicates—or a totally blind user who would otherwise miss that valuable content—it makes sense to work on making our game as enjoyable and as easy to use as possible for everyone.

So, the technical bits. What specific things did we do to ensure good accessibility on Daily Fantasy?

A properly accessible site starts on a foundation of good, semantic markup. We work to ensure that content is presented in the markup in the order that makes the most sense, then worry about how to style it to look as we desire. The markup we choose is also important: while <div> and <span> are handy wrappers, we try to make sure the context is appropriate. Should this player info be a <dl>? Should this alert be a <p>?

One of the biggest impacts to screen readers is the appropriate use of header tags and well-written labels. With these a user can quickly navigate to the appropriate part of the page based on the headers presented—allowing them to skip some of the navigation stuff that sighted users take for granted—and know exactly what they can do when, for example, they need to subtract or add a player to their roster. When content changes, we make use of ARIA attributes. With a single-page web app (that does not do a page refresh as you navigate) we make use of ARIA’s role=“alert” to give a cue to users what change has occurred. Similarly, we’ve tried to ensure some components, such as our tab selectors and sliders, are compatible and present information that is as helpful as possible. With our scrolling table headers, we had to use ARIA to “ignore” them, as it’d be redundant for screen readers as the natural <th> elements were intentionally left in place but visibly hidden.
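
As an illustration of the live-region technique described above (the element id and the message are invented for this sketch and are not taken from the Daily Fantasy code), announcing a roster change to assistive technology might look like this:

// Markup rendered once in the page: <p id="roster-alert" role="alert"></p>
const alertRegion = document.getElementById('roster-alert');

function announce(message) {
    // Updating the content of an element with role="alert" prompts screen
    // readers such as NVDA to speak the new text without moving focus
    alertRegion.textContent = message;
}

announce('Player added to your roster.');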

Although we have done some testing with OSX and VoiceOver, our primary testing platform is NVDA on Windows using Chrome. NVDA’s support has been good – and, it’s free and open source. Even if you’re on OSX, you can install a free Windows VM for testing thanks to a program Microsoft has set up (thank you!). These free tools make it so anyone is able to ensure a great experience for all users:

https://dev.modern.ie/tools/vms/mac/
https://www.virtualbox.org/wiki/Downloads
http://www.nvaccess.org/download/
http://www.google.com/chrome/

Accessibility should not be considered a competitive advantage. It’s something everyone should strive for and something we should all be supporting. If you’re interested in participating in the conversation, give us a tweet, reblog, join in the forum conversations or drop us a line! We share your love of Daily Fantasy games and want to make sure everyone’s invited.

If you have a suggestion on what could improve our product, please let us know! For Daily Fantasy we personally lurk in some of the more popular forums and have gotten some really great feedback from their users. It’s not uncommon to read a comment and have a fix in to address it within hours.

Did I mention that we are excited about our work and delighting users—everyone?

– Gary, Darren and Brian

Why Greet Apple’s Swift 2.0 With Open Arms?

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2015/06/15/apple-is-not-our-friend.html

Apple announced last week that its Swift programming language — a
currently fully proprietary software successor to Objective C — will
probably be partially released under an OSI-approved license eventually.
Apple explicitly stated though that such released software will not be
copylefted. (Apple’s pathological hatred of copyleft is reasonably well
documented.) Apple’s announcement remained completely silent on patents,
and we should expect the chosen non-copyleft license
will not contain a patent grant.
(I’ve explained at
great length in the past why software patents are a particularly dangerous
threat to programming language infrastructure
.)

Apple’s dogged pursuit for non-copyleft replacements for copylefted
software is far from new. For example, Apple has worked to create
replacements for Samba so they need not ship Samba in OSX. But, their
anti-copyleft witch hunt goes back much further. It began
when Richard
Stallman himself famously led the world’s first GPL enforcement effort
against NeXT
, and Objective-C was liberated. For a time, NeXT and
Apple worked upstream with GCC to make Objective-C better for the
community. But, that whole time, Apple was carefully plotting its escape
from the copyleft world. Fortuitously, Apple eventually discovered a
technically brilliant (but sadly non-copylefted) research programming
language and compiler system called LLVM. Since then, Apple has sunk
millions of dollars into making LLVM better. On the surface, that seems
like a win for software freedom, until you look at the bigger picture:
their goal is to end copyleft compilers. Their goal is to pick and choose
when and how programming language software is liberated. Swift is not a
shining example of Apple joining us in software freedom; rather, it’s a
recent example of Apple’s long-term strategy to manipulate open source
— giving our community occasional software freedom on Apple’s own
terms. Apple gives us no bread but says let them eat cake
instead.

Apple’s got PR talent. They understand that merely announcing the
possibility of liberating proprietary software gets press. They know that
few people will follow through and determine how it went. Meanwhile, the
standing story becomes: Wait, didn’t Apple open source Swift
anyway?. Already, that false soundbite’s grip strengthens, even though
the answer remains a resounding No!. However, I suspect that
Apple will probably meet most
of their
public pledges
. We’ll likely see pieces of Swift 2.0 thrown over the
wall. But the best stuff will be kept proprietary. That’s already happening
with LLVM, anyway; Apple already ships a no-source-available fork of
LLVM.

Thus, Apple’s announcement incident hasn’t happened in a void. Apple
didn’t just discover open source after years of neutrality on the topic.
Apple’s move is calculated, which
led various
industry pundits like O’Grady and Weinberg to ask hard questions (some of
which are similar to mine)
. Yet, Apple’s hype is so good, that
it did
convince one trade association leader
.

To me, Apple’s not-yet-executed move to liberate some of the Swift 2.0
code seems a tactical stunt to win over developers who currently prefer the
relatively more open nature of the Android/Linux platform. While nearly
all the Android userspace applications are proprietary, and GPL violations on
Android devices abound, at least the copyleft license of Linux itself
provides the opportunity to keep the core operating system of Android
liberated. No matter how much Swift code is released, such will never be
true with Apple.

I’m often pointing out
in my recent
talks
how complex and treacherous the Open Source and Free Software
political climate became in the last decade. Here’s a great example: Apple
is a wily opponent, utilizing Open Source (the cooption of Free Software) to
manipulate the press and hoodwink the would-be spokespeople for Linux to
support them. Many of us software freedom advocates have predicted for
years that Free Software unfriendly companies like Apple would liberate
more and more code under non-copyleft licenses in an effort to create
walled gardens of seeming software freedom. I don’t revel in my past
accuracy of such predictions; rather, I feel simply the hefty weight of
Cassandra’s curse.

You’re Living in the Past, Dude!

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/08/05/living-in-the-past.html

At the 2000 Usenix
Technical Conference
(which was the primary “generalist”
conference for Free Software developers in those days), I met Miguel De
Icaza for the third time in my life. In those days, he’d just started
Helix Code (anyone else remember what Ximian used to be called?) and was
still president of the GNOME Foundation. To give you some context:
Bonobo was a centerpiece of new and active GNOME development then.

Out of curiosity and a little excitement about GNOME, I asked Miguel if
he could show me how to get the GNOME 1.2 running on my laptop. Miguel
agreed to help, quickly taking control of the keyboard and frantically
typing and editing my sources.list.

Debian potato was the just-becoming-stable release in those days, and
of course, I was still running potato (this was before
my experiment
with running things from testing began
).

After a few minutes hacking on my keyboard, Miguel realized that I
wasn’t running woody, Debian’s development release. Miguel looked at
me, and said: You aren’t running woody; I can’t make GNOME run on
this thing. There’s nothing I can do for you. You’re living in the
past, dude!. (Those who know Miguel IRL can imagine easily how he’d
sound saying this.)

So, I’ve told that story many times for the last eleven years. I
usually tell it for laughs, as it seems an equal-opportunity humorous
anecdote. It pokes some fun at Miguel, at me, at Debian for its release
cycle, and also at GNOME (which has, since its inception, tried
to never live in the past, dude).

Fact is, though, I rather like living in the past, at least
with regard to my computer setup. By way of desktop GUIs, I
used twm well into the
late 1990s, and used fvwm well into
the early 2000s. I switched to sawfish (then sawmill) during the
relatively brief period when GNOME used it as its default window
manager. When Metacity became the default, I never switched because I’d
configured sawfish so heavily.

In fact, the only actual parts of GNOME 2 that I ever used on a daily
basis have been (a) a small unobtrusive panel, (b) dbus (and its related
services), and (c) the Network Manager applet. When GNOME 3 was
released, I had no plans to switch to it, and frankly I still don’t.

I’m not embarrassed that I consistently live in the past; it’s
sort of the point. GNOME 3 isn’t for me; it’s for people who want their
desktop to operate in new and interesting ways. Indeed, it’s (in many
ways) for the people who are tempted to run OSX because its desktop is
different than the usual, traditional, “desktop metaphor”
experience that had been standard since the mid-1990s.

GNOME 3 just wasn’t designed with old-school Unix hackers in mind.
Those of us who don’t believe a computer is any good until we see a
command line aren’t going to be the early adopters who embrace GNOME 3.
For my part, I’ll actually try to avoid it as long as possible, continue
to run my little GNOME 2 panel and sawfish, until slowly, GNOME 3 will
seep into my workflow the way the GNOME 2 panel and sawfish did
when they were current, state-of-the-art GNOME
technologies.

I hope that other old-school geeks will see this distinction: we’re
past the era when every Free Software project is targeted at us hackers
specifically. Failing to notice this will cause us to ignore the deeper
problem software freedom faces. GNOME Foundation’s Executive Director
(and my good friend), Karen
Sandler
, pointed out in
her OSCON
keynote
something that’s bothered her and me for years: the majority of
computers at OSCON are Apple hardware running OSX. (In fact, I even
noticed Simon Phipps has one
now!) That’s the world we’re living in now. Users who
actually know about “Open Source” are now regularly
enticed to give up software freedom for shiny things.

Yes, as you just read, I can snicker as quickly as any
old-school command-line geek (just as
Linus
Torvalds did earlier this week
) at the pointlessness of wobbly
windows, desktop cubes, and zoom effects. I could also easily give a
treatise on how I can get work done faster, better, and smarter because
I have the technology of years ago that makes every keystroke
matter.

Notwithstanding that, I’d even love to have the same versatility with
GNOME 3 that I have with sawfish. And, if it turns out GNOME 3’s
embedded Javascript engine will give me the same hackability I prefer
with sawfish, I’ll adopt GNOME 3 happily. But, no matter what, I’ll
always be living in the past, because like every other human, I hate
changing anything, unless it’s strictly necessary or it’s my own
creation and derivation. Humans are like that: no matter who you are,
if it wasn’t your idea, you’re always slow to adopt something new and
change old habits.

Nevertheless, there’s actually nothing wrong with living in the
past — I quite like it myself. However, I’d suggest that care
be taken to not admonish those who make a go at creating the future.
(At the risk of making a conclusion that sounds like a time-travel
joke,) don’t forget that their future will eventually
become that very past where I and others would prefer to
live.

Mac OSX 10.7 to include full disk encryption

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/Jy_Sl5uXF2g/mac-osx-107-to-include-full-disk.html

Apple’s recent developer preview announcement for 10.7 notes that it will include “the all new FileVault, that provides high performance full disk encryption for local and external drives, and the ability to wipe data from your Mac instantaneously.” This means that both Windows (BitLocker) and MacOS (FileVault) will have free, OS-integrated full disk encryption.


Proprietary Licenses Are Even Worse Than They Look

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/04/07/proprietary-licenses.html

There are lots of evil things that proprietary software companies might
do. Companies put their own profit above the rights and freedoms of
their users, and to that end, much can be done that subjugates
users. Even as someone who avoids proprietary software, I still read
many proprietary license agreements (mainly to see how bad they are).
I’ve certainly become numb to the constant barrage of horrible
restrictions they place on users. But, sometimes, proprietary licenses
go so far that I’m taken aback by their gratuitous cruelty.

Apple’s licenses are probably the easiest example of proprietary
licensing terms that are well beyond reasonableness. Of course, Apple’s
licenses do the usual things like forbidding users from copying,
modifying, sharing, and reverse engineering the software. But even
worse, Apple also forbids users from running Apple software on any
hardware that is not produced by Apple.

The decoupling of one’s hardware vendor from one’s software vendor was
a great innovation brought about by the PC revolution, in which,
ironically, Apple played a role. Computing history has shown us that when
your software vendor also controls your hardware, you can easily be
“locked in” in ways that make mundane proprietary software
licenses seem almost nonthreatening.

Film image from Tron of the Master Control Program (MCP)

Indeed, Apple has such a good hype machine that they have even
convinced some users that this restrictive policy makes computing
better. In this worldview, the paternalistic vendor will use its
proprietary controls over as many pieces of the technology as possible
to keep the infantile users from doing something that’s “just bad
for them”. The tyrannical
MCP
of Tron comes quickly to my mind.

I’m amazed that so many otherwise Free Software supporters are quite
happy using OSX and buying Apple products, given these kinds of utterly
unacceptable policies. The scariest part, though, is that this practice
isn’t confined to Apple. I’ve been recently reminded that other
companies, such as IBM, do exactly the same thing. As a Free Software advocate, I’m
critical of any company that uses their control of a proprietary
software license to demand that users run that software only on the
original company’s hardware as well. The production and distribution of
mundane proprietary software is bad enough. It’s unfortunate that
companies like Apple and IBM are going the extra mile to treat users
even worse.

Oh Nine Fifteen

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/oh-nine-fifteen.html

Last week I released a test version for the upcoming 0.9.15 release of
PulseAudio. It’s going to be a major one, so here’s a little overview of
what’s new from the user’s perspective.

Flat Volumes

Based on code originally contributed by Marc-André Lureau, we now
support Flat Volumes. The idea behind flat volumes is
inspired by how Windows Vista handles volume control: instead of
maintaining one volume control per application stream plus one device
volume, we fix the device volume automatically to the “loudest”
application stream volume. Sounds confusing? Actually it’s quite the
contrary: it feels pretty natural and easy to use, and it is a big step
towards reducing the number of volume sliders in the audio pipeline
between the application and what you hear.

The flat volumes logic only applies to devices where we know the
actual multiplication factor of the hardware volume slider. That’s
most devices supported by the ALSA kernel drivers except for a few
older devices and some cheap USB hardware that exports invalid
dB information.
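
To illustrate the idea, here is a tiny self-contained sketch of mine (not
code from PulseAudio itself, and the stream volumes are made-up numbers):
the device volume is pinned to the loudest stream, and every stream is
then attenuated in software relative to that.

/* Illustrative sketch only, not PulseAudio source code: the device
 * volume follows the loudest stream, and each stream's remaining
 * attenuation is applied in software relative to that device volume. */
#include <stdio.h>

#define N_STREAMS 3

int main(void) {
    /* Per-stream volumes as a fraction of full scale (made-up values). */
    double stream[N_STREAMS] = { 0.30, 0.75, 0.50 };

    /* Flat volumes: device volume := the loudest stream volume. */
    double device = 0.0;
    for (int i = 0; i < N_STREAMS; i++)
        if (stream[i] > device)
            device = stream[i];

    /* Each stream is then scaled in software relative to the device. */
    for (int i = 0; i < N_STREAMS; i++)
        printf("stream %d: requested %.2f -> software factor %.2f (device volume %.2f)\n",
               i, stream[i], stream[i] / device, device);

    return 0;
}

With the values above, the loudest stream (0.75) gets a software factor
of 1.0, i.e. it is played at the hardware volume unattenuated, which is
exactly why this only works on devices that report trustworthy dB
information.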

On-the-fly Reconfiguration of Devices (aka “S/PDIF Support”)

PulseAudio will now automatically probe all the possible ways your
sound card can be configured for playback and capture, and then allow
on-the-fly switching between these configurations. What does that mean?
Basically, you may now switch between “Analog Stereo”, “Digital S/PDIF
Stereo”, “Analog Surround 5.1” (… and so on) on the fly, without having
to reconfigure PA at the configuration-file level or even having to stop
your streams. This fixes a couple of issues PA had previously, including
proper S/PDIF support and per-device configuration of channel maps.

Unfortunately there is no UI for this yet, and hence you need to
use pactl/pacmd on the command line to switch between the
profiles. Typing list-cards in pacmd will tell you
which profiles your card supports.
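
To actually change profiles you would then use something along the lines
of set-card-profile 0 output:iec958-stereo inside pacmd; the card index
and profile name here are just examples, so take the real ones from your
own list-cards output.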

In a later PA version this functionality will be extended to also
allow input connector switching (i.e. microphone vs. line-in) and
output connector switching (i.e. internal speakers vs. line-out)
on-the-fly.

Native support for 24bit samples

PA now natively supports 24bit packed samples, as well as 24bit samples
stored in the LSBs of 32bit integers. Previously, these formats were
always converted into 32bit MSB samples.
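
As a small illustration (my own sketch, not from the release itself),
this is roughly how an application could request the new packed format
through the libpulse “simple” API, assuming the PA_SAMPLE_S24LE constant
from pulse/sample.h is available in your libpulse:

/* Minimal sketch: open a playback stream with 24bit packed samples via
 * the libpulse "simple" API.  Assumes PA_SAMPLE_S24LE exists, i.e. a
 * libpulse from this release or later.
 * Build with something like: cc demo.c -lpulse-simple -lpulse */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdio.h>

int main(void) {
    pa_sample_spec ss;
    ss.format = PA_SAMPLE_S24LE;   /* 24bit packed, little endian */
    ss.rate = 44100;
    ss.channels = 2;

    int error;
    pa_simple *s = pa_simple_new(NULL,            /* default server */
                                 "s24-demo",      /* application name */
                                 PA_STREAM_PLAYBACK,
                                 NULL,            /* default sink */
                                 "playback",      /* stream description */
                                 &ss, NULL, NULL, &error);
    if (!s) {
        fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(error));
        return 1;
    }

    /* ... feed 3-bytes-per-sample frames with pa_simple_write() ... */

    pa_simple_free(s);
    return 0;
}

Nothing else in the application needs to change; the daemon will still
convert internally when a device cannot take the format directly.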

Airport Express Support

Colin Guthrie contributed native Airport Express support. This will
make the RAOP audio output of ApEx routers appear as local sound devices
(unfortunately, sound devices with a very long latency), i.e. any
application connecting to PulseAudio can output audio to ApEx devices
in a similar way to how iTunes can do it on MacOSX.

Before you ask: it is unlikely that we will ever make PulseAudio able
to act as an ApEx-compatible device that takes connections from iTunes
(i.e. becoming an RAOP server instead of just an RAOP client). Apple has
an unfriendly attitude of dongling their devices to their applications:
normally iTunes has to cryptographically authenticate itself to the
device, and the device to iTunes. iTunes’ key has been recovered by the
infamous Jon Lech Johansen, but the device key is still unknown. Without
that key it is not realistically possible to disguise PA as an ApEx.

Other stuff

There have been some extensive changes to natively support Bluetooth
audio devices by directly accessing BlueZ. This code was originally
contributed by the GSoC student João Paulo Rechi Vita. Initially, 0.9.15
was intended to become the version where BT audio just works.
Unfortunately the kernel is not really up to that yet, and I am not sure
everything will be in place so that 0.9.15 will ship with well-working
BT support.

There have been a lot of internal changes and API additions. Most of
these, however, are not visible to the user.