Tag Archives: ios

iPhone Apps Stealing Clipboard Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/iphone_apps_ste.html

iOS apps are repeatedly reading clipboard data, which can include all sorts of sensitive information.

While Haj Bakry and Mysk published their research in March, the invasive apps made headlines again this week with the developer beta release of iOS 14. A novel feature Apple added provides a banner warning every time an app reads clipboard contents. As large numbers of people began testing the beta release, they quickly came to appreciate just how many apps engage in the practice and just how often they do it.

This YouTube video, which has racked up more than 87,000 views since it was posted on Tuesday, shows a small sample of the apps triggering the new warning.

EDITED TO ADD (7/6): LinkedIn and Reddit are doing this.

iOS XML Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/ios_xml_bug.html

This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:

What a crazy bug, and Siguza’s explanation is very cogent. Basically, it comes down to this:

  • XML is terrible.
  • iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
  • iOS’s sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.

So Siguza’s exploit – which granted an app full access to the entire file system, and more – uses malformed XML comments constructed in a way that one of iOS’s XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn’t see the fishy entitlements because it thinks they’re inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.

This is fixed in the new iOS release, 13.5 beta 3.

Comment:

Implementing 4 different parsers is just asking for trouble, and the “fix” is of the crappiest sort, bolting on more crap to check they’re doing the right thing in this single case. None of this is encouraging.

More commentary. Hacker News thread.

Building and testing iOS and iPadOS apps with AWS DevOps and mobile services

Post Syndicated from Abdullahi Olaoye original https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/

Continuous integration/continuous deployment (CI/CD) helps automate software delivery processes. With the software delivery process automated, developers can test and deliver features faster. In iOS app development, testing your apps on real devices allows you to understand how users will interact with your app and to detect potential issues in real time.

AWS has a collection of tools designed to help developers build, test, configure, and release cloud-based applications for mobile devices. This blog post shows you how to leverage some of those tools and integrate third-party build tools like Jenkins into a CI/CD Pipeline in AWS for iOS app development and testing.

A new commit to the source repository triggers the pipeline. The build is done on a Jenkins server, and the build artifact from Jenkins is passed to the test phase, which is configured with AWS Device Farm to test the application on real devices. AWS CodePipeline provides the orchestration and helps automate the build and test phases. The CodePipeline continuous delivery process is illustrated in the following screenshot.

CodePipeline architecture with all stages

Figure: CodePipeline Continuous Delivery Architecture

 

Prerequisites

Ensure you have the following prerequisites set up before beginning:

  1. Apple developer account
  2. Build server (macOS)
  3. Xcode Version 11.3 (installed and set up on the build server)
  4. Jenkins (installed on the build server)
  5. AWS CLI installed and configured on workstation
  6. Basic knowledge of Git

Source

This example uses a sample iOS Notes app, hosted in an AWS CodeCommit repository that serves as the source stage of the pipeline.

Jenkins installation

Jenkins can be installed on macOS using the Homebrew package manager with the following command:

$ brew install jenkins

Start Jenkins by typing the following command:

$ jenkins

You can also configure Jenkins to start as a service on startup with the following command:

$ brew services start jenkins

Jenkins configuration

On a browser on your local machine, visit http://localhost:8080. You should see the setup screen shown in the following screenshot:

Screenshot of how to retrieve the Jenkins secret during setup on macOS

1. Grab the initial admin password from the terminal by typing:

$ cat /Users/administrator/.jenkins/secrets/initialAdminPassword

2. Follow the onscreen instructions to complete setup. This includes creating a first admin user, installing initial plugins, etc.

3. Make some changes to the config file to ensure Jenkins is accessible from anywhere, not just the local machine:

    • Open the config file:

$ sudo nano /Users/admin/Library/LaunchAgents/homebrew.mxcl.jenkins.plist

    • Find the following line:

<string>--httpListenAddress=127.0.0.1</string>

    • Change it to the following:

<string>--httpListenAddress=0.0.0.0</string>

    • Save your changes and exit.
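For the new listen address to take effect, you would typically restart Jenkins. A minimal sketch, assuming Jenkins was started as a Homebrew service as shown above:

$ brew services restart jenkins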

To reach Jenkins from the internet, enter the following into a web browser:

<build-server-public-ip>:<Jenkins-port>

The default Jenkins port is 8080. For example, if the public IP address is 1.2.3.4, the path is 1.2.3.4:8080.

4. Install the AWS CodePipeline Jenkins plugin:

      • Sign in to Jenkins using the user name and password you created. Choose Manage Jenkins, then Manage Plugins.
      • Switch to the Available tab and start typing CodePipeline into the filter until AWS CodePipeline Plugin appears. Select the plugin, then select Install without restart.
      • Select Restart Jenkins when installation is complete and no jobs are running.

5. Create a project. Choose New Item, then Freestyle Project. Enter a descriptive name. This example uses iosapp as the item name.

6. In the Source Code Management section, select AWS CodePipeline and configure the plugin as shown in the following screenshot.

Screenshot of Source Code Management configuration in a Jenkins freestyle project

      • AWS region: The region in which you want to create the CI/CD pipeline.
      • AWS access key and AWS secret key: Create a special IAM user and apply the AWSCodePipelineCustomActionAccess managed policy to that user. Use the access credentials for that user to configure this section.
      • Category: Choose Build. This is also used in the pipeline configuration.
      • Provider: This example uses the name Jenkins. It can be renamed, but take note of the name specified here.
      • Version: Enter 1 here. This value is used in the pipeline configuration.

7. Under Build Triggers, select Poll SCM. Enter the schedule * * * * * separated by spaces, as shown in the following screenshot.

Screenshot of Build Triggers configuration in a Jenkins freestyle project

8. Under Build, select Add build step, then Execute shell. Enter the following commands, inserting your development team ID.

/usr/bin/xcodebuild -version
/usr/bin/xcodebuild build-for-testing -scheme MyNotes -destination generic/platform=iOS DEVELOPMENT_TEAM=<your development team ID> -allowProvisioningUpdates -derivedDataPath /Users/admin/.jenkins/workspace/iosapp
mkdir Payload && cp -r /Users/admin/.jenkins/workspace/iosapp/Build/Products/Debug-iphoneos/MyNotes.app Payload/
zip -r Payload.zip Payload && mv Payload.zip MyNotes.ipa

9. Under Post-build Actions, select Add post-build action, then AWS CodePipeline Publisher. Fill in the fields as shown in the following screenshot:

Screenshot of Post Build Action Configuration in a Jenkins freestyle project

10. Save the configuration.

11. Retrieve the public IP address for the macOS build server.

Configure Device Farm

In this section, you configure Device Farm to test the sample iOS app on real-world devices.

  1. Navigate to the AWS Device Farm Console
  2. Choose Create a new project and enter a name for the project. Choose Create project. Note the name of the project.
  3. Choose the newly created project and retrieve the project ID:
    • Copy the URL found in the browser into a text editor.
    • Note the project ID, which can be found in the URL path:

https://us-west-2.console.aws.amazon.com/devicefarm/home?region=us-west-2#/projects/<your project ID is here>/runs

    • Decide on which devices you want to test the sample app. This is known as the device pool in Device Farm. This example doesn’t use a PRIVATE device pool. It uses a CURATED device pool, which is a device pool created and managed by AWS Device Farm.
    • Retrieve the ARN of the CURATED device pool for your project using the AWS CLI:

$ aws devicefarm list-device-pools --arn arn:aws:devicefarm:us-west-2:<account-id>:project:<project id noted above> --region us-west-2 --query 'devicePools[?name==`Top Devices`]'

Note the device pool ARN.

Configure the CodeCommit repository

In this section, the source code repository is created and source code is pushed to the repository.

  1. Create a CodeCommit repository. Take note of the repository name.
  2. Connect to the newly created repository.
  3. Push the iOS app code from the local repository to the remote CodeCommit repository:

$ git push
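If you prefer the CLI for these three steps, here is a hedged sketch; the repository name ios-notes-app, the us-east-1 region, and the local directory name MyNotes are assumptions for illustration, not values from this walkthrough:

$ aws codecommit create-repository --repository-name ios-notes-app --region us-east-1
$ cd MyNotes
$ git init && git add . && git commit -m "Initial commit of the iOS Notes sample app"
$ git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/ios-notes-app
$ git push -u origin master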

Create and configure CodePipeline

CodePipeline orchestrates all phases of the example. Each action is represented as a stage.

Because the Jenkins stage is a custom action that must be configured through the AWS Management Console, use the console to create your pipeline.

  1. Go to the AWS CodePipeline console and choose Create pipeline.
  2. Enter iosapp under Pipeline settings and select New service role.
  3. Leave the default Role name, and select Allow AWS CodePipeline to create a service role so it can be used with this new pipeline.
  4. Choose Next.
  5. Select AWS CodeCommit as the Source provider. Select the repository you created and the branch name, then select Next.
  6. Select Add Jenkins as the build provider and fill in the fields:
    • Provider name: Specify the provider name you configured for this example.
    • Server URL: Specify the public IP address of the Jenkins server and the port on which Jenkins is listening. For example, if 1.2.3.4 is the IP address and 8080 is the port, the server URL is http://1.2.3.4:8080.
    • Project name: Specify the name you gave to the Jenkins Freestyle project you created.
  7. Choose Next.
  8. Choose Skip deploy stage. You are integrating with Device Farm and this is only valid as a test stage, not a deploy stage.
  9. Choose Create pipeline. This creates a two-stage pipeline which starts executing immediately after creation. However, you are not done yet, so stop the current execution.
  10. Now create a test stage with Device Farm. Choose Edit to modify the pipeline. Under the Build stage, select Add Stage and enter a stage name (such as Test). Choose Add stage again.
  11. In the newly added stage, choose Add action group and fill in the fields:
    • Action name: Enter an Action name
    • Action Provider: Select AWS Device Farm
    • Region: Select US West – Oregon.

      “AWS Device Farm is only supported in us-west-2 (Oregon), so this will be a cross-region action since the pipeline is in us-east-1.”

    • Input artifacts: Select BuildArtifact, which is the output of the Jenkins build stage
    • ProjectId: This is the Device Farm project ID you noted earlier
    • DevicePoolArn: This is the Device Farm ARN you noted earlier
    • AppType: Enter iOS
    • App: This is the file that contains the app to test; the filename of your generated IPA is MyNotes.ipa
    • TestType: This is the type of test to run on the application; enter BUILTIN_FUZZ
  12. Leave the other fields blank and choose Done to save the action configuration, then choose Save to save the pipeline changes.
  13. Optionally, you can enable notifications to notify you of changes in the pipeline, such as when the pipeline completes, when a stage or action completes, or when there is a failure. To enable notifications, create a notification rule.
  14. Choose Release change to execute the pipeline, as shown in the following screenshot.

Completed codepipeline sample with example test failure
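Choosing Release change is equivalent to starting an execution manually. If you prefer the CLI, a hedged equivalent (using the pipeline name from this example) is:

$ aws codepipeline start-pipeline-execution --name iosapp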

Verify the test on Device Farm

From the pipeline execution, you can see there is a failure in your test. Check the test results:

  1. Navigate to the AWS Device Farm Console.
  2. Select the project you created.
  3. All the tests that have run are listed, as seen in the following screenshot.
Failure on AWS Device Farm

  4. Choose the test to see more details.

You can see the source of the failure. To investigate why the test failed, choose each device. The devices on which the app was tested are listed, along with details such as the OS version and the total duration of the test on each device. You can see screenshots of the test by switching to the Screenshots tab, and more information by choosing a device.

Troubleshoot the failure by examining the result in each of the devices on which the test was run to determine what changes are needed in the application. After making the needed changes in the application source code, push the changes to the remote repository (in this case, a CodeCommit repository) to trigger the pipeline again. The following screenshot shows a successful pipeline execution:

Successfully executed CodePipeline

The following screenshot shows a successful test:

Successfully executed tests on Device Farm

Cleanup

Clean up the following AWS resources created in this walkthrough so that you don't continue to incur charges:

  • The CodePipeline pipeline (iosapp)
  • The CodeCommit repository
  • The Device Farm project
  • The S3 artifact bucket created by CodePipeline
  • The IAM user created for the Jenkins plugin
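A hedged sketch of deleting the main resources from the CLI; the repository name and project ARN are placeholders, and the pipeline name is the one used in this example:

$ aws codepipeline delete-pipeline --name iosapp
$ aws codecommit delete-repository --repository-name <your-repository-name>
$ aws devicefarm delete-project --arn <your-device-farm-project-arn> --region us-west-2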

Conclusion

This post showed you how to integrate CodePipeline with an iOS Jenkins build server and leverage the integration of CodePipeline and Device Farm to automatically build and test iOS apps on real-world devices. With this approach, you can see how an app behaves on actual devices and, with the automated CI/CD pipeline, quickly test apps as they are developed.

Tackling UI test execution time imbalance for Xcode parallel testing

Post Syndicated from Grab Tech original https://engineering.grab.com/tackling-ui-test-execution-time-imbalance-for-xcode-parallel-testing

Introduction

Testing is a common practice to ensure that code logic is not easily broken during development and refactoring. Having tests run as part of Continuous Integration (CI) infrastructure is essential, especially with a large codebase contributed to by many engineers. However, the more tests we add, the longer they take to execute. In the context of iOS development, the execution time of the whole test suite can grow significantly with the number of tests written, and running pre-merge CI pipelines against a change costs us more time. Therefore, reducing test execution time is a long-term epic we have to tackle in order to build a good CI infrastructure.

Apart from splitting tests into subsets and running each of them in a CI job, we can also make use of the Xcode parallel testing feature to achieve parallelism within one single CI job. However, due to platform-specific implementations, there are some constraints that prevent parallel testing from working efficiently. One constraint we found is that tests of the same Swift class run on the same simulator. In this post, we will discuss this constraint in detail and introduce a tip to overcome it.

Background

Xcode parallel testing

The parallel testing feature was shipped as part of the Xcode 10 release. This support enables us to easily configure test setup:

  • There is no need to care about how to split a given test suite.
  • The number of workers (i.e. parallel runners/instances) is configurable. We can pass this value in the xcodebuild CLI via the -parallel-testing-worker-count option.
  • Xcode takes care of cloning and starting simulators accordingly.

However, the distribution logic under the hood is a black-box. We do not really know how tests are assigned to each worker or simulator, and in which order.

Three simulators running tests in parallel

It is worth mentioning that even without the Xcode parallel testing support, we can still achieve similar improvements by running subsets of tests in different child processes. But it takes more effort to dispatch tests to each child process in an efficient way, and to handle the output from each test process appropriately.

Test time imbalance

Generally, a parallel execution system is at its best efficiency if each parallel task executes in roughly the same duration and ends at roughly the same time.

If the time spent on each parallel task differs significantly, it will take more time than expected to execute all tasks. For example, in the following image, the system on the left takes 13 mins to finish 3 tasks, whereas the one on the right takes only 10.5 mins to finish those 3 tasks.

Bad parallelism vs. good parallelism

Assume there are N workers, and the i-th worker executes its tasks in t_i minutes. In the left plot, t_1 = 10 mins, t_2 = 7 mins, t_3 = 13 mins.

We define the test time imbalance metric as the difference between the max and min end time:

max(t_i) – min(t_i)

For the example above, the test time imbalance is 13 mins – 7 mins = 6 mins.

Contributing factors in test time imbalance

There are several factors causing test time imbalance. The top two prominent factors are:

  1. Tests vary in execution time.
  2. Tests of the same class run on the same simulator.

An example of the first factor: in our project, around 50% of tests execute in a range of 20–40 secs. Some tests take under 15 secs to run, while several take up to 2 minutes. A longer execution time is sometimes inevitable, since those tests usually touch many flows that cannot be split. If such tests run last, the test time imbalance may increase.

However, this issue generally does not matter that much, because long-running tests do not always run last.

Regarding the second factor, there is no official Apple documentation that explicitly states this constraint. When Apple first introduced parallel testing support in Xcode 10, they only mentioned that test classes are distributed across runner processes:

“Test parallelization occurs by distributing the test classes in a target across multiple runner processes. Use the test log to see how your test classes were parallelized. You will see an entry in the log for each runner process that was launched, and below each runner you will see the list of classes that it executed.”

For example, we have a test class JobFlowTests that includes five tests and another test class TutorialTests that has only one single test.

final class JobFlowTests: BaseXCTestCase {
  func testHappyFlow() { ... }
  func testRecoverFlow() { ... }
  func testJobIgnoreByDax() { ... }
  func testJobIgnoreByTimer() { ... }
  func testForceClearBooking() { ... }
}
...
final class TutorialTests: BaseXCTestCase {
  func testOnboardingFlow() { ... }
}

When executing these two test classes with two simulators running in parallel, the actual run is like the one shown on the left side of the following image, but ideally it should work like the one on the right side.

Tests of the same class currently run on the same simulator, but they should be able to run on different simulators.

Diving deep into Xcode parallel testing

Demystifying Xcode scheduling log

As mentioned above, Xcode distributes tests to simulators/workers in a black-box manner. However, by looking at the scheduling log generated when running tests, we can understand how Xcode parallel testing works.

When running UI tests via the xcodebuild command:

$ xcodebuild -workspace Driver/Driver.xcworkspace \
    -scheme Driver \
    -configuration Debug \
    -sdk 'iphonesimulator' \
    -destination 'platform=iOS Simulator,id=EEE06943-7D7B-4E76-A3E0-B9A5C1470DBE' \
    -derivedDataPath './DerivedData' \
    -parallel-testing-enabled YES \
    -parallel-testing-worker-count 2 \
    -only-testing:DriverUITests/JobFlowTests \    # 👈👈👈👈👈
    -only-testing:DriverUITests/TutorialTests \
    test-without-building

The log can be found inside the *.xcresult folder under DerivedData/Logs/Test. For example: DerivedData/Logs/Test/Test-Driver-2019.11.04_23-31-34-+0800.xcresult/1_Test/Diagnostics/DriverUITests-144D9549-FD53-437B-BE97-8A288855E259/scheduling.log
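A quick way to locate the file, assuming the derivedDataPath used in the xcodebuild command above:

$ find ./DerivedData/Logs/Test -name "scheduling.log"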

Scheduling log under xcresult folder
2019-11-05 03:55:00 +0000: Received worker from worker provider: 0x7fe6a684c4e0 [0: Clone 1 of DaxIOS-XC10-1-iP7-1 (3D082B53-3159-4004-A798-EA5553C873C4)]
2019-11-05 03:55:13 +0000: Worker 0x7fe6a684c4e0 [4985: Clone 1 of DaxIOS-XC10-1-iP7-1 (3D082B53-3159-4004-A798-EA5553C873C4)] finished bootstrapping
2019-11-05 03:55:13 +0000: Parallelization enabled; test execution driven by the IDE
2019-11-05 03:55:13 +0000: Skipping test class discovery
2019-11-05 03:55:13 +0000: Executing tests {(	# 👈👈👈👈👈
    DriverUITests/JobFlowTests,
    DriverUITests/TutorialTests
)}; skipping tests {(
)}
2019-11-05 03:55:13 +0000: Load balancer requested an additional worker
2019-11-05 03:55:13 +0000: Dispatching tests {(  # 👈👈👈👈👈
    DriverUITests/JobFlowTests
)} to worker: 0x7fe6a684c4e0 [4985: Clone 1 of DaxIOS-XC10-1-iP7-1 (3D082B53-3159-4004-A798-EA5553C873C4)]
2019-11-05 03:55:13 +0000: Received worker from worker provider: 0x7fe6a1582e40 [0: Clone 2 of DaxIOS-XC10-1-iP7-1 (F640C2F1-59A7-4448-B700-7381949B5D00)]
2019-11-05 03:55:39 +0000: Dispatching tests {(  # 👈👈👈👈👈
    DriverUITests/TutorialTests
)} to worker: 0x7fe6a684c4e0 [4985: Clone 1 of DaxIOS-XC10-1-iP7-1 (3D082B53-3159-4004-A798-EA5553C873C4)]
...

Looking at the log below, we know that once a test class is dispatched or distributed to a worker/simulator, all tests of that class will be executed in that simulator.

2019-11-05 03:55:39 +0000: Dispatching tests {(
    DriverUITests/TutorialTests
)} to worker: 0x7fe6a684c4e0 [4985: Clone 1 of DaxIOS-XC10-1-iP7-1 (3D082B53-3159-4004-A798-EA5553C873C4)]

Even if we customize a test suite (by swizzling some XCTestSuite class methods or variables) to split it into multiple suites, it does not work, because the made-up test suite is only initialized after tests are dispatched to a given worker.

Therefore, any hook to bypass this constraint must be done early on.

Passing the -only-testing argument to xcodebuild command

Now we pass tests (instead of test classes) to the -only-testing argument.

$ xcodebuild -workspace Driver/Driver.xcworkspace \
    # ...
    -only-testing:DriverUITests/JobFlowTests/testJobIgnoreByTimer \
    -only-testing:DriverUITests/JobFlowTests/testRecoverFlow \
    -only-testing:DriverUITests/JobFlowTests/testJobIgnoreByDax \
    -only-testing:DriverUITests/JobFlowTests/testHappyFlow \
    -only-testing:DriverUITests/JobFlowTests/testForceClearBooking \
    -only-testing:DriverUITests/TutorialTests/testOnboardingFlow \
    test-without-building

But still, the scheduling log shows that tests are grouped by test class before being dispatched to workers (see the following log for reference). Xcode does this grouping automatically (although, arguably, it should not).

2019-11-05 04:21:42 +0000: Executing tests {(	# 👈
    DriverUITests/JobFlowTests/testJobIgnoreByTimer,
    DriverUITests/JobFlowTests/testRecoverFlow,
    DriverUITests/JobFlowTests/testJobIgnoreByDax,
    DriverUITests/TutorialTests/testOnboardingFlow,
    DriverUITests/JobFlowTests/testHappyFlow,
    DriverUITests/JobFlowTests/testForceClearBooking
)}; skipping tests {(
)}
2019-11-05 04:21:42 +0000: Load balancer requested an additional worker
2019-11-05 04:21:42 +0000: Dispatching tests {(  # 👈 ❌
    DriverUITests/JobFlowTests/testJobIgnoreByTimer,
    DriverUITests/JobFlowTests/testForceClearBooking,
    DriverUITests/JobFlowTests/testJobIgnoreByDax,
    DriverUITests/JobFlowTests/testHappyFlow,
    DriverUITests/JobFlowTests/testRecoverFlow
)} to worker: 0x7fd781261940 [6300: Clone 1 of DaxIOS-XC10-1-iP7-1 (93F0FCB6-C83F-4419-9A75-C11765F4B1CA)]
......

Overcoming grouping logic in Xcode parallel testing

Tweaking the -only-testing argument values

Based on our observation, we can imagine how Xcode runs tests in parallel. See below.

Step 1.   tests = detect_tests_to_run()  # parse -only-testing arguments
Step 2.   groups_of_tests = group_tests_by_test_class(tests)
Step 3.   while groups_of_tests is not empty:
Step 3.1.     worker = find_free_worker()
Step 3.2.     if worker is not None:
                  dispatch_tests_to_worker(worker, groups_of_tests.pop())

In the pseudocode above, we do not have much control over step 2, since that grouping logic is implemented by Xcode. But a good guess is that Xcode groups tests by the first two components (the class name) only, for example DriverUITests/JobFlowTests. In other words, tests having the same class name run together on one simulator.

The trick to break this constraint is simple: we tweak the input (the test names) so that each group contains only one test. By inserting a random token into each class name, every class name passed via the -only-testing argument becomes unique.

For example, instead of passing:

-only-testing:DriverUITests/JobFlowTests/testJobIgnoreByTimer \
-only-testing:DriverUITests/JobFlowTests/testRecoverFlow \

We rather use:

-only-testing:DriverUITests/JobFlowTests_AxY132z8/testJobIgnoreByTimer \
-only-testing:DriverUITests/JobFlowTests_By8MTk7l/testRecoverFlow \

Or we can use the test name itself as the token:

-only-testing:DriverUITests/JobFlowTests_testJobIgnoreByTimer/testJobIgnoreByTimer \
-only-testing:DriverUITests/JobFlowTests_testRecoverFlow/testRecoverFlow \
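A minimal shell sketch of generating such arguments, using each test's own name as the token; the test list below is illustrative, not the full suite:

TESTS="JobFlowTests/testJobIgnoreByTimer JobFlowTests/testRecoverFlow TutorialTests/testOnboardingFlow"
FLAGS=()
for t in $TESTS; do
  class="${t%%/*}"   # e.g. JobFlowTests
  name="${t##*/}"    # e.g. testJobIgnoreByTimer
  FLAGS+=("-only-testing:DriverUITests/${class}_${name}/${name}")
done
printf '%s\n' "${FLAGS[@]}"   # append these flags to the xcodebuild invocation shown earlier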

After that, the scheduling log shows that the trick bypasses the grouping logic: only one test is dispatched to a worker as soon as that worker is ready.

2019-11-05 06:06:56 +0000: Dispatching tests {(	# 👈 ✅
    DriverUITests/JobFlowTests_testJobIgnoreByDax/testJobIgnoreByDax
)} to worker: 0x7fef7952d0e0 [13857: Clone 2 of DaxIOS-XC10-1-iP7-1 (9BA030CD-C90F-4B7A-B9A7-D12F368A5A64)]
2019-11-05 06:06:58 +0000: Dispatching tests {(	# 👈 ✅
    DriverUITests/TutorialTests_testOnboardingFlow/testOnboardingFlow
)} to worker: 0x7fef7e85fd70 [13719: Clone 1 of DaxIOS-XC10-1-iP7-1 (584F99FE-49C2-4536-B6AC-90B8A10F361B)]
2019-11-05 06:07:07 +0000: Dispatching tests {(	# 👈 ✅
    DriverUITests/JobFlowTests_testRecoverFlow/testRecoverFlow
)} to worker: 0x7fef7952d0e0 [13857: Clone 2 of DaxIOS-XC10-1-iP7-1 (9BA030CD-C90F-4B7A-B9A7-D12F368A5A64)]

Handling tweaked test names

When a worker/simulator receives a request to run a test, the app (could be the runner app or the hosting app) initializes an XCTestSuite corresponding to the test name. In order for the test suite to be properly made up, we need to remove the inserted token.

This could be done easily by swizzling the XCTestSuite.init(forTestCaseWithName:). Inside that swizzled function, we remove the token and then call the original init function.

extension XCTestSuite {
  /// For 'Selected tests' suite
  @objc dynamic class func swizzled_init(forTestCaseWithName maskedName: String) -> XCTestSuite {
    /// Recover the original test name
    /// - masked: UITestCaseA_testA1/testA1      	--> recovered: UITestCaseA/testA1
    /// - masked: Driver/UITestCaseA_testA1/testA1   --> recovered: Driver/UITestCaseA/testA1
    guard let testBaseName = maskedName.split(separator: "/").last else {
      return swizzled_init(forTestCaseWithName: maskedName)
    }
    let recoveredName = maskedName.replacingOccurrences(of: "_\(testBaseName)/", with: "/") // 👈 remove the token
    return swizzled_init(forTestCaseWithName: recoveredName) // 👈 call the original init
  }
}
Swizzle function to run tests properly

Test class discovery

In order to adopt this tip, we need to know which test classes we need to run in advance. Although Apple does not provide an API to obtain the list before running tests, this can be done in several ways. One approach we can use is to generate test classes using Sourcery. Another alternative is to parse the binaries inside .xctest bundles (in build products) to look for symbols related to tests.
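As an illustration of the second approach, here is a hedged sketch that dumps test-like symbols from the UI test bundle binary; the bundle path and the grep pattern are assumptions and depend on your project and Swift version:

$ BINARY=./DerivedData/Build/Products/Debug-iphoneos/DriverUITests-Runner.app/PlugIns/DriverUITests.xctest/DriverUITests
$ nm "$BINARY" | xcrun swift-demangle | grep -oE '[A-Za-z0-9_]+Tests\.test[A-Za-z0-9_]+' | sort -u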

Conclusion

In this article, we identified some factors causing test execution time imbalance in Xcode parallel testing (particularly for UI tests).

We also looked into how Xcode distributes tests in parallel testing, and introduced a trick to mitigate the constraint that tests within the same class run on the same simulator. The trick not only reduces the imbalance but also gives us more confidence in adding more tests to a class, without worrying about whether doing so affects our CI infrastructure.

Below is the test time imbalance metric recorded when running UI tests. After adopting the trick, we saw a decrease in the metric (a good sign). As of now, the metric stabilizes at around 0.4 minutes.

Tracking data of UI test time imbalance (in minutes) in our project, collected by multiple runs

Join us

Grab is more than just the leading ride-hailing and mobile payments platform in Southeast Asia. We use data and technology to improve everything from transportation to payments and financial services across a region of more than 620 million people. We aspire to unlock the true potential of Southeast Asia and look for like-minded individuals to join us on this ride.

If you share our vision of driving South East Asia forward, apply to join our team today.

New Unpatchable iPhone Exploit Allows Jailbreaking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/new_unpatchable.html

A new iOS exploit allows jailbreaking of pretty much all versions of the iPhone. This is a huge deal for Apple, but at least it doesn’t allow someone to remotely hack people’s phones.

Some details:

I wanted to learn how Checkm8 will shape the iPhone experience — particularly as it relates to security — so I spoke at length with axi0mX on Friday. Thomas Reed, director of Mac offerings at security firm Malwarebytes, joined me. The takeaways from the long-ranging interview are:

  • Checkm8 requires physical access to the phone. It can’t be remotely executed, even if combined with other exploits.
  • The exploit allows only tethered jailbreaks, meaning it lacks persistence. The exploit must be run each time an iDevice boots.

  • Checkm8 doesn’t bypass the protections offered by the Secure Enclave and Touch ID.

  • All of the above means people will be able to use Checkm8 to install malware only under very limited circumstances. The above also means that Checkm8 is unlikely to make it easier for people who find, steal or confiscate a vulnerable iPhone, but don’t have the unlock PIN, to access the data stored on it.

  • Checkm8 is going to benefit researchers, hobbyists, and hackers by providing a way not seen in almost a decade to access the lowest levels of iDevices.

Also:

“The main people who are likely to benefit from this are security researchers, who are using their own phone in controlled conditions. This process allows them to gain more control over the phone and so improves visibility into research on iOS or other apps on the phone,” Wood says. “For normal users, this is unlikely to have any effect, there are too many extra hurdles currently in place that they would have to get over to do anything significant.”

If a regular person with no prior knowledge of jailbreaking wanted to use this exploit to jailbreak their iPhone, they would find it extremely difficult, simply because Checkm8 just gives you access to the exploit, but not a jailbreak in itself. It’s also a ‘tethered exploit’, meaning that the jailbreak can only be triggered when connected to a computer via USB and will become untethered once the device restarts.

Cellebrite Claims It Can Unlock Any iPhone

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/cellebrite_clai.html

The digital forensics company Cellebrite now claims it can unlock any iPhone.

I dithered before blogging this, not wanting to give the company more publicity. But I decided that everyone who wants to know already knows, and that Apple already knows. It’s all of us that need to know.

iOS 12.1 Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/ios_121_vulnera.html

This is really just to point out that computer security is really hard:

Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits group FaceTime calls to give anyone access to an iPhone user’s contact information with no need for a passcode.

[…]

A bad actor would need physical access to the phone that they are targeting and has a few options for viewing the victim’s contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects they would need to:

  • Select the Facetime icon
  • Select “Add Person”
  • Select the plus icon
  • Scroll through the contacts and use 3D touch on a name to view all contact information that’s stored.

Making the phone call itself without entering a passcode can be accomplished by either telling Siri the phone number or, if they don’t know the number, saying “call my phone.” We tested this with both the owner’s voice and a stranger’s voice; in both cases, Siri initiated the call.

Bypassing Passcodes in iOS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/bypassing_passc.html

Last week, a story was going around explaining how to brute-force an iOS password. Basically, the trick was to plug the phone into an external keyboard and try every PIN at once:

We reported Friday on Hickey’s findings, which claimed to be able to send all combinations of a user’s possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn’t give the software any breaks, the keyboard input routine takes priority over the device’s data-erasing feature.

I didn’t write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings — and it seems that it doesn’t work.

This isn’t to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement — which means governments and criminals can do the same thing — and that Apple is releasing a new feature called “restricted mode” that may make those hacks obsolete.

Grayshift is claiming that its technology will still work.

Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn’t break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.

“Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled,” Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. “You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3.”

Whether that’s real or marketing, we don’t know.

Perverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/perverse_vulner.html

Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.

Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:

Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.

Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here — which is among the most common methods currently used — the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.

This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user’s phone accesses the bank’s legitimate online banking service.

This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.

New iPhone OS May Include Device-Unlocking Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/new_iphone_os_m.html

iOS 12, the next release of Apple’s iPhone operating system, may include features to prevent someone from unlocking your phone without your permission:

The feature essentially forces users to unlock the iPhone with the passcode when connecting it to a USB accessory every time the phone has not been unlocked for one hour. That includes the iPhone unlocking devices that companies such as Cellebrite or GrayShift make, which police departments all over the world use to hack into seized iPhones.

“That pretty much kills [GrayShift’s product] GrayKey and Cellebrite,” Ryan Duff, a security researcher who has studied iPhone and is Director of Cyber Solutions at Point3 Security, told Motherboard in an online chat. “If it actually does what it says and doesn’t let ANY type of data connection happen until it’s unlocked, then yes. You can’t exploit the device if you can’t communicate with it.”

This is part of a bunch of security enhancements in iOS 12:

Other enhancements include tools for generating strong passwords, storing them in the iCloud keychain, and automatically entering them into Safari and iOS apps across all of a user’s devices. Previously, standalone apps such as 1Password have done much the same thing. Now, Apple is integrating the functions directly into macOS and iOS. Apple also debuted new programming interfaces that allow users to more easily access passwords stored in third-party password managers directly from the QuickType bar. The company also announced a new feature that will flag reused passwords, an interface that autofills one-time passwords provided by authentication apps, and a mechanism for sharing passwords among nearby iOS devices, Macs, and Apple TVs.

A separate privacy enhancement is designed to prevent websites from tracking people when using Safari. It’s specifically designed to prevent share buttons and comment code on webpages from tracking people’s movements across the Web without permission or from collecting a device’s unique settings such as fonts, in an attempt to fingerprint the device.

The last additions of note are new permission dialogues macOS Mojave will display before allowing apps to access a user’s camera or microphone. The permissions are designed to thwart malicious software that surreptitiously turns on these devices in an attempt to spy on users. The new protections will largely mimic those previously available only through standalone apps such as one called Oversight, developed by security researcher Patrick Wardle. Apple said similar dialog permissions will protect the file system, mail database, message history, and backups.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn’t seem to be an agreed-upon solution.

I’ve previously argued with the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store that script somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), shared storage, e.g. an FTP server or an S3 bucket with limited access, or a separate git repository. I think I prefer the git repository, as it allows versioning (note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files, with all their credentials and environment-specific configurations, in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or Bitbucket for their repositories, storing production credentials with a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do it is via git-crypt. The encryption is “transparent”: files are encrypted and decrypted on the fly, and diff is supported. Once you set it up, you continue working with the repo as if it’s not encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS path), which generates a key. Then you specify your .gitattributes, e.g. like this:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and back up the keys. For the sharing part, it’s not a big issue to have a team of 2–3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to back up your secret key (generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.
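For illustration, a hedged sketch of the export and unlock flow; the backup path and repository URL are placeholders:

$ git-crypt export-key /secure/backup/git-crypt.key
# ... later, on another Ops machine with access to the backed-up key:
$ git clone git@example.com:ops/environment-configs.git
$ cd environment-configs
$ git-crypt unlock /secure/backup/git-crypt.key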

The git-crypt authors say it shines when it comes to encrypting just a few files in an otherwise public repo, and recommend looking at git-remote-gcrypt otherwise. But as there are often non-sensitive parts of environment-specific configurations, you may not want to encrypt everything. And I think it’s perfectly fine to use git-crypt even in a separate-repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to have the environment configurations in the same repo, especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is how to sync the properties with code changes. Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that could vary across environments but can have default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo, and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

MagPi 70: Home automation with Raspberry Pi

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-70-home-automation/

Hey folks, Rob here! It’s the last Thursday of the month, and that means it’s time for a brand-new The MagPi. Issue 70 is all about home automation using your favourite microcomputer, the Raspberry Pi.

Cover of The MagPi 70 — Raspberry Pi home automation and tech upcycling

Home automation in this month’s The MagPi!

Raspberry Pi home automation

We think home automation is an excellent use of the Raspberry Pi, hiding it around your house and letting it power your lights and doorbells and…fish tanks? We show you how to do all of that, and give you some excellent tips on how to add even more automation to your home in our ten-page cover feature.

Upcycle your life

Our other big feature this issue covers upcycling, the hot trend of taking old electronics and making them better than new with some custom code and a tactically placed Raspberry Pi. For this feature, we had a chat with Martin Mander, upcycler extraordinaire, to find out his top tips for hacking your old hardware.

Article on upcycling in The MagPi 70 — Raspberry Pi home automation and tech upcycling

Upcycling is a lot of fun

But wait, there’s more!

If for some reason you want even more content, you’re in luck! We have some fun tutorials for you to try, like creating a theremin and turning a Babbage into an IoT nanny cam. We also continue our quest to make a video game in C++. Our project showcase is headlined by the Teslonda on page 28, a Honda/Tesla car hybrid that is just wonderful.

Diddyborg V2 review in The MagPi 70 — Raspberry Pi home automation and tech upcycling

We review PiBorg’s latest robot

All this comes with our definitive reviews and the community section where we celebrate you, our amazing community! You’re all good beans.

Teslonda article in The MagPi 70 — Raspberry Pi home automation and tech upcycling

An amazing, and practical, Raspberry Pi project

Get The MagPi 70

Issue 70 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days for a print copy. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

New subscription offer!

Want to support the Raspberry Pi Foundation and the magazine? We’ve launched a new way to subscribe to the print version of The MagPi: you can now take out a monthly £4 subscription to the magazine, effectively creating a rolling pre-order system that saves you money on each issue.

The MagPi subscription offer — Raspberry Pi home automation and tech upcycling

You can also take out a twelve-month print subscription and get a Pi Zero W plus case and adapter cables absolutely free! This offer does not currently have an end date.

That’s it for today! See you next month.

Animated GIF: a door slides open and Captain Picard emerges hesitantly

The post MagPi 70: Home automation with Raspberry Pi appeared first on Raspberry Pi.

Measuring the throughput for Amazon MQ using the JMS Benchmark

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/measuring-the-throughput-for-amazon-mq-using-the-jms-benchmark/

This post is courtesy of Alan Protasio, Software Development Engineer, Amazon Web Services

Just like compute and storage, messaging is a fundamental building block of enterprise applications. Message brokers (aka “message-oriented middleware”) enable different software systems, often written in different languages, on different platforms, running in different locations, to communicate and exchange information. Mission-critical applications, such as CRM and ERP, rely on message brokers to work.

A common performance consideration for customers deploying a message broker in a production environment is the throughput of the system, measured as messages per second. This is important to know so that application environments (hosts, threads, memory, etc.) can be configured correctly.

In this post, we demonstrate how to measure the throughput for Amazon MQ, a new managed message broker service for ActiveMQ, using JMS Benchmark. It should take between 15–20 minutes to set up the environment and an hour to run the benchmark. We also provide some tips on how to configure Amazon MQ for optimal throughput.

Benchmarking throughput for Amazon MQ

ActiveMQ can be used for a number of use cases. These range from simple fire-and-forget tasks (that is, asynchronous processing) and low-latency request-reply patterns to buffering requests before they are persisted to a database.

The throughput of Amazon MQ is largely dependent on the use case. For example, if you have non-critical workloads such as gathering click events for a non-business-critical portal, you can use ActiveMQ in a non-persistent mode and get extremely high throughput with Amazon MQ.

On the flip side, if you have a critical workload where durability is extremely important (meaning that you can’t lose a message), then you are bound by the I/O capacity of your underlying persistence store. We recommend using mq.m4.large for the best results. The mq.t2.micro instance type is intended for product evaluation. Performance is limited, due to the lower memory and burstable CPU performance.

Tip: To improve your throughput with Amazon MQ, make sure that you have consumers processing messaging as fast as (or faster than) your producers are pushing messages.

Because it’s impossible to talk about how the broker (ActiveMQ) behaves for each and every use case, we walk through how to set up your own benchmark for Amazon MQ using our favorite open-source benchmarking tool: JMS Benchmark. We are fans of the JMS Benchmark suite because it’s easy to set up and deploy, and comes with a built-in visualizer of the results.

Non-Persistent Scenarios – Queue latency as you scale producer throughput

JMS Benchmark nonpersistent scenarios

Getting started

At the time of publication, you can create an mq.m4.large single-instance broker for testing for $0.30 per hour (US pricing).

This walkthrough covers the following tasks:

  1. Create and configure the broker.
  2. Create an EC2 instance to run your benchmark.
  3. Configure the security groups.
  4. Run the benchmark.

Step 1 – Create and configure the broker
Create and configure the broker using Tutorial: Creating and Configuring an Amazon MQ Broker.

Step 2 – Create an EC2 instance to run your benchmark
Launch the EC2 instance using Step 1: Launch an Instance. We recommend choosing the m5.large instance type.

Step 3 – Configure the security groups
Make sure that all the security groups are correctly configured to let the traffic flow between the EC2 instance and your broker.

  1. Sign in to the Amazon MQ console.
  2. From the broker list, choose the name of your broker (for example, MyBroker).
  3. In the Details section, under Security and network, choose the name of your security group or choose the expand icon.
  4. From the security group list, choose your security group.
  5. At the bottom of the page, choose Inbound, Edit.
  6. In the Edit inbound rules dialog box, add a rule to allow traffic between your instance and the broker:
    • Choose Add Rule.
    • For Type, choose Custom TCP.
    • For Port Range, type the ActiveMQ SSL port (61617).
    • For Source, leave Custom selected and then type the security group of your EC2 instance.
    • Choose Save.

Your broker can now accept the connection from your EC2 instance.
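If you prefer the CLI, a hedged equivalent of this inbound rule (the security group IDs are placeholders):

$ aws ec2 authorize-security-group-ingress \
    --group-id <broker-security-group-id> \
    --protocol tcp \
    --port 61617 \
    --source-group <ec2-instance-security-group-id>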

Step 4 – Run the benchmark
Connect to your EC2 instance using SSH and run the following commands:

$ cd ~
$ curl -L https://github.com/alanprot/jms-benchmark/archive/master.zip -o master.zip
$ unzip master.zip
$ cd jms-benchmark-master
$ chmod a+x bin/*
$ env \
  SERVER_SETUP=false \
  SERVER_ADDRESS={activemq-endpoint} \
  ACTIVEMQ_TRANSPORT=ssl \
  ACTIVEMQ_PORT=61617 \
  ACTIVEMQ_USERNAME={activemq-user} \
  ACTIVEMQ_PASSWORD={activemq-password} \
  ./bin/benchmark-activemq

After the benchmark finishes, you can find the results in the ~/reports directory. As you may notice, the performance of ActiveMQ varies based on the number of consumers, producers, destinations, and message size.

Amazon MQ architecture

The last bit that’s important to know so that you can better understand the results of the benchmark is how Amazon MQ is architected.

Amazon MQ is architected to be highly available (HA) and durable. For HA, we recommend using the multi-AZ option. After a message is sent to Amazon MQ in persistent mode, the message is written to the highly durable message store that replicates the data across multiple nodes in multiple Availability Zones. Because of this replication, for some use cases you may see a reduction in throughput as you migrate to Amazon MQ. Customers have told us they appreciate the benefits of message replication as it helps protect durability even in the face of the loss of an Availability Zone.

Conclusion

We hope this gives you an idea of how Amazon MQ performs. We encourage you to run tests to simulate your own use cases.

To learn more, see the Amazon MQ website. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.

Protecting your API using Amazon API Gateway and AWS WAF — Part I

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

This post courtesy of Thiago Morais, AWS Solutions Architect

When you build web applications or expose any data externally, you probably look for a platform where you can build highly scalable, secure, and robust REST APIs. As APIs are publicly exposed, there are a number of best practices for providing a secure mechanism to consumers using your API.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team published a whitepaper describing a solution that uses AWS WAF: How to Mitigate OWASP’s Top 10 Web Application Vulnerabilities.

Regional API endpoints

Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution created and managed by API Gateway. Before the launch of regional API endpoints, this was the default option when creating APIs using API Gateway. It primarily helped to reduce latency for API consumers that were located in different geographical locations than your API.

When API requests predominantly originate from an Amazon EC2 instance or other services within the same AWS Region in which the API is deployed, a regional API endpoint typically lowers connection latency and is recommended for such scenarios.

For better control around caching strategies, customers can use their own CloudFront distribution for regional APIs. They also have the ability to use AWS WAF protection, as I describe in this post.

Edge-optimized API endpoint

The following diagram is an illustrated example of the edge-optimized API endpoint where your API clients access your API through a CloudFront distribution created and managed by API Gateway.

Regional API endpoint

For the regional API endpoint, your customers access your API from the same Region in which your REST API is deployed. This helps you to reduce request latency and particularly allows you to add your own content delivery network, as needed.

Walkthrough

In this section, you implement the following steps:

  • Create a regional API using the PetStore sample API.
  • Create a CloudFront distribution for the API.
  • Test the CloudFront distribution.
  • Set up AWS WAF and create a web ACL.
  • Attach the web ACL to the CloudFront distribution.
  • Test AWS WAF protection.

Create the regional API

For this walkthrough, use an existing PetStore API. All new APIs launch by default as the regional endpoint type. To change the endpoint type for your existing API, choose the cog icon in the top-right corner of the console:

After you have created the PetStore API on your account, deploy a stage called “prod” for the PetStore API.

On the API Gateway console, select the PetStore API and choose Actions, Deploy API.

For Stage name, type prod and add a stage description.

Choose Deploy and the new API stage is created.

Use the following AWS CLI command to update your API from edge-optimized to regional:

aws apigateway update-rest-api \
--rest-api-id {rest-api-id} \
--patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL

A successful response looks like the following:

{
    "description": "Your first API with Amazon API Gateway. This is a sample API that integrates via HTTP with your demo Pet Store endpoints", 
    "createdDate": 1511525626, 
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }, 
    "id": "{api-id}", 
    "name": "PetStore"
}

After you change your API endpoint to regional, you can now assign your own CloudFront distribution to this API.
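
You can verify the change with a quick describe call; this is just a verification sketch that reuses the {rest-api-id} placeholder from the previous command.

aws apigateway get-rest-api \
--rest-api-id {rest-api-id} \
--query 'endpointConfiguration.types'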

Create a CloudFront distribution

To make things easier, I have provided an AWS CloudFormation template to deploy a CloudFront distribution pointing to the API that you just created. Click the button to deploy the template in the us-east-1 Region.

For Stack name, enter RegionalAPI. For APIGWEndpoint, enter your API FQDN in the following format:

{api-id}.execute-api.us-east-1.amazonaws.com
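
If you don’t have the API ID handy, you can look it up with the CLI. The query below assumes your API is named PetStore.

aws apigateway get-rest-apis \
--query 'items[?name==`PetStore`].id' \
--output text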

After you fill out the parameters, choose Next to continue the stack deployment. It takes a couple of minutes to finish the deployment. After it finishes, the Outputs tab lists the following items:

  • A CloudFront domain URL
  • An S3 bucket for CloudFront access logs

Output from CloudFormation

Test the CloudFront distribution

To see if the CloudFront distribution was configured correctly, use a web browser and enter the URL from your distribution, with the following parameters:

https://{your-distribution-url}.cloudfront.net/{api-stage}/pets

You should get the following output:

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Set up AWS WAF and create a web ACL

With the new CloudFront distribution in place, you can now start setting up AWS WAF to protect your API.

For this demo, you deploy the AWS WAF Security Automations solution, which provides fine-grained control over the requests attempting to access your API.

For more information about deployment, see Automated Deployment. If you prefer, you can launch the solution directly into your account using the following button.

For CloudFront Access Log Bucket Name, add the name of the bucket created during the deployment of the CloudFormation stack for your CloudFront distribution.

The solution allows you to adjust thresholds and also choose which automations to enable to protect your API. After you finish configuring these settings, choose Next.

To start the deployment process in your account, follow the creation wizard and choose Create. It takes a few minutes to finish the deployment. You can follow the creation process through the CloudFormation console.

After the deployment finishes, you can see the new web ACL deployed on the AWS WAF console, AWSWAFSecurityAutomations.
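
You can also confirm the web ACL from the CLI. Because the distribution is a CloudFront (global) resource, the web ACL lives in classic AWS WAF, so use the waf command rather than waf-regional; this is only a verification sketch.

aws waf list-web-acls --region us-east-1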

Attach the AWS WAF web ACL to the CloudFront distribution

With the solution deployed, you can now attach the AWS WAF web ACL to the CloudFront distribution that you created earlier.

To assign the newly created AWS WAF web ACL, go back to your CloudFront distribution. After you open your distribution for editing, choose General, Edit.

Select the new AWS WAF web ACL that you created earlier, AWSWAFSecurityAutomations.

Save the changes to your CloudFront distribution and wait for the deployment to finish.
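
If you prefer to script this association, it can also be done by updating the distribution configuration. The sketch below assumes jq is installed; the distribution ID and web ACL ID are placeholders.

# Fetch the current distribution configuration and its ETag.
aws cloudfront get-distribution-config --id {distribution-id} > dist.json
ETAG=$(jq -r '.ETag' dist.json)

# Set the web ACL ID on the configuration and push the update.
jq '.DistributionConfig | .WebACLId = "{web-acl-id}"' dist.json > dist-updated.json
aws cloudfront update-distribution \
--id {distribution-id} \
--if-match "$ETAG" \
--distribution-config file://dist-updated.json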

Test AWS WAF protection

To validate the AWS WAF Web ACL setup, use Artillery to load test your API and see AWS WAF in action.

To install Artillery on your machine, run the following command:

$ npm install -g artillery

After the installation completes, you can check if Artillery installed successfully by running the following command:

$ artillery -V
1.6.0-12

At the time of publication, Artillery is on version 1.6.0-12.

One of the AWS WAF web ACL rules that you have set up is a rate-based rule. By default, it is set up to block any requester that exceeds 2,000 requests in a 5-minute period. Try this out.
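
You can inspect the rate-based rule that the solution created using the classic WAF CLI; {rule-id} below is a placeholder taken from the output of the list command.

# List rate-based rules to find the one created by the solution.
aws waf list-rate-based-rules

# Show its rate limit and match conditions.
aws waf get-rate-based-rule --rule-id {rule-id}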

First, use cURL to query your distribution and see the API output:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Based on the test above, the result looks good. But what if you max out the 2000 requests in under 5 minutes?

Run the following Artillery command:

artillery quick -n 2000 --count 10  https://{distribution-name}.cloudfront.net/prod/pets

This fires 2,000 requests at your API from 10 concurrent virtual users, exceeding the rate-based rule’s threshold. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try to run the cURL request again and see what happens:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: [removed]
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>

As you can see from the output above, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after its request rate falls below the limit.

Conclusion

In this first part, you saw how to use the new API Gateway regional API endpoint together with Amazon CloudFront and AWS WAF to secure your API from a series of attacks.

In the second part, I will demonstrate some other techniques to protect your API using API keys and Amazon CloudFront custom headers.

[$] Easier container security with entitlements

Post Syndicated from corbet original https://lwn.net/Articles/755238/rss

During KubeCon + CloudNativeCon Europe 2018, Justin Cormack and Nassim Eddequiouaq presented a proposal to simplify the setting of security parameters for containerized applications. Containers depend on a large set of intricate security primitives that can have weird interactions. Because they are so hard to use, people often just turn the whole thing off. The goal of the proposal is to make those controls easier to understand and use; it is partly inspired by mobile apps on iOS and Android platforms, an idea that trickled back into Microsoft and Apple desktops. The time seems ripe to improve the field of container security, which is in desperate need of simpler controls.

Amazon Sumerian – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-sumerian-now-generally-available/

We announced Amazon Sumerian at AWS re:Invent 2017. As you can see from Tara’s blog post (Presenting Amazon Sumerian: An Easy Way to Create VR, AR, and 3D Experiences), Sumerian does not require any specialized programming or 3D graphics expertise. You can build VR, AR, and 3D experiences for a wide variety of popular hardware platforms including mobile devices, head-mounted displays, digital signs, and web browsers.

I’m happy to announce that Sumerian is now generally available. You can create realistic virtual environments and scenes without having to acquire or master specialized tools for 3D modeling, animation, lighting, audio editing, or programming. Once built, you can deploy your finished creation across multiple platforms without having to write custom code or deal with specialized deployment systems and processes.

Sumerian gives you a web-based editor that you can use to quickly and easily create realistic, professional-quality scenes. There’s a visual scripting tool that lets you build logic to control how objects and characters (Sumerian Hosts) respond to user actions. Sumerian also lets you create rich, natural interactions powered by AWS services such as Amazon Lex, Polly, AWS Lambda, AWS IoT, and Amazon DynamoDB.

Sumerian was designed to work on multiple platforms. The VR and AR apps that you create in Sumerian will run in browsers that support WebGL or WebVR and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android.

During the preview period, we have been working with a broad spectrum of customers to put Sumerian to the test and to create proof of concept (PoC) projects designed to highlight an equally broad spectrum of use cases, including employee education, training simulations, field service productivity, virtual concierge, design and creative, and brand engagement. Fidelity Labs (the internal R&D unit of Fidelity Investments) was the first to use a Sumerian host to create an engaging VR experience. Cora (the host) lives within a virtual chart room. She can display stock quotes, pull up company charts, and answer questions about a company’s performance. This PoC uses Amazon Polly to implement text-to-speech and Amazon Lex for conversational chatbot functionality. Read their blog post and watch the video inside to see Cora in action:

Now that Sumerian is generally available, you have the power to create engaging AR, VR, and 3D experiences of your own. To learn more, visit the Amazon Sumerian home page and then spend some quality time with our extensive collection of Sumerian Tutorials.

Jeff;

 

Virginia Beach Police Want Encrypted Radios

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/virginia_beach_.html

This article says that the Virginia Beach police are looking to buy encrypted radios.

Virginia Beach police believe encryption will prevent criminals from listening to police communications. They said officer safety would increase and citizens would be better protected.

Someone should ask them if they want those radios to have a backdoor.