Tag Archives: testing

Thermal testing Raspberry Pi 4

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/thermal-testing-raspberry-pi-4/

Raspberry Pi 4 just got a lot cooler! The last four months of firmware updates have taken over half a watt out of idle power and nearly a watt out of fully loaded power. For The MagPi magazine, Gareth Halfacree gets testing.

Raspberry Pi 4 Model B

Raspberry Pi 4 launched with a wealth of new features to tempt users into upgrading: a more powerful CPU and GPU, more memory, Gigabit Ethernet, and USB 3.0 support. More processing power means more electrical power, and Raspberry Pi 4 is the most power-hungry member of the family.

The launch of a new Raspberry Pi model is only the beginning of the story. Development is continuous, with new software and firmware improving each board long after it has rolled off the factory floor.

Raspberry Pi 4 updates

Raspberry Pi 4 is no exception: since launch, there has been a series of updates which have reduced its power needs and, in doing so, enabled it to run considerably cooler. These updates apply to any Raspberry Pi 4, whether you picked one up on launch day or are only just now making a purchase.

This feature takes a look at how each successive firmware release has improved Raspberry Pi 4, using a synthetic workload designed – unlike a real-world task – to make the system-on-chip (SoC) get as hot as possible in as short a time as possible.

Read on to see what wonders a simple firmware update can work.

How we tested Raspberry Pi 4 firmware revisions

To test how well each firmware revision handles the heat, a power-hungry synthetic workload was devised to represent a worst-case scenario: the stress-ng CPU stress-testing utility places all four CPU cores under heavy and continuous load. Meanwhile, the glxgears tool exercises the GPU. Both tools can be installed by typing the following at the Terminal:

sudo apt install stress-ng mesa-utils

The CPU workload can be run with the following command:

stress-ng --cpu 0 --cpu-method fft

The command will run for a full day at default settings; to cancel, press CTRL+C on the keyboard.

To run the GPU workload, type:

glxgears -fullscreen

This will display a 3D animation of moving gears, filling the entire screen. To close it, press ALT+F4 on the keyboard.

For more information on how both tools work, type:

man stress-ng
man glxgears

During the testing for this feature, both of the above workloads are run simultaneously for ten minutes. Afterwards, Raspberry Pi is allowed to cool for five minutes.

The thermal imagery was taken at idle, then again after 60 seconds of the stress-ng load alone.
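
For readers who want to reproduce a run like this, here is a minimal sketch. It assumes a Raspberry Pi running Raspbian with stress-ng installed and uses the vcgencmd utility to log temperature and ARM clock speed once per second; the ten-minute duration and the log file name are arbitrary choices, not details of the original test rig, and glxgears would be started separately on the desktop.

#!/bin/bash
# Sketch: run the synthetic CPU load for ten minutes while logging
# SoC temperature and ARM clock speed once per second.
LOG=thermal_log.csv
echo "elapsed_s,temp,arm_clock_hz" > "$LOG"

# Start the CPU workload in the background, limited to 600 seconds
stress-ng --cpu 0 --cpu-method fft --timeout 600s &
STRESS_PID=$!

for i in $(seq 1 600); do
    TEMP=$(vcgencmd measure_temp | cut -d= -f2)        # e.g. 58.1'C
    CLOCK=$(vcgencmd measure_clock arm | cut -d= -f2)  # in Hz
    echo "$i,$TEMP,$CLOCK" >> "$LOG"
    sleep 1
done

wait "$STRESS_PID"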

Baseline test: Raspberry Pi 3B+

Already well established, Raspberry Pi 3 Model B+ was the device to beat

Before Raspberry Pi 4 came on the scene, Raspberry Pi 3 Model B+ was the must-have single-board computer. Benefiting from all the work that had gone into the earlier Raspberry Pi 3 Model B alongside improved hardware, Raspberry Pi 3B+ was – and still is – a popular device. Let’s see how well it performs before testing Raspberry Pi 4.

Power draw

An efficient processor and an improved design for the power circuitry compared to its predecessor help keep Raspberry Pi 3B+ power draw down: at idle, the board draws just 1.91W; when running the synthetic workload, that increases to 5.77W.

Thermal imaging


A thermal camera shows where the power goes. At idle, the system-on-chip is relatively cool while the combined USB and Ethernet controller to the middle-right is a noticeable hot spot; at load, measured after 60 seconds of a CPU-intensive synthetic workload, the SoC is by far the hottest component at 58.1°C.

Thermal throttling

This chart measures Raspberry Pi 3B+ CPU speed and temperature during a ten-minute power-intensive synthetic workload. The test runs on both the CPU and GPU, and is followed by a five-minute cooldown. Raspberry Pi 3B+ quickly reaches the ‘soft throttle’ point of 60°C, designed to prevent the SoC hitting the hard-throttle maximum limit of 80°C, and the CPU remains throttled at 1.2GHz for the duration of the benchmark run.
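
To check whether a board has throttled during a run like this, the firmware also exposes a throttle-status bitmask. The example output and bit meanings below follow the commonly documented interpretation; confirm them against the current Raspberry Pi documentation for your firmware.

vcgencmd get_throttled
# throttled=0x20000   (example output)
# bit 0  - under-voltage detected now
# bit 2  - currently throttled
# bit 3  - soft temperature limit active
# bit 18 - throttling has occurred since boot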

Raspberry Pi 4 Launch Firmware

The fastest Raspberry Pi ever made demanded the most power

Raspberry Pi 4 Model B launched with a range of improvements over Raspberry Pi 3B+, including a considerably more powerful CPU, a new GPU, up to four times the memory, and USB 3.0 ports. All that new hardware came at a cost: higher power draw and heat output. So let’s see how Raspberry Pi 4 performed at launch.

Power Draw

There’s no denying it, Raspberry Pi 4 was a hungry beast at launch. Even idling at the Raspbian desktop, the board draws 2.89W, hitting a peak of 7.28W under a worst-case synthetic CPU and GPU workload – a hefty increase over Raspberry Pi 3 B+.

Thermal Imaging


Thermal imaging shows that Raspberry Pi 4, using the launch-day firmware, runs hot even at idle, with hot spots at the USB controller to the middle-right and power-management circuitry to the bottom-left. Under a heavy synthetic load, the SoC hits 72.1°C by the 60-second mark.

Thermal Throttling

Raspberry Pi 4 manages to go longer than Raspberry Pi 3 B+ before the synthetic workload causes it to throttle; but throttle it does after just 65 seconds. As the workload runs, the CPU drops from 1.5GHz to a stable 1GHz, then dips as low as 750MHz towards the end.

Raspberry Pi 4 VLI Firmware

USB power management brings some relief for Raspberry Pi heat

The first major firmware update developed for Raspberry Pi 4 brought power management features to the Via Labs Inc. (VLI) USB controller. The VLI controller is responsible for handling the two USB 3.0 ports, and the firmware update allowed it to run cooler.

Power Draw

Even without anything connected to Raspberry Pi 4’s USB 3.0 ports, the VLI firmware upgrade has a noticeable impact: idle power draw has dropped to 2.62W, while the worst-case draw under a heavy synthetic workload sits at 7.01W.

Thermal Imaging


The biggest impact on heat is seen, unsurprisingly, on the VLI chip to the middle-right; the VLI firmware helps keep the SoC in the centre and the power-management circuitry at the bottom-left cooler than the launch firmware. The SoC reached 71.4°C under load – a small, but measurable, improvement.

Thermal Throttling

Enabling power management on the VLI chip has a dramatic impact on performance in the worst-case synthetic workload: the throttle point is pushed back to 77 seconds, the CPU spends more time at its full 1.5GHz speed, and it doesn’t drop to 750MHz at all. The SoC also cools marginally more rapidly at the end of the test.

Raspberry Pi 4 VLI, SDRAM firmware

With VLI tamed, it’s the memory’s turn now

The next firmware update, designed to be used alongside the VLI power management features, changes how Raspberry Pi 4’s memory – LPDDR4 SDRAM – operates. While having no impact on performance, it helps to push the power draw down still further at both idle and load.

Power Draw

As with the VLI update, the SDRAM update brings a welcome drop in power draw at both idle and load. Raspberry Pi 4 now draws 2.47W at idle and 6.79W running a worst-case synthetic load – a real improvement from the 7.28W at launch.

Thermal Imaging


Thermal imaging shows the biggest improvement yet, with both the SoC and the power-management circuitry running considerably cooler at idle after the installation of this update. After 60 seconds of load, the SoC is noticeably cooler at 68.8°C – a drop of nearly 3°C over the VLI firmware alone.

Thermal Throttling

A cooler SoC means better performance: the throttle point under the worst-case synthetic workload is pushed back to 109 seconds, after which Raspberry Pi 4 continues to bounce between full 1.5GHz and throttled 1GHz speeds for the entire ten-minute benchmark run – bringing the average speed up considerably.

Raspberry Pi 4 VLI, SDRAM, Clocking, and Load-Step Firmware

September 2019’s firmware update includes several changes, while bringing with it the VLI power management and SDRAM firmware updates. The biggest change is how the BCM2711B0 SoC on Raspberry Pi 4 increases and decreases its clock-speed in response to demand and temperature.

Power Draw

The September firmware update has incremental improvements: idle power draw is down to 2.36W and load under the worst-case synthetic workload to a peak of 6.67W, all without any reduction in raw performance or loss of functionality.

Thermal Imaging


Improved processor clocking brings a noticeable drop in idle temperature throughout the circuit board. At load, everything’s improved – the SoC peaked at 65°C after 60 seconds of the synthetic workload, while both the VLI chip and the power-management circuitry are clearly cooler than under previous firmwares.

Thermal Throttling

With this firmware, Raspberry Pi 4’s throttle point under the worst-case synthetic workload is pushed back all the way to 155 seconds – more than double the time the launch-day firmware took to hit the same point. The overall average speed is also brought up, thanks to more aggressive clocking back up to 1.5GHz.

Raspberry Pi 4 Beta Firmware

Currently in testing, this beta release is cutting-edge

Nobody at Raspberry Pi is resting on their laurels. Beta firmware is in testing and due for public release soon. It brings with it many improvements, including finer-grained control over SoC operating voltages and optimised clocking for the HDMI video state machines.

To upgrade your Raspberry Pi to the latest firmware, open a Terminal window and enter:

sudo apt update
sudo apt full-upgrade

Now restart Raspberry Pi using:

sudo shutdown -r now
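
To confirm which firmware the board is running after the reboot, the build date reported by the VideoCore firmware is a quick check (the exact output format varies between releases):

vcgencmd version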

Power Draw

The beta firmware decreases power draw at idle to reduce overall power usage, while tweaking the voltage of the SoC to drop power draw at load without harming performance. The result: a drop to 2.1W idle, and 6.41W at load – the best yet.

Thermal Imaging


The improvements made at idle are clear to see on thermal imaging: the majority of Raspberry Pi 4’s circuit board is below the bottom 35°C measurement point for the first time. After 60 seconds of load, there’s a smaller but still measurable improvement, with a peak measured temperature of 64.8°C.

Thermal Throttling

While Raspberry Pi 4 does still throttle with the beta firmware, thanks to the heavy demands of the synthetic workload used for testing, it delivers the best results yet: throttling occurs at the 177s mark while the new clocking controls bring the average clock speed up markedly. The firmware also allows Raspberry Pi 4 to up-clock more at idle, improving the performance of background tasks.

Keep cool with Raspberry Pi 4 orientation

Firmware upgrades offer great gains, but what about putting Raspberry Pi on its side?

While running the latest firmware will result in considerable power draw and heat management improvements, there’s a trick to unlock even greater gains: adjusting the orientation of Raspberry Pi. For this test, Raspberry Pi 4 with the beta firmware installed was stood upright with the GPIO header at the bottom and the power and HDMI ports at the top.

Thermal Throttling

Simply moving Raspberry Pi 4 into a vertical orientation has an immediate impact: the SoC idles around 2°C lower than the previous best and heats a lot more slowly – allowing it to run the synthetic workload for longer without throttling and maintain a dramatically improved average clock speed.

There are several factors at work: having the components oriented vertically improves convection, allowing the surrounding air to draw the heat away more quickly, while lifting the rear of the board from a heat-insulating desk surface dramatically increases the available surface area for cooling.

Throttle Point Timing

This chart shows how long it took to reach the throttle point under the synthetic workload. Raspberry Pi 3B+ sits at the bottom, soft-throttling after just 19 seconds. Each successive firmware update for Raspberry Pi 4, meanwhile, pushes the throttle point further and further – though the biggest impact can be achieved simply by adjusting Raspberry Pi’s orientation.

Real World Testing

Synthetic benchmarks aside, how do the boards perform with real workloads?

Looking at the previous pages, it’s hard to get a real idea of the difference in performance between Raspberry Pi 3B+ and Raspberry Pi 4. The synthetic benchmark chosen for the thermal throttle tests performs power-hungry operations which are rarely seen in real-world workloads, and repeats them over and over again with no end.

Compiling Linux

In this test, both Raspberry Pi 3B+ and Raspberry Pi 4 are given the task of compiling the Linux kernel from its source code. It’s a good example of a CPU-heavy workload which occurs in the real world, and is much more realistic than the deliberately taxing synthetic workload of earlier tests.

With this workload, Raspberry Pi 4 easily emerges the victor. Despite its CPU running only 100MHz faster than Raspberry Pi 3B+ at full speed, it is considerably more efficient per clock; that, combined with its ability to run without hitting its thermal throttle point, lets it complete the task in nearly half the time.
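
For anyone who wants to run a comparable real-world test, a rough sketch of timing a kernel build on the board itself follows. It assumes the Raspberry Pi kernel source from GitHub and the build dependencies listed below; the defconfig and job count are illustrative choices, not details taken from the article's test setup.

# Install build dependencies (package names assumed for Raspbian)
sudo apt install git bc bison flex libssl-dev make

# Fetch the Raspberry Pi kernel source (shallow clone to save time and space)
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux

# Configure for Raspberry Pi 4 and time a four-job build
make bcm2711_defconfig
time make -j4 zImage modules dtbs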

Kernel compile: Raspberry Pi 3B+

Raspberry Pi 3B+ throttles very early on in the benchmark compilation test and remains at a steady 1.2GHz until a brief period of cooling, as the compiler switches from a CPU-heavy workload to a storage-heavy workload, allows it to briefly spike back to its 1.4GHz default again. Compilation finished in 5097 seconds – one hour, 24 minutes, and 57 seconds.

Kernel compile: Raspberry Pi 4 model B

The difference between the synthetic and real-world workloads is clear to see: at no point during the compilation did Raspberry Pi 4 reach a high enough temperature to throttle, remaining at its full 1.5GHz throughout – bar, as with Raspberry Pi 3 B+, a brief period when a change in compiler workload allowed it to drop to its idle speeds. Compilation finished in 2660 seconds – 44 minutes and 20 seconds.

Get The MagPi magazine issue 88 now

This article is from today’s brand-new issue of The MagPi magazine, the official Raspberry Pi magazine. Buy it from all good newsagents, Raspberry Pi Press, and the Raspberry Pi Store, Cambridge.

Subscribe to pay less per issue and support our work, or download the free PDF to give it a try first.

The post Thermal testing Raspberry Pi 4 appeared first on Raspberry Pi.

Building and testing polyglot applications using AWS CodeBuild

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/building-and-testing-polyglot-applications-using-aws-codebuild/

Prakash Palanisamy, Solutions Architect

Microservices are becoming the new normal, and it’s natural to use multiple different programming languages for different microservices in the same application. This blog post explains how easy it is to build polyglot applications, test them, and package them for deployment using a single AWS CodeBuild project.

CodeBuild adds support for polyglot builds using runtime-versions. With CodeBuild, you can specify multiple runtimes in the buildspec file as part of the install phase. Runtimes can be different major versions of the same programming language, or different programming languages altogether. For a complete list of supported runtime versions and how to specify them in the buildspec file, see the build specification reference for CodeBuild.

This post provides an example of a microservices application composed of three microservices, each written in a different programming language. The complete code is available in the GitHub repository aws-codebuild-polyglot-application.

  • A microservice providing a greeting message (microservice-greeting) is written in Python.
  • A microservice obtaining users’ details (microservice-name) from an Amazon DynamoDB table is written using Node.js.
  • Another microservice (microservice-webapp), which makes API calls to the above-mentioned name and greeting services and provides a personalized greeting, is written in Java.

All of these microservices are deployed as serverless functions in AWS Lambda and fronted by an API interface using Amazon API Gateway. Automated packaging and deployment use the AWS Serverless Application Model (AWS SAM) and the AWS SAM CLI. The complete application is deployed locally using DynamoDB Local and the sam local command. Automated UI testing uses the built-in headless browsers in the standard CodeBuild containers. Because DynamoDB Local and sam local rely on the Docker runtime, privileged mode must be enabled in the CodeBuild project.

The example buildspec file contains the following code:

version: 0.2

env:
  variables:
    ARTIFACT_BUCKET: "polyglot-codebuild-artifact-eu-west-1"

phases:
  install:
    runtime-versions:	
      nodejs: 10
      java: corretto8
      python: 3.7
      docker: 18
    commands:
      - pip3 install --upgrade aws-sam-cli selenium pylint
  pre_build:
    commands:
      - python --version
      - pylint $CODEBUILD_SRC_DIR/microservices-greeting/greeting_handler.py
      - own_ip=`hostname -i`
      - sed -ie "s/CODEBUILD_IP/$own_ip/g" $CODEBUILD_SRC_DIR/env-webapp.json
  build:
    commands:
      - node --version
      - cd $CODEBUILD_SRC_DIR/microservices-name
      - npm install
      - npm run build
      - java -version
      - cd $CODEBUILD_SRC_DIR/microservices-webapp
      - mvn clean package -Plambda
      - cd $CODEBUILD_SRC_DIR
      - |
        sam package --template-file greeting-sam.yaml \
        --output-template-file greeting-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
      - |
        sam package --template-file name-sam.yaml \
        --output-template-file name-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
      - |
        sam package --template-file webapp-sam.yaml \
        --output-template-file webapp-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - echo "Starting local DynamoDB instance"
      - docker network create codebuild-local
      - |
        docker run -d -p 8000:8000 --network codebuild-local \
        --name dynamodb amazon/dynamodb-local
      - |
        aws dynamodb create-table --endpoint-url http://127.0.0.1:8000 \
        --table-name NamesTable \
        --attribute-definitions AttributeName=Id,AttributeType=S \
        --key-schema AttributeName=Id,KeyType=HASH \
        --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
      - |
        aws dynamodb --endpoint-url http://127.0.0.1:8000 batch-write-item \
        --cli-input-json file://ddb_names.json
      - echo "Starting APIs locally..."
      - |
        nohup sam local start-api --template greeting-sam.yaml \
        --port 3000 --host 0.0.0.0 &> greeting.log &
      - |
        nohup sam local start-api --template name-sam.yaml \
        --port 3001 --host 0.0.0.0 --env-vars env.json \
        --docker-network codebuild-local &> name.log &
      - |
        nohup sam local start-api --template webapp-sam.yaml \
        --port 3002 --host 0.0.0.0 \
        --env-vars env-webapp.json &> webapp.log &
      - cd $CODEBUILD_SRC_DIR/website
      - nohup python3 -m http.server 8080 &>webserver.log &
      - sleep 20
      - echo "Starting headless UI testing..."
      - python $CODEBUILD_SRC_DIR/tests/testsuite.py
artifacts:
  files:
    - greeting-sam-packaged.yaml
    - name-sam-packaged.yaml
    - webapp-sam-packaged.yaml

In the env section, an environment variable named ARTIFACT_BUCKET is declared and initialized. It contains the name of the S3 bucket that the sam package command uses to store the packaged deployment artifacts referenced by the generated AWS SAM templates. This variable can be overridden either in the build project or while running the build. For more information about the order of precedence, see Run a Build (Console).

In the install phase, there are two sequences: runtime-versions and commands. Four runtimes are specified: three programming language runtimes (Node.js, Java, and Python), which are required to build the microservices, and the Docker runtime, which is used to deploy the application locally with the sam local command. In the commands sequence, the required packages are installed and updated.

In the pre_build phase, use pylint to lint the Python-based microservice. Get the IP address of the CodeBuild container, to be used later while running the application locally.

In the build phase, use the command sequence to build individual microservices based on their programming language. Use the sam package command to create the Lambda deployment package and generate an updated AWS SAM template.

In the post_build phase, start a DynamoDB Local container as a daemon, create a table in that database, and insert the user details into that table. After that, start a local instance of the greeting, name, and web app microservices using the sam local command. The repository also contains a simple static website that can make API calls to these microservices and provide a user-friendly response. This website is hosted locally using Python’s built-in HTTP server. After the application starts, execute the automated UI testing by running the testsuite.py script. This test script uses Selenium WebDriver. It runs UI tests on headless Firefox and Chrome browsers to validate whether the local deployment of the application works as expected.
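
The repository's testsuite.py is not reproduced in this post, but a minimal sketch of the same idea, driving a headless browser against the locally hosted site, might look roughly like the following. The URL, element ID, and expected text are placeholders, not values taken from the sample application, and chromedriver is assumed to be available on the PATH.

# Sketch of a headless UI check with Selenium (Python, Selenium 3-style API).
# The URL, element ID, and expected text are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")
options.add_argument("--no-sandbox")   # often needed inside build containers

driver = webdriver.Chrome(options=options)   # chromedriver assumed on PATH
try:
    driver.get("http://127.0.0.1:8080/index.html")     # locally hosted site
    greeting = driver.find_element_by_id("greeting")   # hypothetical element ID
    assert "Hello" in greeting.text, "unexpected greeting text"
    print("UI check passed")
finally:
    driver.quit()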

If all the phases complete successfully, the updated AWS SAM template files are stored as artifacts, as specified in the artifacts section.

The repository also contains an AWS CloudFormation template that creates a pipeline on AWS CodePipeline with AWS CodeCommit as the source. It builds the application using the previously mentioned buildspec in CodeBuild, and deploys the application using AWS CloudFormation.

Conclusion

In this post, you learned how to use the runtime-versions capability in CodeBuild to build a polyglot application. You also learned about automated UI testing using the built-in headless browsers in CodeBuild. With the polyglot build capabilities of CodeBuild, you can build applications written in multiple programming languages from a single build project, instead of creating and maintaining several build projects. And because all the dependent components are built as part of the same project, it is also easier to test the integration between them.

Loki, a dynamic mock server for HTTP/TCP testing

Post Syndicated from Grab Tech original https://engineering.grab.com/loki-dynamic-mock-server-http-tcp-testing

Background

In a previous article we introduced Mockers – an innovative tool for local box testing at Grab. Mockers used a Shift Left testing strategy, making testing more effective and cheaper for development teams. Mockers’ popularity and success motivated us to create Loki – a one-stop dynamic mock server for local box testing of mobile apps.

There are some unique challenges in mobile app testing at Grab. End-to-end testing of an app is difficult due to its heavy dependency on backend services and other apps. The staging environment, which hosts a plethora of backend services, is tough to manage and maintain. Issues such as staging downtime, configuration mismatches, and data corruption can affect staging, adding to the testing woes. Moreover, our apps are fairly complex, using multiple transport protocols such as HTTP, HTTPS, and TCP for various business flows.

The business flows are also complex, requiring exhaustive setup such as credit card payment configuration and location spoofing, which results in high maintenance costs for automated testing. Loki simulates these flows so that developers can easily test use cases that take longer to set up in a real backend staging environment.

Loki is our attempt to address the challenges in mobile app testing by turning every developer's local box into a full-fledged pseudo backend environment where all mobile workflows can be tested without any external dependencies. It mocks backend services on developer local boxes, decoupling the mobile apps from real backend services, which provides several advantages such as:

No need to deploy frequently to staging

Testing is blocked if the app receives a bad response from staging. In these cases, code changes have to be deployed on staging to fix issues before resuming tests. In contrast, using Loki lets developers continue testing without any immediate need to deploy code changes to staging.

Allows parallel frontend and backend development

Loki acts as a mock backend service when the real backend is still evolving. It lets the frontend development run in parallel with backend development.

Overcome time limitations

In a one-week regression-and-release cycle, testing time is limited. However, the application's UI rendering and functionality still need reasonable testing. Loki lets developers concentrate on testing in the available time instead of fixing dependencies on backend services.

Loki – Grab’s solution to simplify mobile apps testing

At Grab, we have multiple mobile apps that are dependent on each other. For example, our Passenger and Driver apps are two sides of a coin; the driver gets a job card only when a passenger requests a booking. These apps are developed by different teams, each with its own release cycle. This can make it tricky to confidently and repeatedly test the whole business flow across apps. Apps also depend on multiple backend services to execute a booking or food order and communicate over different protocols.

Here’s a look at how our mobile apps interact with backend services over different protocols:

Mobile app interaction with backend services

Loki is a dynamic mock server, written in Golang, running in a Docker container on the local box or in CI. It is easy to set up and run through standard Docker commands. In the context of mobile app testing, it plays the role of backend services, so you no longer need to set up an extensive staging environment.
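
Loki is an internal Grab tool, so the image name and port numbers below are purely illustrative; the point is only that bringing it up is an ordinary docker run, with one port for the expectation/HTTP API and another for the mocked TCP endpoint.

# Illustrative only: the image name and ports are hypothetical placeholders.
# One port serves the HTTP expectation API and HTTP mocks, the other the mocked TCP endpoint.
docker run -d --name loki -p 8181:8181 -p 9090:9090 grab/loki-mock-server:latest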

The Loki architecture looks like this:

Loki architecture

The technical challenges we had to overcome

We wanted a comprehensive mocking solution so that teams don't need to integrate multiple tools to achieve independent testing. Mocking TCP turned out to be the most challenging because:

  • It is a long running client-server connection, and it doesn’t follow an HTTP-like request/response pattern.
  • Messages can also be sent to the app without an incoming request, so we had to expose a way in Loki to set a mock expectation that pushes messages to the app without any request triggering it.
  • As TCP is a long running connection, we needed a way to delimit incoming requests so we know when we can truncate and deserialize the incoming request into JSON.

We engineered the Loki backend to support both HTTP and TCP protocols on different ports. Yet, the mock expectations are set up using RESTful APIs over HTTP for both protocols. A single point of entry for setting expectations made it more intuitive for our developers.

An in-memory cron implementation pushes scheduled messages to the app over a TCP connection. This enabled testing of complex use cases such as drivers getting new job cards, driver and passenger chat workflows, etc. The delimiter for TCP protocol is configurable at start up, so each team can decide when to truncate the request.

To enable Loki on our CI, we had to reduce its memory footprint, so we built Loki with pluggable storage. MySQL is used when running locally; on CI, we switch seamlessly to an in-memory cache or Redis.

For testing apps locally, developers must validate complex use cases such as:

  • Payment related flows, which require the response to include the same payment ID as sent in the request. This is a case of simple mapping of request fields in the response JSON.

  • Flows requiring runtime logic execution. For example, a job card sent to a driver must have a valid timestamp, requiring runtime computation on Loki.

To support these cases and many more, we added JavaScript injection capability to Loki. So, when we set an expectation for an HTTP request/response pair or for TCP events, we can specify JavaScript for computing the dynamic response. This is executed in a sandbox by an in-house JS execution library.

Grab follows a transactional workflow for bookings. Over the life of a ride, bookings go through different statuses. So, Loki had to address multiple HTTP requests to the same endpoint returning different responses. This feature is required for successfully mocking a whole ride end-to-end.

Loki uses an HTTP API “httpTimesAndOrder” for this feature. For example, using “httpTimesAndOrder”, you can configure the same status endpoint (/ride/status) to return different ride statuses such as “PICKING” for the first five requests, “IN_RIDE” for the next three requests, and so on.

Now, let’s look at how to use Loki to mock HTTP requests and TCP events.

Mocking HTTP requests

To mock HTTP requests, developers first point their app to send requests to the Loki mock server. Then, they set up expectations for all requests sent to the Loki mock server.

Loki mock server

For example, the Passenger app calls an HTTP dependency GET /closeby/drivers/ to get nearby drivers. To mock it with Loki, you set an expected response on the Loki mock server. When the GET /closeby/drivers/ request is actually made from the Passenger app, Loki returns the set response.

This snippet shows how to set an expected response for the GET /closeby/drivers/ request:

Loki API: POST `/api/v1/expectations`

Request Body :

{
  "uriToMock": "/closeby/drivers",
  "method": "GET",
  "response": {
    "drivers": [
      1001,
      1002,
      1010
    ]
  }
}
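
Setting this expectation is just an HTTP POST to Loki's API; a curl sketch is shown below, with the host and port as placeholders for wherever your Loki container is listening.

# Host and port are placeholders for the local Loki container.
curl -X POST http://localhost:8181/api/v1/expectations \
  -H "Content-Type: application/json" \
  -d '{
        "uriToMock": "/closeby/drivers",
        "method": "GET",
        "response": { "drivers": [1001, 1002, 1010] }
      }'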

Workflow for setting expectations and receiving responses

Workflow for setting expectations and receiving responses

Mocking TCP events

Developers point their app to Loki over a TCP connection and set up the TCP expectations. Loki then generates scheduled events such as sending push messages (job cards, notifications, etc) to the apps pointing at Loki.

For example, if the Driver app, after it starts, wants to get a job card, you can set an expectation in Loki to push a job card over the TCP connection to the Driver app after a scheduled time interval.

This snippet shows how to set the TCP expectation and schedule a push message:

Loki API: POST `/api/v1/tcp/expectations/pushmessage`

Request Body :

{
  "name": "samplePushMsg",
  "msgSequence": [
    {
      "messages": {
        "body": {
          "jobCardID": 1001
        }
      }
    },
    {
      "messages": {
        "body": {
          "jobCardID": 1002
        }
      }
    }
  ],
  "schedule": "@every 1m"
}

Workflow for scheduling a push message over TCP

Workflow for scheduling a push message over TCP

Some example use cases

Now that you know about Loki, let’s look at some example use cases.

Generating a custom response at runtime

Our first example is customizing a runtime response for both HTTP and TCP requests. This is helpful when developers need dynamic responses to requests. For example, you can add parameters from the request URL or request body to the runtime response.

It’s simple to implement this with a JavaScript function. Assume you want to embed a message parameter in the request URL to the response. To do this, you first use a POST method to set up the expectation (in JSON format) for the request on Loki:

Loki API: POST `/api/v1/feature/expectations`

Request Body :

{
  "expectations": [{
    "name": "Sample call",
    "desc": "v1/test/{name}",
    "tags": "v1/test/{name}",
    "resource": "/v1/test?name=user1",
    "verb": "POST",
    "response": {
      "body": "{ \"msg\": \"Hi \"}",
      "status": 200
    },
    "clientOptions": {
"javascript": "function main(req, resp) { var url = req.RequestURI; var captured = /name=([^&]+)/.exec(url)[1]; resp.msg =  captured ? resp.msg + captured : resp.msg + 'myDefaultValue'; return resp }"
    },
    "isActive": 1
  }]
}

When Loki receives the request, the JavaScript function specified in the clientOptions key adds the name parameter to the response at runtime. For example, this is the request's fixed response:

{
    "msg": "Hi "
}

But, after using the JavaScript function to add the URL parameter, the dynamic response is:

{
    "msg": "Hi user1"
}

Similarly, you can use JavaScript to add other dynamic responses such as modifying the response’s JSON array, adding parameters to push messages, etc.

Defining a response sequence for mocked API endpoints

Here’s another interesting example – defining the response sequence for API endpoints.

A response sequence is useful when you need different responses from the same API endpoint. For example, a status endpoint should return different ride statuses such as ‘allocating’, ‘allocated’, ‘picking’, etc. depending on the stage of a ride.

To do this, developers set up their HTTP expectations on Loki. Then, they easily define the response sequence for an API endpoint using a Loki POST method.

In this example:

  • times – specifies the number of times the same response is returned.
  • after – specifies one or more expectations that must match before a specified expectation is matched.

Here, the expectations are matched in this sequence when a request is made to an endpoint – Allocating > Allocated > Pickuser > Completed. Further, Completed is set to two times, so Loki returns this response two times.

Loki API: POST `/api/v1/feature/sequence`

Request Body :

{
  "httpTimesAndOrder": [
      {
          "name": "Allocating",
          "times": 1
      },
      {
          "name": "Allocated",
          "times": 1,
          "after": ["Allocating"]
      },
      {
          "name": "Pickuser",
          "times": 1,
          "after": ["Allocated"]
      },
      {
          "name": "Completed",
          "times": 2,
          "after": ["Pickuser"]
      }
  ]
}

In conclusion

Since Loki's inception, we have set up a full-range CI with proper end-to-end app UI tests and, to a great extent, decoupled our app releases from the staging backend. This has improved delivery cycles, helped us catch bugs faster, and enabled more exhaustive testing. Moreover, both developers and QAs can easily play with the apps to perform exploratory testing as well as manual functional validation. Teams are also using Loki to run automated scripts (Espresso and XCUITest) for validating the mobile app pages.

Loki’s adoption is growing steadily at Grab. With our frequent release of new mobile app features, Loki helps teams meet our high quality bar and achieve huge productivity gains.

If you have any feedback or questions on Loki, please leave a comment.

Anatomy of a product quality issue: PoE HAT

Post Syndicated from James Adams original https://www.raspberrypi.org/blog/poe-hat-revision/

One of the neat new features of the Raspberry Pi 3 Model B+ is its support for IEEE 802.3af Power-over-Ethernet (PoE). This standard allows up to 13W of power to be delivered over the twisted pairs in an Ethernet cable without interfering with the transmission of data. The Raspberry Pi board itself provides a PoE-capable Ethernet jack and circuit protection components; the power regulation electronics, which would be too costly and bulky to include on the main board, live on a separate HAT.

Raspberry Pi PoE HAT Power over ethernet

The Raspberry Pi 3B+ wearing a PoE HAT

When we announced the 3B+, we revealed that an official Raspberry Pi PoE HAT was in the works and, after a few unforeseen production delays, we released this HAT at the end of August. Feedback was, and remains, generally very positive; but fairly quickly, we started to see some reports from users who were experiencing issues.

The problem

The problem they reported was this: when powering certain Raspberry Pi units via the PoE HAT, it was not possible to draw the full rated current from the USB ports.

Our 5V USB output, denoted VBUS, is fed by the main 5V rail via a current-limiting switch. This switch is designed to protect the system by detecting short-circuit, over-current, or reverse-voltage events, and disconnecting the USB ports in response. Our current-limiting switch is set to a limit of just over 1A.

Despite the PoE HAT’s ability to supply up to 2.5A, the experiments we ran in response to the reports suggested that, when it was used to supply some boards, the USB supply would trip out at a much lower current. Mice and keyboards worked fine, but higher-current devices such as wireless dongles and hard disks would fail.

Our initial theory was that the PoE HAT was injecting noise into the Pi via the 5V rail, and that this was somehow upsetting the switch. However, we were able to rule this out, since we found no evidence of high-frequency noise at the input to the switch. Another theory was that the flyback transformer’s close physical proximity to the switch was somehow coupling noise in. But we were able to rule this out as well: we showed that the behaviour persisted when the HAT was connected using a right-angle header, which moves the power electronics away from the Raspberry Pi.

What was happening?

The PoE HAT works by converting the incoming 48V from the Ethernet lines to 5V using a flyback transformer. In simple terms, the primary side of the transformer is switched across the 48V, and energy is stored in the transformer in the form of a magnetic field. The primary is then disconnected and the magnetic field collapses. This changing magnetic field induces a voltage (scaled based on the transformer turns ratio) in the secondary, which is rectified by a schottky diode and output capacitance. This output capacitance is formed from the output capacitors on the PoE HAT itself, the capacitors on the Raspberry Pi 5V rail, and, when the switch is on, the VBUS reservoir capacitors.

The switching frequency of the flyback transformer is relatively low (~100 kHz). This means that when the system is under load, each switching cycle must transfer a relatively large amount of energy. During each cycle, the 5V rail is discharged according to the load on the system, and charged up again by the flyback’s secondary, dumping more energy into the caps. In each cycle, a spike of high current is pushed through the output diode into the capacitors.

To cut a long story short, putting a current probe on the input to the switch showed large current spikes as energy from the flyback made its way into the VBUS reservoir capacitors. This was expected. However, it turned out that the switch was erroneously registering these spikes as true over-current events. The switch is supposed to have a filter that allows it to ignore brief spikes, but we discovered that only one of the two approved versions of the switch did this correctly.

Current into switch (yellow) and VBUS voltage (blue)

If it’s not been tested, it’s broken

It’s a truism that if you don’t test an aspect of a design, it will certainly be broken. Those of us with a Broadcom background sometimes refer to this as Alan Morgan’s rule, after its most enthusiastic proponent.

Extensive testing over all configurations, operating parameters, and use cases is the only way to minimise the likelihood of releasing a product with a hardware issue. Even relatively simple hardware can end up catching you out by throwing up some unexpected bug or issue. And even the big guys with huge development teams and test labs occasionally mess things up — anyone remember the Pentium FDIV bug?

We made several mistakes with the first version of the PoE HAT:

  • USB load testing was performed using boards that had the working switch
  • Our field testing programme was abbreviated because the product was late
  • We didn’t inquire as to whether our field testers were using high-current peripherals (they weren’t)

It’s embarrassing to have released a product with a bug like this, but it’s a lesson well-learned, and we will be improving our internal processes to prevent a recurrence.

The solution

Fortunately, this bug turned out to be easy to fix. We designed an L-C filter to apply further smoothing to the output current from the HAT. The filter consists of a little extra input and output capacitance and a 4.7µH inductor (chosen to have a suitable current rating and DC resistance), as well as a 330mΩ resistor in parallel to provide damping. We were even able to wrap the mod up in a little mezzanine PCB that fits neatly underneath the board.
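
As a rough sanity check on why this filter works, the corner frequency of the L-C section can be estimated. The 100µF output capacitance below is an assumed figure for illustration (the post doesn't give the exact value), but with any plausible capacitance the corner sits far below the roughly 100kHz switching frequency, so the current spikes are smoothed out before they reach the switch.

import math

L = 4.7e-6   # 4.7uH inductor, as described above
C = 100e-6   # assumed output capacitance, for illustration only

f_corner = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"L-C corner frequency: {f_corner / 1000:.1f} kHz")   # roughly 7.3 kHz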

The original, un-modded board

Hand-modded board with L-C filter

Final board with mezzanine

Once we had confirmed that there was a problem with the PoE HAT, we took the product off sale, and recalled and reworked the outstanding units. We are now happy to announce that most Approved Resellers should now have the revised boards in stock. We believe that most people who have been affected by this issue have already returned their PoE HATs for a refund; if you’re experiencing issues and haven’t yet returned your product, you can get in touch with your reseller to arrange a replacement.

I’d like to thank the members of the Raspberry Pi engineering team, our contract manufacturing partners Taijie, our licensee partners and Approved Resellers, and also the community members who kindly tested prototypes of the fixed board design. This hasn’t been the easiest product launch in our history, but hopefully the lessons learned have set us up well for the future.

The post Anatomy of a product quality issue: PoE HAT appeared first on Raspberry Pi.

How to Test and Debug AWS CodeDeploy Locally Before You Ship Your Code

Post Syndicated from Kirankumar Chandrashekar original https://aws.amazon.com/blogs/devops/how-to-test-and-debug-aws-codedeploy-locally-before-you-ship-your-code/

AWS CodeDeploy is a powerful service for automating deployments to Amazon EC2, AWS Lambda, and on-premises servers. However, it can take some effort to get complex deployments up and running or to identify the error in your application when something goes wrong.

When I set up new deployments or debug existing ones, I like to test and debug locally for these reasons:

  • To speed up the iteration process.
  • To isolate potential issues.
  • To validate code.

You can test application code packages on any machine that has the CodeDeploy agent installed before you deploy it through the service. Likewise, to debug locally, you just need to install the CodeDeploy agent on any machine, including your local server or EC2 instance.

In this blog post, I will walk you through the steps to validate and debug a sample application package using the codedeploy-local command. You can find the sample package in this GitHub repository.

Prerequisites

Install the CodeDeploy agent on any supported instance type. For information, see Use the AWS CodeDeploy Agent to Validate a Deployment Package on a Local Machine in the AWS CodeDeploy User Guide.
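
If you need to install the agent yourself on an Amazon Linux test instance, the commonly documented steps look like the following; check the CodeDeploy User Guide for the correct bucket name for your Region before running them.

# Install the CodeDeploy agent on Amazon Linux; the us-east-1 bucket is assumed here.
# Replace it with the bucket for your Region.
sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto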

Step 1

Verify the CodeDeploy agent is installed and ready for local testing. By default, codedeploy-local is installed in the following locations:

On Amazon Linux, RHEL, or Ubuntu Server:

/opt/codedeploy-agent/bin/codedeploy-local

On Windows Server:

C:\ProgramData\Amazon\CodeDeploy\bin

For simplicity, I am creating an alias, codedeploy-local, for /opt/codedeploy-agent/bin/codedeploy-local so that I don't have to type the absolute path each time. This step is optional.

alias codedeploy-local='sudo /opt/codedeploy-agent/bin/codedeploy-local'

When I execute the codedeploy-local command on the Linux terminal, I get the following response from the agent, which indicates that the agent is installed:

[ec2-user@host ~]$ codedeploy-local
ERROR: Expecting appspec file at location /home/ec2-user/appspec.yml but it is not found there. Please either run the CLI from within a directory containing the appspec.yml file or specify a bundle location containing an appspec.yml file in its root directory

If you receive an error that the codedeploy-local command is not available or the package was not found, go back to the prerequisites and install the agent.

Step 2

To test the sample application package using the codedeploy-local command, I have to make sure that the application package is available on the local machine. The sample package I am testing here is an Apache (httpd)-based application.

Use wget to download the package to the local machine.

wget https://s3.amazonaws.com/aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip

Now that the sample package is available locally, I can either unzip the package or use the zip file for testing with the codedeploy-local command.

To test the zip file (archive) package (SampleApp_Linux.zip) with the codedeploy-local command, use the -l or --bundle-location option along with the -t or --type option as shown:

On Linux server:

codedeploy-local --bundle-location /home/ec2-user/CodeDeployPackage/SampleApp_Linux.zip -t zip --deployment-group my-deployment-group

On Windows server:

codedeploy-local --bundle-location C:/path/to/local/bundle.zip --type zip --deployment-group my-deployment-group

To test the unarchived package, either change directory (cd) into the top-level directory of the package or provide the absolute path to it.

The package can be executed by providing the absolute path to the content as shown here:

codedeploy-local --bundle-location /path/to/local/bundle/directory

Or by changing the directory (cd) to the location of the unarchived package and executing the following command:

codedeploy-local

Executing the codedeploy-local command in the directory where the sample package is unzipped shows whether the deployment was successful or failed.

Here is a successful deployment execution and result:

[ec2-user@host CodeDeployPackage]$ ls -a
.  ..  appspec.yml  index.html  LICENSE.txt  SampleApp_Linux.zip  scripts

[ec2-user@host CodeDeployPackage]$ codedeploy-local
Starting to execute deployment from within folder /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local
See the deployment log at /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/logs/scripts.log for more details
AppSpec file valid. Local deployment successful

Step 3

Check the codedeploy-local logs and the deployment archive.

In the previous step, I was able to see that the local deployment was successful. The output included:

  • The log location.
  • The location where the deployment-archive was uploaded. It will be used as a staging directory for that deployment.

Because the --deployment-group, -g option was not provided, a local deployment group folder was created in the following location:

/opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local

The following shows the listing of the files in the codedeploy-local deployment directory for a deployment:

[ec2-user@host ~]$ ls /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local
deployment-archive  logs

[ec2-user@host deployment-archive]$ ls -a /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/deployment-archive/
.  ..  appspec.yml  index.html  LICENSE.txt  SampleApp_Linux.zip  scripts

[ec2-user@host deployment-archive]$ ls -a /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/logs
.  ..  scripts.log

In the directory path generated for each deployment, default-local-deployment-group is the name of the deployment group and d-H3OZK261S-local is the deployment ID.

The scripts.log shows the execution logs for the codedeploy-local command for a deployment group and deployment ID. Here is an example of a scripts.log that shows the execution of each lifecycle event defined in the appspec.yml:

[ec2-user@host deployment-archive]$ cat /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/logs/scripts.log
2018-03-13 23:02:37 LifecycleEvent - ApplicationStop
2018-03-13 23:02:37 Script - scripts/stop_server
2018-03-13 23:02:37 [stdout]Stopping httpd: [  OK  ]
2018-03-13 23:02:37 LifecycleEvent - BeforeInstall
2018-03-13 23:02:37 Script - scripts/install_dependencies
2018-03-13 23:02:37 [stdout]Loaded plugins: priorities, update-motd, upgrade-helper
2018-03-13 23:02:37 [stdout]Package httpd-2.2.34-1.16.amzn1.x86_64 already installed and latest version
2018-03-13 23:02:37 [stdout]Nothing to do
2018-03-13 23:02:37 Script - scripts/start_server
2018-03-13 23:02:37 [stdout]Starting httpd: [  OK  ]
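
For reference, a minimal appspec.yml consistent with the hook executions seen in this log would look roughly like the following; the timeouts and runas user are typical values, not copied from the sample package.

version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root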

There is another log file in this location that comes in handy when deploying the code on the local machine:

/var/log/aws/codedeploy-agent/codedeploy-local.log

You can enable verbose logging in the codedeploy-agent configuration file by setting the parameter :verbose: to true.

By default, the location of the configuration file is:

Amazon Linux, RHEL, or Ubuntu Server instances

/etc/codedeploy-agent/conf/codedeployagent.yml

Windows Server

C:/ProgramData/Amazon/CodeDeploy/conf.yml
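
In either file, enabling verbose logging is a one-line change:

# codedeployagent.yml (excerpt)
:verbose: true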

Other features for debugging issues locally with codedeploy-local

The codedeploy-local command has other features that you can use to debug and troubleshoot issues.

Override the lifecycle hooks mentioned in the appspec.yml file

You can use codedeploy-local to override the lifecycle hooks provided in the appspec.yml. In this example, only the ApplicationStop lifecycle hook defined in the appspec.yml file will be executed. All other hooks will be ignored.

codedeploy-local -e ApplicationStop

In the same way, you can override the order in which the CodeDeploy agent executes multiple lifecycle hooks. This feature can help you determine and change the sequence before the deployment is performed on the server. For information, see AppSpec ‘hooks’ Section in the AWS CodeDeploy User Guide.

For example, this command executes the BeforeInstall lifecycle hook first and then executes the ApplicationStop lifecycle hook.

codedeploy-local -e BeforeInstall,ApplicationStop

Execute scripts specifically for codedeploy-local

If there are scripts that are used for local testing only and are not required for the CodeDeploy deployment, you can key off the $DEPLOYMENT_GROUP_NAME environment variable, which has a value of LocalFleet for local deployments (see the sketch after the variable list below).

Here are other environment variables and their values:

$APPLICATION_NAME: The location of the deployment package (for example, /home/ec2-user/CodeDeployPackage)

$DEPLOYMENT_ID: Unique per deployment (for example, d-LTVP5L6YY-local)

$DEPLOYMENT_GROUP_ID: The name of the deployment group. When the -g option is used for the command, this value will be passed. For example, in codedeploy-local -g testing, this value is testing. If this option is not set, the value of this environment variable is default-local-deployment-group

$LIFECYCLE_EVENT: The lifecycle hook that echoed this environment variable (for example, ApplicationStop)
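
Putting these variables together, a hook script can branch on whether it is running as a local deployment. Here is a small sketch; the echo statements and the skipped step are illustrative only.

#!/bin/bash
# Sketch: skip steps that only make sense on a real fleet when running locally.
if [ "$DEPLOYMENT_GROUP_NAME" == "LocalFleet" ]; then
    echo "Local deployment $DEPLOYMENT_ID: skipping fleet-only steps"
else
    echo "Deployment to group $DEPLOYMENT_GROUP_ID: running full $LIFECYCLE_EVENT steps"
    # fleet-only commands would go here
fi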

Override the CodeDeploy agent configuration

You can override the CodeDeploy agent configuration and use your own configuration file from a custom location. This makes it possible to test multiple configurations with local deployments by using the -c, --agent-configuration-file option when executing the codedeploy-local command. For the options to use, see AWS CodeDeploy Agent Configuration Reference in the AWS CodeDeploy User Guide.

By default, configuration files are stored in the following locations:

On Amazon Linux, RHEL, or Ubuntu Server:

/etc/codedeploy-agent/conf/codedeployagent.yml

On Windows Server:

C:/ProgramData/Amazon/CodeDeploy/conf.yml

Using a custom configuration helps when verbose logging is required for package testing. You can do this by using the -c or --agent-configuration-file option, without changing the default configuration file. Here is an example that shows the use of this option:

codedeploy-local -e BeforeInstall,ApplicationStop -c /<local-path-to-configuration-file>

For example, on Amazon Linux, RHEL, or Ubuntu Server instances, when the config file is in /etc/codedeployagent.yml, the command is:

codedeploy-local -e BeforeInstall,ApplicationStop -c /etc/codedeployagent.yml

For example, on Windows Server instances, when the config file is in C:/ProgramData/conf.yml, the command is:

codedeploy-local -e BeforeInstall,ApplicationStop -c C:/ProgramData/conf.yml

Point to an application package in an S3 bucket or GitHub repository

If the application package is stored in an S3 bucket or GitHub repository, codedeploy-local can be executed without downloading the file onto the local machine. You can do this by using the -l, --bundle-location and -t, --type options with the codedeploy-local command.

Here is an example for deploying a sample application package located in an S3 bucket:

codedeploy-local -l s3://aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip -t zip

Here is an example for deploying a sample application package from a public GitHub repository:

codedeploy-local --bundle-location https://api.github.com/repos/awslabs/aws-codedeploy-sample-tomcat/zipball/master --type zip

If you use GitHub, make sure that the application package with the appspec.yml is in the root of the directory. If these contents are in a subfolder path, download the package to the local instance or server and then:

  • Execute codedeploy-local from the directory where the file exists.

-OR-

  • Use the -t, --type option with the value directory and the -l, --bundle-location option with the local path.

Troubleshooting common errors using codedeploy-local

The codedeploy-local command can be used to detect if the appspec.yml is in valid YAML format. If the format is invalid, you get the following error:

/usr/share/ruby/vendor_ruby/2.0/psych.rb:205:in `parse': (<unknown>): mapping values are not allowed in this context at line 10 column 13 (Psych::SyntaxError)

If there is an invalid lifecycle hook in the appspec.yml file, the deployment fails with this error:

ERROR: appspec.yml file contains unknown lifecycle events: ["BeforeInstall1"]

The name of a lifecycle hook is case-sensitive. The following error is returned because the BeforeInstall lifecycle hook was entered as Beforeinstall:

ERROR: appspec.yml file contains unknown lifecycle events: ["Beforeinstall"]

If there is any error in the scripts provided for execution in any lifecycle hooks (for example, a problem in the BeforeInstall script), the execution logs show something like this:

codedeploy-local -g testing
Starting to execute deployment from within folder /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local
Your local deployment failed while trying to execute your script at /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/deployment-archive/scripts/install_dependencies
See the deployment log at /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/logs/scripts.log for more details

For the preceding error, when you look at the logs in the deployment directory for the deployment group, you will see something like this:

cat /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/logs/scripts.log
2018-03-21 03:34:04 LifecycleEvent - ApplicationStop
2018-03-21 03:34:04 Script - scripts/stop_server
2018-03-21 03:34:04 [stdout]LocalFleet
2018-03-21 03:34:04 [stdout]/home/ec2-user/CodeDeployPackage
2018-03-21 03:34:04 [stdout]d-6UBAIVVSK-local
2018-03-21 03:34:04 [stdout]testing
2018-03-21 03:34:04 [stdout]ApplicationStop
2018-03-21 03:34:04 [stdout]Stopping httpd: [  OK  ]
2018-03-21 03:34:04 LifecycleEvent - BeforeInstall
2018-03-21 03:34:04 Script - scripts/install_dependencies
2018-03-21 03:34:04 [stdout]Loaded plugins: priorities, update-motd, upgrade-helper
2018-03-21 03:34:04 [stdout]No package httpd1 available.
2018-03-21 03:34:04 [stderr]Error: Nothing to do

This log snippet shows that the install_dependencies script had a package called httpd1 that is not available for installation.

If the appspec.yml is not found in the root of the application package, you will see an error like this:

/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:213:in `parse_app_spec': The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-BE59ORH9I-local/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-BE59ORH9I-local/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html (RuntimeError)
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:100:in `initialize'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:147:in `new'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:147:in `block (3 levels) in map'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:146:in `each'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:146:in `block (2 levels) in map'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:68:in `execute_command'
    from /opt/codedeploy-agent/lib/aws/codedeploy/local/deployer.rb:85:in `block in execute_events'
    from /opt/codedeploy-agent/lib/aws/codedeploy/local/deployer.rb:84:in `each'
    from /opt/codedeploy-agent/lib/aws/codedeploy/local/deployer.rb:84:in `execute_events'
    from /opt/codedeploy-agent/bin/codedeploy-local:117:in `<main>'
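
The error means the bundle root is wrong: appspec.yml must sit at the top level of the directory or archive you point codedeploy-local at, next to the scripts it references. A hypothetical bundle layout might look like this:

CodeDeployPackage/
    appspec.yml
    scripts/
        install_dependencies
        stop_server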

Conclusion

The codedeploy-local command can be used to validate and debug an application package for deployments to Amazon EC2 instances or on-premises servers. With codedeploy-local, you can test and fix errors on a local machine during the code development phase. CodeDeploy local deployments also make it possible to change the order of lifecycle hooks and restructure the appspec.yml to add commands on the fly.

How to Run Headless Front-End Tests with AWS Cloud9 and AWS CodeBuild

Post Syndicated from Eric Z. Beard original https://aws.amazon.com/blogs/devops/how-to-run-headless-front-end-tests-with-aws-cloud9-and-aws-codebuild/

Automated testing is a critical component to a well-designed software development lifecycle. When you test front-end applications, you often use a browser in combination with testing frameworks. A headless browser is one that is used on a server that does not normally need to run visual applications. In this blog post, I will show you how to configure AWS Cloud9 and AWS CodeBuild to support testing an Angular application with the headless version of Chrome. AWS Cloud9 has deep integration with services such as AWS Lambda, and the environment is easily accessible anywhere, from any internet-connected device.

AWS Cloud9

By default, Cloud9 runs on an Amazon EC2 instance that is managed for you. You can also run it on any Linux machine that is accessible through SSH.

First, create a Cloud9 environment.

  1. Sign in to the AWS Management Console, scroll down to Developer Tools, and choose Cloud9.
  2. On the following page, choose Create Environment.
  3. Enter a name for your environment and then choose Next Step.
  4. On the following page, leave the defaults for the time being and choose Next Step.
  5. On the following page, choose Create Environment.

It might take a few minutes for your environment to initialize. Behind the scenes, an EC2 instance is created for you in the region you have currently selected in the console. In the environment, press Alt-T to bring up a bash terminal tab. For the remaining steps in this post, you will enter commands into this tab.

There is a lot to take in if this is your first time using Cloud9. If you need help getting set up or want to learn more, see the Cloud9 User Guide.

Install and configure Angular

The first thing we will do in our new environment is to install and configure an Angular application.

  1. Upgrade Node to the latest version supported by AWS Lambda. (At the time of this writing, that’s 8.10.)
    nvm install 8.10
  2. Install the Angular CLI using npm, the Node Package Manager. Install it as a global package with the -g option so that it is available to run from anywhere in your environment.
    npm install -g @angular/cli
  3. Use the Angular CLI to create an Angular application.
    ng new my-app
    cd my-app/
  4. Run the application to make sure everything is working as expected. To preview a running application in Cloud9, the app must run on a specific port. With Angular, you must disable the default host header check.
    ng serve --port 8080 --host localhost --disable-host-check

     

    On the toolbar, next to Run, choose Preview and then choose Preview Running Application. You should see something like this:

  5. Press Ctrl-C to stop serving, and then, still in the my-app directory, try to test your application.
    ng test --watch=false

    That obviously doesn’t work the way you would expect it to on a regular workstation. The testing framework can’t find Chrome because we are running on a headless EC2 instance. To start addressing the problem, first install a package called Puppeteer as a development dependency in your application.

    I’d like to give credit here to Alex Bainter, a software developer who wrote a comprehensive blog post about replacing PhantomJS with headless Chromium and Karma. His post was extremely helpful to me when I had to figure this out for the first time.

  6. Install Puppeteer and its dependencies.
    npm i -D puppeteer
    npm i -D @angular-devkit/build-angular
  7. You can get a good look at the missing Chrome libraries by running the ldd command on the binary that comes with Puppeteer.
    cd node_modules/puppeteer/.local-chromium/linux-564778/chrome-linux/

    (By the time you read this post, the version number in that path will probably be different. Look in the puppeteer/.local-chromium directory to see what it is for your installation.)

    ldd chrome | grep not

    You should see output that looks like this:

    libXcursor.so.1 => not found
    libXdamage.so.1 => not found
    libXfixes.so.3 => not found
    libcups.so.2 => not found
    libXss.so.1 => not found
    libXrandr.so.2 => not found
    libpangocairo-1.0.so.0 => not found
    libpango-1.0.so.0 => not found
    libcairo.so.2 => not found
    libatk-1.0.so.0 => not found
    libatk-bridge-2.0.so.0 => not found
    libgtk-3.so.0 => not found
    libgdk-3.so.0 => not found
    libgdk_pixbuf-2.0.so.0 => not found

 

Install headless Chrome

Now comes the tricky part. Installing headless Chrome on an Amazon Linux EC2 instance is no simple task. One strategy is to install the various dependencies by compiling from source, but the chain of dependencies for Chrome, which includes gtk+ and glib, soon gets out of hand. I found another blogger who solved the problem by borrowing from the CentOS and Fedora package repositories. Thanks to Yuanyi for this part of the solution.

  1. Install yum packages to cover basic dependencies.
    sudo yum install -y libXcursor libXdamage libcups libXss libXrandr \
        cups-libs dbus-glib libXinerama cairo cairo-gobject pango
  2. Borrow packages from CentOS and Fedora.
    sudo rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/atk-2.22.0-3.el7.x86_64.rpm
    sudo rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/at-spi2-atk-2.22.0-2.el7.x86_64.rpm
    sudo rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/at-spi2-core-2.22.0-1.el7.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/g/GConf2-3.2.6-7.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libXScrnSaver-1.2.2-6.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libxkbcommon-0.3.1-1.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libwayland-client-1.2.0-3.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libwayland-cursor-1.2.0-3.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/g/gtk3-3.10.4-1.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/16/Fedora/x86_64/os/Packages/gdk-pixbuf2-2.24.0-1.fc16.x86_64.rpm
  3. Edit src/karma.conf.js to require Puppeteer and set the CHROME_BIN environment variable. Here is the full content of that file after the changes.
    const puppeteer = require("puppeteer");
    process.env.CHROME_BIN = puppeteer.executablePath();

    module.exports = function (config) {
      config.set({
        basePath: '',
        frameworks: ['jasmine', '@angular-devkit/build-angular'],
        plugins: [
          require('karma-jasmine'),
          require('karma-chrome-launcher'),
          require('karma-jasmine-html-reporter'),
          require('karma-coverage-istanbul-reporter'),
          require('@angular-devkit/build-angular/plugins/karma')
        ],
        client: {
          clearContext: false // leave Jasmine Spec Runner output visible in browser
        },
        coverageIstanbulReporter: {
          reports: ['html', 'lcovonly'],
          fixWebpackSourcePaths: true
        },
        angularCli: {
          environment: 'dev'
        },
        reporters: ['progress', 'kjhtml'],
        port: 8080,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['ChromeHeadlessNoSandbox'],
        customLaunchers: {
          ChromeHeadlessNoSandbox: {
            base: 'ChromeHeadless',
            flags: ['--no-sandbox']
          }
        },
        singleRun: false
      });
    };
  4. Make a small adjustment to your test specification in src/app/app.component.spec.ts so that the test called "should render title in a h1 tag" checks for the title your template actually renders (a hedged example follows this list). Then run ng test again.
    ng test --watch=false
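
The exact assertion depends on what your template renders, but a minimal spec along these lines illustrates the idea (the expected string assumes the default template generated by ng new my-app):

import { TestBed, async } from '@angular/core/testing';
import { AppComponent } from './app.component';

describe('AppComponent', () => {
  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [ AppComponent ],
    }).compileComponents();
  }));

  it('should render title in a h1 tag', async(() => {
    const fixture = TestBed.createComponent(AppComponent);
    fixture.detectChanges();
    const compiled = fixture.debugElement.nativeElement;
    // Adjust the expected text to match whatever your h1 actually contains
    expect(compiled.querySelector('h1').textContent).toContain('Welcome to my-app!');
  }));
});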

If you see that green SUCCESS indicator, then you have done it! You installed Angular and created an application, installed Puppeteer, and by filling in the missing libraries for Chrome, you made it possible to run headless Chrome tests in Cloud9!

AWS CodeBuild

The next piece of the puzzle is your CI/CD pipeline. When a developer checks in new code, you want to test that code with a continuous integration tool like AWS CodeBuild. With CodeBuild, the problem related to headless Chrome is slightly different than it was with Cloud9, because the default build environment for Node apps is an Ubuntu image. You still need to install Chromium and its dependencies, but Ubuntu packages make it easier.

  1. Navigate to the CodeBuild console and create a new build project. Give it a name and configure the source repository. You will need to store the code for this exercise with one of the supported source providers so that CodeBuild knows where to find it when you start a build. Because you are already signed in to the AWS Management Console, AWS CodeCommit is a good option, but you could also choose Amazon S3, Bitbucket, or GitHub.
  2. Configure the build environment. For Operating system, choose Ubuntu. For Runtime, choose Node.js. You can specify your own container image for the build, but the buildspec.yml described in step 3 works out of the box with the default image.
  3. For the build specification, provide the following buildspec.yml file in the root directory of the source code repository.
    
    version: 0.1
    phases:
      install:
        commands:
    
          # Install the Angular CLI
          - npm install -g @angular/cli
    
          # Install puppeteer as a dev dependency
          - npm i -D puppeteer
          - npm i –D @angular-devkit/build-angular
    
          # Print out missing libs
          - echo "Missing Libs"
          # grep exits non-zero when no libraries are missing, so "|| true" keeps the build from failing here
          - ldd ./node_modules/puppeteer/.local-chromium/linux-549031/chrome-linux/chrome | grep not || true
    
          # Update the package lists, then upgrade the installed packages
          - apt-get update
          - apt-get upgrade -y
    
          # Install apt-transport-https
          - apt-get install -y apt-transport-https
    
          # Use apt to install the Chrome dependencies
          - apt-get install -y libxcursor1
          - apt-get install -y libgtk-3-dev
          - apt-get install -y libxss1
          - apt-get install -y libasound2
          - apt-get install -y libnspr4
          - apt-get install -y libnss3
          - apt-get install -y libx11-xcb1
    
          # Print out missing libs
          - echo "Missing Libs"
          - ldd ./node_modules/puppeteer/.local-chromium/linux-549031/chrome-linux/chrome | grep not || true
    
          # Install project dependencies
          - npm install
    
      pre_build:
        commands:
          - echo "Nothing to pre_build"
    
      build:
        commands:
    
          - printenv 
    
          # Build the project
          - ng build
    
          # Run headless Chrome tests
          - ng test --watch=false
          - printenv
    
      post_build:
        commands:
    
          - printenv
    
          # Deploy the project to S3
    
          - if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then aws s3 sync --delete dist/ "s3://${BUCKET_NAME}"; else echo "Skipping aws sync"; fi
    
    artifacts:
      files:
        - src/*
    
    

    Feel free to remove those ldd and printenv statements, but it is worth taking a look at the output to get a better understanding of what is going on with the build.

  4. Specify the location for artifacts. This step isn’t required, but it makes it possible to incorporate the build project into AWS CodePipeline later.
  5. Expand Advanced Settings and configure an environment variable named BUCKET_NAME that holds the name of the website bucket; the buildspec.yml above references it in the post_build phase.
  6. Configure bucket permissions. CodeBuild can’t write to the S3 buckets unless you give the service explicit permission to do so; this is one of the most common causes of build failures for projects that involve S3. Choose Continue and then Save to create the build project. Then navigate to the IAM console, find the CodeBuild service role that was just created for you, and add the following policy as an inline policy to give the role access to both buckets.
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::YOUR_BUCKET_FOR_ARTIFACTS",
                    "arn:aws:s3:::YOUR_BUCKET_FOR_ARTIFACTS/*"
                ]
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::YOUR_BUCKET_FOR_THE_WEBSITE",
                    "arn:aws:s3:::YOUR_BUCKET_FOR_THE_WEBSITE/*"
                ]
            }
        ]
    }
    
    
  7. You should now be able to start the build and see that the compiled website has been copied to your S3 bucket after the build is complete.

 

Alternative Cloud9 installation using SSH and Ubuntu

You can run the Cloud9 IDE from a Linux machine that you create, rather than letting Cloud9 provision it for you. Create a Cloud9 environment and choose Connect and run in remote server. For more information about this type of setup, see Creating an SSH Environment in the AWS Cloud9 User Guide.

After you have configured the environment, the work you have to do is much simpler than on the Amazon Linux instance, because there are Ubuntu packages that install the required dependencies. Follow the instructions earlier in this post until you get to the “Install headless Chrome” section. Issue this command:

sudo apt install -y libxcursor1 libgtk-3-dev libxss1 libasound2 libnspr4 libnss3

You don’t need to borrow from any of the CentOS or Fedora repositories.

Make changes to karma.conf.js as described earlier and you should then be ready to test your application.

 

Summary

You are now able to run headless integration tests using Cloud9 by installing Puppeteer and filling in the required Chrome dependencies. You can also extend this to the container image used to test your application with CodeBuild. Automated testing is vital to a trustworthy DevOps pipeline, and Cloud9 opens up new possibilities for developers of all types, including front-end developers.

Happy coding! –EZB

[$] A filesystem “change journal” and other topics

Post Syndicated from jake original https://lwn.net/Articles/755277/rss

At the 2017 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM), Amir Goldstein presented his work on adding a superblock watch mechanism to provide a scalable way to notify applications of changes in a filesystem. At the 2018 edition of LSFMM, he was back to discuss adding NTFS-like change journals to the kernel in support of backup solutions of various sorts. As a second topic for the session, he also wanted to discuss doing more performance-regression testing for filesystems.

Protecting coral reefs with Nemo-Pi, the underwater monitor

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/coral-reefs-nemo-pi/

The German charity Save Nemo works to protect coral reefs, and they are developing Nemo-Pi, an underwater “weather station” that monitors ocean conditions. Right now, you can vote for Save Nemo in the Google.org Impact Challenge.

Nemo-Pi — Save Nemo

Save Nemo

The organisation says there are two major threats to coral reefs: divers and climate change. To make diving safer for reefs, Save Nemo installs buoy anchor points where diving tour boats can anchor without damaging corals in the process.

reef damaged by anchor
boat anchored at buoy

In addition, they provide dos and don’ts for how to behave on a reef dive.

The Nemo-Pi

To monitor the effects of climate change, and to help divers decide whether conditions are right at a reef while they’re still on shore, Save Nemo is also in the process of perfecting Nemo-Pi.

Nemo-Pi schematic — Nemo-Pi — Save Nemo

This Raspberry Pi-powered device is made up of a buoy, a solar panel, a GPS device, a Pi, and an array of sensors. Nemo-Pi measures water conditions such as current, visibility, temperature, carbon dioxide and nitrogen oxide concentrations, and pH. It also uploads its readings live to a public webserver.

Inside the Nemo-Pi device — Save Nemo

The Save Nemo team is currently doing long-term tests of Nemo-Pi off the coast of Thailand and Indonesia. They are also working on improving the device’s power consumption and durability, and testing prototypes with the Raspberry Pi Zero W.

web dashboard — Nemo-Pi — Save Nemo

The web dashboard showing live Nemo-Pi data

Long-term goals

Save Nemo aims to install a network of Nemo-Pis at shallow reefs (up to 60 metres deep) in South East Asia. Then diving tour companies can check the live data online and decide day-to-day whether tours are feasible. This will lower the impact of humans on reefs and help the local flora and fauna survive.

Coral reefs with fishes

A healthy coral reef

Nemo-Pi data may also be useful for groups lobbying for reef conservation, and for scientists and activists who want to shine a spotlight on the awful effects of climate change on sea life, such as coral bleaching caused by rising water temperatures.

Bleached coral

A bleached coral reef

Vote now for Save Nemo

If you want to help Save Nemo in their mission today, vote for them to win the Google.org Impact Challenge:

  1. Head to the voting web page
  2. Click “Abstimmen” in the footer of the page to vote
  3. Click “JA” in the footer to confirm

Voting is open until 6 June. You can also follow Save Nemo on Facebook or Twitter. We think this organisation is doing valuable work, and that their projects could be expanded to reefs across the globe. It’s fantastic to see the Raspberry Pi being used to help protect ocean life.

The post Protecting coral reefs with Nemo-Pi, the underwater monitor appeared first on Raspberry Pi.

Measuring the throughput for Amazon MQ using the JMS Benchmark

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/measuring-the-throughput-for-amazon-mq-using-the-jms-benchmark/

This post is courtesy of Alan Protasio, Software Development Engineer, Amazon Web Services

Just like compute and storage, messaging is a fundamental building block of enterprise applications. Message brokers (aka “message-oriented middleware”) enable different software systems, often written in different languages, on different platforms, running in different locations, to communicate and exchange information. Mission-critical applications, such as CRM and ERP, rely on message brokers to work.

A common performance consideration for customers deploying a message broker in a production environment is the throughput of the system, measured as messages per second. This is important to know so that application environments (hosts, threads, memory, etc.) can be configured correctly.

In this post, we demonstrate how to measure the throughput for Amazon MQ, a new managed message broker service for ActiveMQ, using JMS Benchmark. It should take 15–20 minutes to set up the environment and about an hour to run the benchmark. We also provide some tips on how to configure Amazon MQ for optimal throughput.

Benchmarking throughput for Amazon MQ

ActiveMQ can be used for a number of use cases, ranging from simple fire-and-forget tasks (that is, asynchronous processing) and low-latency request-reply patterns to buffering requests before they are persisted to a database.

The throughput of Amazon MQ is largely dependent on the use case. For example, if you have non-critical workloads such as gathering click events for a non-business-critical portal, you can use ActiveMQ in a non-persistent mode and get extremely high throughput with Amazon MQ.

On the flip side, if you have a critical workload where durability is extremely important (meaning that you can’t lose a message), then you are bound by the I/O capacity of your underlying persistence store. We recommend using mq.m4.large for the best results. The mq.t2.micro instance type is intended for product evaluation. Performance is limited, due to the lower memory and burstable CPU performance.

Tip: To improve your throughput with Amazon MQ, make sure that you have consumers processing messages as fast as (or faster than) your producers are pushing messages.

Because it’s impossible to talk about how the broker (ActiveMQ) behaves for each and every use case, we walk through how to set up your own benchmark for Amazon MQ using our favorite open-source benchmarking tool: JMS Benchmark. We are fans of the JMS Benchmark suite because it’s easy to set up and deploy, and comes with a built-in visualizer of the results.

Non-Persistent Scenarios – Queue latency as you scale producer throughput

JMS Benchmark nonpersistent scenarios

Getting started

At the time of publication, you can create an mq.m4.large single-instance broker for testing for $0.30 per hour (US pricing).

This walkthrough covers the following tasks:

  1. Create and configure the broker.
  2. Create an EC2 instance to run your benchmark.
  3. Configure the security groups.
  4. Run the benchmark.

Step 1 – Create and configure the broker
Create and configure the broker using Tutorial: Creating and Configuring an Amazon MQ Broker.

Step 2 – Create an EC2 instance to run your benchmark
Launch the EC2 instance using Step 1: Launch an Instance. We recommend choosing the m5.large instance type.

Step 3 – Configure the security groups
Make sure that all the security groups are correctly configured to let the traffic flow between the EC2 instance and your broker.

  1. Sign in to the Amazon MQ console.
  2. From the broker list, choose the name of your broker (for example, MyBroker)
  3. In the Details section, under Security and network, choose the name of your security group or choose the expand icon.
  4. From the security group list, choose your security group.
  5. At the bottom of the page, choose Inbound, Edit.
  6. In the Edit inbound rules dialog box, add a rule to allow traffic between your instance and the broker (a CLI equivalent follows this list):
    • Choose Add Rule.
    • For Type, choose Custom TCP.
    • For Port Range, type the ActiveMQ SSL port (61617).
    • For Source, leave Custom selected and then type the security group of your EC2 instance.
    • Choose Save.
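
If you prefer the command line, the same inbound rule can be added with the AWS CLI; the security group IDs below are placeholders for your broker's group and your EC2 instance's group.

aws ec2 authorize-security-group-ingress \
    --group-id {broker-security-group-id} \
    --protocol tcp \
    --port 61617 \
    --source-group {instance-security-group-id}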

Your broker can now accept the connection from your EC2 instance.

Step 4 – Run the benchmark
Connect to your EC2 instance using SSH and run the following commands:

$ cd ~
$ curl -L https://github.com/alanprot/jms-benchmark/archive/master.zip -o master.zip
$ unzip master.zip
$ cd jms-benchmark-master
$ chmod a+x bin/*
$ env \
  SERVER_SETUP=false \
  SERVER_ADDRESS={activemq-endpoint} \
  ACTIVEMQ_TRANSPORT=ssl \
  ACTIVEMQ_PORT=61617 \
  ACTIVEMQ_USERNAME={activemq-user} \
  ACTIVEMQ_PASSWORD={activemq-password} \
  ./bin/benchmark-activemq

After the benchmark finishes, you can find the results in the ~/reports directory. As you may notice, the performance of ActiveMQ varies based on the number of consumers, producers, destinations, and message size.

Amazon MQ architecture

The last bit that’s important to know so that you can better understand the results of the benchmark is how Amazon MQ is architected.

Amazon MQ is architected to be highly available (HA) and durable. For HA, we recommend using the multi-AZ option. After a message is sent to Amazon MQ in persistent mode, the message is written to the highly durable message store that replicates the data across multiple nodes in multiple Availability Zones. Because of this replication, for some use cases you may see a reduction in throughput as you migrate to Amazon MQ. Customers have told us they appreciate the benefits of message replication as it helps protect durability even in the face of the loss of an Availability Zone.

Conclusion

We hope this gives you an idea of how Amazon MQ performs. We encourage you to run tests to simulate your own use cases.

To learn more, see the Amazon MQ website. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.

Google’s Chrome Web Store Spammed With Dodgy ‘Pirate’ Movie Links

Post Syndicated from Andy original https://torrentfreak.com/googles-chrome-web-store-spammed-with-dodgy-pirate-movie-links-180527/

Launched in 2010, Google’s Chrome Store is the go-to place for people looking to pimp their Chrome browser.

Often referred to as apps and extensions, the programs offered by the platform run in Chrome and can perform a dazzling array of functions, from improving security and privacy, to streaming video or adding magnet links to torrent sites.

Also available on the Chrome Store are themes, which can be installed locally to change the appearance of the Chrome browser.

While there are certainly plenty to choose from, some additions to the store over the past couple of months are not what most people have come to expect from the add-on platform.

Free movies on Chrome’s Web Store?

As the image above suggests, unknown third parties appear to be exploiting the Chrome Store’s ‘theme’ section to offer visitors access to a wide range of pirate movies including Black Panther, Avengers: Infinity War and Rampage.

When clicking through to the page offering Ready Player One, for example, users are presented with a theme that apparently allows them to watch the movie online in “Full HD Online 4k.”

Of course, the whole scheme is a dubious scam which eventually leads users to Vioos.co, a platform that tries very hard to give the impression of being a pirate streaming portal but actually provides nothing of use.

Nothing to see here

In fact, as soon as one clicks the play button on movies appearing on Vioos.co, visitors are re-directed to another site called Zumastar which asks people to “create a free account” to “access unlimited downloads & streaming.”

“With over 20 million titles, Zumastar is your number one entertainment resource. Join hundreds of thousands of satisfied members and enjoy the hottest movies,” the site promises.

With this kind of marketing, perhaps we should think about this offer for a second. Done. No thanks.

In extended testing, some visits to Vioos.co resulted in a redirection to EtnaMedia.net, a domain that was immediately blocked by MalwareBytes due to suspected fraud. However, after allowing the browser to make the connection, TF was presented with another apparent subscription site.

We didn’t follow through with a sign-up but further searches revealed upset former customers complaining of money being taken from their credit cards when they didn’t expect that to happen.

Quite how many people have signed up to Zumastar or EtnaMedia via this convoluted route from Google’s Chrome Store isn’t clear but a worrying number appear to have installed the ‘themes’ (if that’s what they are) offered on each ‘pirate movie’ page.

At the time of writing the ‘free Watch Rampage Online Full Movie’ ‘theme’ has 2,196 users, the “Watch Avengers Infinity War Full Movie” variant has 974, the ‘Watch Ready Player One 2018 Full HD’ page has 1,031, and the ‘Watch Black Panther Online Free 123putlocker’ ‘theme’ has more than 1,800. Clearly, a worrying number of people will click and install just about anything.

We haven’t tested the supposed themes to see what they do but it’s a cast-iron guarantee that they don’t offer the movies displayed and there’s always a chance they’ll do something awful. As a rule of thumb, it’s nearly always wise to steer clear of anything with “full movie” in the title, they can rarely be trusted.

Finally, those hoping to get some guidance on quality from the reviews on the Chrome Store will be bitterly disappointed.

Garbage reviews, probably left by the scammers

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Practice Makes Perfect: Testing Campaigns Before You Send Them

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/practice-makes-perfect-testing-campaigns-before-you-send-them/

In an article we posted to Medium in February, we talked about how to determine the best time to engage your customers by using Amazon Pinpoint’s built-in session heat map. The session heat map allows you to find the times that your customers are most likely to use your app. In this post, we continue on the topic of best practices—specifically, how to test a campaign appropriately before going live.

In this post, we’ll talk about the old adage “practice makes perfect,” and how it applies to the campaigns you send using Amazon Pinpoint. Let’s take a scenario many of our customers encounter daily: creating a campaign to engage users by sending a push notification.

As you can see from the preceding screenshot, the segment we plan to target has nearly 1.7M recipients, which is a lot! Of course, before we got to this step, we already put several best practices into practice. For example, we determined the best time to engage our audience, scheduled the message based on recipients’ local time zones, performed A/B/N testing, measured lift using a hold-out group, and personalized the content for maximum effectiveness. Now that we’re ready to send the notification, we should test the message before we send it to all of the recipients in our segment. The reason for testing the message is pretty straightforward: we want to make sure every detail of the message is accurate before we send it to all 1,687,575 customers.

Fortunately, Amazon Pinpoint makes it easy to test your messages—in fact, you don’t even have to leave the campaign wizard in order to do so. In step 3 of the campaign wizard, below the message editor, there’s a button labelled Test campaign.

When you choose the Test campaign button, you have three options: you can send the test message to a segment of 100 endpoints or fewer, to a set of up to 10 specific endpoint IDs, or to a set of up to 10 specific device tokens, as shown in the following image.

In our case, we’ve already created a segment of internal recipients who will test our message. On the Test Campaign window, under Send a test message to, we choose A segment. Then, in the drop-down menu, we select our test segment, and then choose Send test message.

Because we’re sending the test message to a segment, Amazon Pinpoint automatically creates a new campaign dedicated to this test. This process executes a test campaign, complete with message analytics, which allows you to perform end-to-end testing as if you sent the message to your production audience. To see the analytics for your test campaign, go to the Campaigns tab, and then choose the campaign (the name of the campaign contains the word “test”, followed by four random characters, followed by the name of the campaign).

After you complete a successful test, you’re ready to launch your campaign. As a final check, the Review & Launch screen includes a reminder that indicates whether or not you’ve tested the campaign, as shown in the following image.

There are several other ways you can use this feature. For example, you could use it for troubleshooting a campaign, or for iterating on existing campaigns. To learn more about testing campaigns, see the Amazon Pinpoint User Guide.

Airbash – Fully Automated WPA PSK Handshake Capture Script

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/05/airbash-fully-automated-wpa-psk-handshake-capture-script/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Airbash – Fully Automated WPA PSK Handshake Capture Script

Airbash is a POSIX-compliant, fully automated WPA PSK handshake capture script aimed at penetration testing. It is compatible with Bash and Android Shell (tested on Kali Linux and Cyanogenmod 10.2) and uses aircrack-ng to scan for clients that are currently connected to access points (AP).

Those clients are then deauthenticated in order to capture the handshake when attempting to reconnect to the AP. Verification of a captured handshake is done using aircrack-ng.

Read the rest of Airbash – Fully Automated WPA PSK Handshake Capture Script now! Only available at Darknet.

Bad Software Is Our Fault

Post Syndicated from Bozho original https://techblog.bozho.net/bad-software-is-our-fault/

Bad software is everywhere. One can even claim that all software is bad. Cool companies, tech giants, established companies, all produce bad software. And no, yours is not an exception.

Who’s to blame for bad software? It’s all complicated and many factors are intertwined – there’s business requirements, there’s organizational context, there’s lack of sufficient skilled developers, there’s the inherent complexity of software development, there’s leaky abstractions, reliance on 3rd party software, consequences of wrong business and purchase decisions, time limitations, flawed business analysis, etc. So yes, despite the catchy title, I’m aware it’s actually complicated.

But in every “it’s complicated” scenario, there are always one or two factors that are decisive. All of them contribute somehow, but the major drivers are usually a handful of things. And in the case of bad software, I think it’s the fault of technical people. Developers, architects, ops.

We don’t seem to care about best practices. And I’ll do some nasty generalizations here, but bear with me. We can spend hours arguing about tabs vs spaces, curly bracket on new line, git merge vs rebase, which IDE is better, which framework is better and other largely irrelevant stuff. But we tend to ignore the important aspects that span beyond the code itself. The context in which the code lives, the non-functional requirements – robustness, security, resilience, etc.

We don’t seem to get security. Even trivial stuff such as user authentication is almost always implemented wrong. These days Twitter and GitHub realized they have been logging plain-text passwords, for example, but that’s just the tip of the iceberg. Too often we ignore the security implications.

“But the business didn’t request the security features”, one may say. The business never requested 2-factor authentication, encryption at rest, PKI, secure (or any) audit trail, log masking, crypto shredding, etc., etc. Because the business doesn’t know these things – we do and we have to put them on the backlog and fight for them to be implemented. Each organization has its specifics and tech people can influence the backlog in different ways, but almost everywhere we can put things there and prioritize them.

The other aspect is testing. We should all be well aware by now that automated testing is mandatory. We have all the tools in the world for unit, functional, integration, performance and whatnot testing, and yet many software projects lack the necessary test coverage to be able to change stuff without accidentally breaking things. “But testing takes time, we don’t have it”. We are perfectly aware that testing saves time, as we’ve all had those “not again!” recurring bugs. And yet we think of all sorts of excuses – “let the QAs test it”, “we have to ship that now, we’ll test it later”, “this is too trivial to be tested”, etc.

And you may say it’s not our job. We don’t define what has to be done, we just do it. We don’t define the budget, the scope, the features. We just write whatever has been decided. And that’s plain wrong. It’s not our job to make money out of our code, and it’s not our job to define what customers need, but apart from that everything is our job. The way the software is structured, the security aspects and security features, the stability of the code base, the way the software behaves in different environments. The non-functional requirements are our job, and putting them on the backlog is our job.

You’ve probably heard that all software becomes “legacy” after 6 months. And that’s because of us, our sloppiness, our inability to mitigate external factors and constraints. Too often we create a mess through “just doing our job”.

And of course that’s a generalization. I happen to know a lot of great professionals who don’t make these mistakes, who strive for excellence and implement things the right way. But our industry as a whole doesn’t. Our industry as a whole produces bad software. And it’s our fault, as developers – as the only people who know why a certain piece of software is bad.

In a talk of his, Bob Martin warns us of the risks of our sloppiness. We have been building websites so far, but we are more and more building stuff that interacts with the real world, directly and indirectly. Ultimately, lives may depend on our software (like the recent unfortunate death caused by a self-driving car). And I’ll agree with Uncle Bob that it’s high time we self-regulate as an industry, before some technically incompetent politician decides to do that.

How, I don’t know. We’ll have to think more about it. But I’m pretty sure it’s our fault that software is bad, and no amount of blaming the management, the budget, the timing, the tools or the process can eliminate our responsibility.

Why do I insist on bashing my fellow software engineers? Because if we start looking at software development with more responsibility; with the fact that if it fails, it’s our fault, then we’re more likely to get out of our current bug-ridden, security-flawed, fragile software hole and really become the experts of the future.

The post Bad Software Is Our Fault appeared first on Bozho's tech blog.

Announcing Local Build Support for AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/announcing-local-build-support-for-aws-codebuild/

Today, we’re excited to announce local build support in AWS CodeBuild.

AWS CodeBuild is a fully managed build service. There are no servers to provision and scale, or software to install, configure, and operate. You just specify the location of your source code, choose your build settings, and CodeBuild runs build scripts for compiling, testing, and packaging your code.

In this blog post, I’ll show you how to set up CodeBuild locally to build and test a sample Java application.

By building an application on a local machine you can:

  • Test the integrity and contents of a buildspec file locally.
  • Test and build an application locally before committing.
  • Identify and fix errors quickly from your local development environment.

Prerequisites

In this post, I am using AWS Cloud9 IDE as my development environment.

If you would like to use AWS Cloud9 as your IDE, follow the express setup steps in the AWS Cloud9 User Guide.

The AWS Cloud9 IDE comes with Docker and Git already installed. If you are going to use your laptop or desktop machine as your development environment, install Docker and Git before you start.

Steps to build CodeBuild image locally

Run git clone https://github.com/aws/aws-codebuild-docker-images.git to download this repository to your local machine.

$ git clone https://github.com/aws/aws-codebuild-docker-images.git

Let’s build a local CodeBuild image for the JDK 8 environment. The Dockerfile for JDK 8 is in aws-codebuild-docker-images/ubuntu/java/openjdk-8.

Edit the Dockerfile to remove the last line ENTRYPOINT ["dockerd-entrypoint.sh"] and save the file.

Run cd ubuntu/java/openjdk-8 to change the directory in your local workspace.

Run docker build -t aws/codebuild/java:openjdk-8 . to build the Docker image locally. This command will take a few minutes to complete.

$ cd aws-codebuild-docker-images
$ cd ubuntu/java/openjdk-8
$ docker build -t aws/codebuild/java:openjdk-8 .

Steps to setup CodeBuild local agent

Run the following Docker pull command to download the local CodeBuild agent.

$ docker pull amazon/aws-codebuild-local:latest --disable-content-trust=false

Now you have the local agent image on your machine and can run a local build.

Run the following git command to download a sample Java project.

$ git clone https://github.com/karthiksambandam/sample-web-app.git

Steps to use the local agent to build a sample project

Let’s build the sample Java project using the local agent.

Execute the following Docker command to run the local agent and build the sample web app repository you cloned earlier.

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/java:openjdk-8" -e "ARTIFACTS=/home/ec2-user/environment/artifacts" -e "SOURCE=/home/ec2-user/environment/sample-web-app" amazon/aws-codebuild-local

Note: We need to provide three environment variables: IMAGE_NAME, SOURCE, and ARTIFACTS.

IMAGE_NAME: The name of your build environment image.

SOURCE: The absolute path to your source code directory.

ARTIFACTS: The absolute path to your artifact output folder.

When you run the sample project, you get a runtime error that says the YAML file does not exist. This is because a buildspec.yml file is not included in the sample web project. AWS CodeBuild requires a buildspec.yml to run a build. For more information about buildspec.yml, see Build Spec Example in the AWS CodeBuild User Guide.

Let’s add a buildspec.yml file with the following content to the sample-web-app folder and then rebuild the project.

version: 0.2

phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn install

artifacts:
  files:
    - target/javawebdemo.war

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/java:openjdk-8" -e "ARTIFACTS=/home/ec2-user/environment/artifacts" -e "SOURCE=/home/ec2-user/environment/sample-web-app" amazon/aws-codebuild-local

This time your build should be successful. Upon successful execution, look in the artifacts folder (the path you passed in the ARTIFACTS variable) for the final built artifacts.zip file to validate.
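
For example, assuming the ARTIFACTS path used in the command above, you can inspect the archive like this:

$ unzip -l /home/ec2-user/environment/artifacts/artifacts.zip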

Conclusion:

In this blog post, I showed you how to quickly set up the CodeBuild local agent to build projects right from your local desktop machine or laptop. As you see, local builds can improve developer productivity by helping you identify and fix errors quickly.

I hope you found this post useful. Feel free to leave your feedback or suggestions in the comments.

CI/CD with Data: Enabling Data Portability in a Software Delivery Pipeline with AWS Developer Tools, Kubernetes, and Portworx

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/cicd-with-data-enabling-data-portability-in-a-software-delivery-pipeline-with-aws-developer-tools-kubernetes-and-portworx/

This post is written by Eric Han, Vice President of Product Management at Portworx, and Asif Khan, Solutions Architect.

Data is the soul of an application. As containers make it easier to package and deploy applications faster, testing plays an even more important role in the reliable delivery of software. Given that all applications have data, development teams want a way to reliably control, move, and test using real application data or, at times, obfuscated data.

For many teams, moving application data through a CI/CD pipeline, while honoring compliance and maintaining separation of concerns, has been a manual task that doesn’t scale. At best, it is limited to a few applications, and is not portable across environments. The goal should be to make running and testing stateful containers (think databases and message buses where operations are tracked) as easy as with stateless (such as with web front ends where they are often not).

Why is state important in testing scenarios? One reason is that many bugs manifest only when code is tested against real data. For example, we might simply want to test a database schema upgrade but a small synthetic dataset does not exercise the critical, finer corner cases in complex business logic. If we want true end-to-end testing, we need to be able to easily manage our data or state.

In this blog post, we define a CI/CD pipeline reference architecture that can automate data movement between applications. We also provide the steps to follow to configure the CI/CD pipeline.

 

Stateful Pipelines: Need for Portable Volumes

As part of continuous integration, testing, and deployment, a team may need to reproduce a bug found in production against a staging setup. Here, the hosting environment is comprised of a cluster with Kubernetes as the scheduler and Portworx for persistent volumes. The testing workflow is then automated by AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

Portworx offers Kubernetes storage that can be used to make persistent volumes portable between AWS environments and pipelines. The addition of Portworx to the AWS Developer Tools continuous deployment for Kubernetes reference architecture adds persistent storage and storage orchestration to a Kubernetes cluster. The example uses MongoDB as the demonstration of a stateful application. In practice, the workflow applies to any containerized application such as Cassandra, MySQL, Kafka, and Elasticsearch.

Using the reference architecture, a developer calls CodePipeline to trigger a snapshot of the running production MongoDB database. Portworx then creates a block-based, writable snapshot of the MongoDB volume. Meanwhile, the production MongoDB database continues serving end users and is uninterrupted.

Without the Portworx integrations, a manual process would require an application-level backup of the database instance that is outside of the CI/CD process. For larger databases, this could take hours and impact production. The use of block-based snapshots follows best practices for resilient and non-disruptive backups.

As part of the workflow, CodePipeline deploys a new MongoDB instance for staging onto the Kubernetes cluster and mounts the second Portworx volume that has the data from production. CodePipeline triggers the snapshot of a Portworx volume through an AWS Lambda function, as shown here

 

 

 

AWS Developer Tools with Kubernetes: Integrated Workflow with Portworx

In the following workflow, a developer is testing changes to a containerized application that calls on MongoDB. The tests are performed against a staging instance of MongoDB. The same workflow applies if changes were on the server side. The original production deployment is scheduled as a Kubernetes deployment object and uses Portworx as the storage for the persistent volume.
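
For orientation, a minimal sketch of such a deployment is shown below. The storage class name, volume size, and image tag are assumptions for illustration, not values taken from the reference architecture; the only requirement is that the PersistentVolumeClaim resolves to a Portworx-backed storage class.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-data
spec:
  storageClassName: portworx-sc   # assumed name of a Portworx storage class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:3.6        # illustrative image tag
          volumeMounts:
            - name: data
              mountPath: /data/db # MongoDB's default data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongo-data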

The continuous deployment pipeline runs as follows:

  • Developers integrate bug fix changes into a main development branch that gets merged into a CodeCommit master branch.
  • Amazon CloudWatch triggers the pipeline when code is merged into a master branch of an AWS CodeCommit repository.
  • AWS CodePipeline sends the new revision to AWS CodeBuild, which builds a Docker container image with the build ID.
  • AWS CodeBuild pushes the new Docker container image tagged with the build ID to an Amazon ECR registry.
  • Kubernetes downloads the new container (for the database client) from Amazon ECR and deploys the application (as a pod) and staging MongoDB instance (as a deployment object).
  • AWS CodePipeline, through a Lambda function, calls Portworx to snapshot the production MongoDB and deploy a staging instance of MongoDB.
  • Portworx provides a snapshot of the production instance as the persistent storage for the staging MongoDB.
  • The staging MongoDB instance mounts the snapshot.

At this point, the staging setup mimics a production environment. Teams can run integration and full end-to-end tests, using partner tooling, without impacting production workloads. The full pipeline is shown here.

 

Summary

This reference architecture showcases how development teams can easily move data between production and staging for the purposes of testing. Instead of taking application-specific manual steps, all operations in this CodePipeline architecture are automated and tracked as part of the CI/CD process.

This integrated experience is part of making stateful containers as easy as stateless. With AWS CodePipeline for CI/CD process, developers can easily deploy stateful containers onto a Kubernetes cluster with Portworx storage and automate data movement within their process.

The reference architecture and code are available on GitHub:

  • Reference architecture: https://github.com/portworx/aws-kube-codesuite
  • Lambda function source code for Portworx additions: https://github.com/portworx/aws-kube-codesuite/blob/master/src/kube-lambda.py

For more information about persistent storage for containers, visit the Portworx website. For more information about Code Pipeline, see the AWS CodePipeline User Guide.