
Manipulating Systems Using Remote Lasers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/manipulating-systems-using-remote-lasers.html

Many systems are vulnerable:

Researchers at the time said that they were able to launch inaudible commands by shining lasers — from as far as 360 feet — at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

[…]

They broadened their research to show how light can be used to manipulate a wider range of digital assistants — including Amazon Echo 3 — as well as sensing systems found in medical devices, autonomous vehicles, industrial systems and even space systems.

The researchers also delved into how the ecosystem of devices connected to voice-activated assistants — such as smart-locks, home switches and even cars — also fail under common security vulnerabilities that can make these attacks even more dangerous. The paper shows how using a digital assistant as the gateway can allow attackers to take control of other devices in the home: Once an attacker takes control of a digital assistant, he or she can have the run of any device connected to it that also responds to voice commands. Indeed, these attacks can get even more interesting if these devices are connected to other aspects of the smart home, such as smart door locks, garage doors, computers and even people’s cars, they said.

Another article. The researchers will present their findings at Black Hat Europe — which, of course, will be happening virtually — on December 10.

Check Washing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/check-washing.html

I can’t believe that check washing is still a thing:

“Check washing” is a practice where thieves break into mailboxes (or otherwise steal mail), find envelopes with checks, then use special solvents to remove the information on that check (except for the signature) and then change the payee and the amount to a bank account under their control so that it could be deposited at out-of-state banks and oftentimes by a mobile phone.

The article suggests a solution: stop using paper checks.

Friday Squid Blogging: Diplomoceras Maximum

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/friday-squid-blogging-diplomoceras-maximum.html

Diplomoceras maximum is an ancient squid-like creature. It lived about 68 million years ago, looked kind of like a giant paperclip, and may have had a lifespan of 200 years.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Undermining Democracy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/undermining-democracy.html

Last Thursday, Rudy Giuliani, a Trump campaign lawyer, alleged a widespread voting conspiracy involving Venezuela, Cuba, and China. Another lawyer, Sidney Powell, argued that Mr. Trump won in a landslide, that the entire election in swing states should be overturned, and that the legislatures should make sure that the electors are selected for the president.

The Republican National Committee swung in to support her false claim that Mr. Trump won in a landslide, while Michigan election officials have tried to stop the certification of the vote.

It is wildly unlikely that their efforts can block Joe Biden from becoming president. But they may still do lasting damage to American democracy for a shocking reason: the moves have come from trusted insiders.

American democracy’s vulnerability to disinformation has been very much in the news since the Russian disinformation campaign in 2016. The fear is that outsiders, whether they be foreign or domestic actors, will undermine our system by swaying popular opinion and election results.

This is half right. American democracy is an information system, in which the information isn’t bits and bytes but citizens’ beliefs. When people’s faith in the democratic system is undermined, democracy stops working. But as information security specialists know, outsider attacks are hard. Russian trolls, who don’t really understand how American politics works, have actually had a difficult time subverting it.

What you really need to worry about is when insiders go bad. And that is precisely what is happening in the wake of the 2020 presidential election. In traditional information systems, the insiders are the people who have both detailed knowledge and high-level access, allowing them to bypass security measures and more effectively subvert systems. In democracy, the insiders aren’t just the officials who manage voting but also the politicians who shape what people believe about politics. For four years, Donald Trump has been trying to dismantle our shared beliefs about democracy. And now, his fellow Republicans are helping him.

Democracy works when we all expect that votes will be fairly counted, and defeated candidates leave office. As the democratic theorist Adam Przeworski puts it, democracy is “a system in which parties lose elections.” These beliefs can break down when political insiders make bogus claims about general fraud, trying to cling to power when the election has gone against them.

It’s obvious how these kinds of claims damage Republican voters’ commitment to democracy. They will think that elections are rigged by the other side and will not accept the judgment of voters when it goes against their preferred candidate. Their belief that the Biden administration is illegitimate will justify all sorts of measures to prevent it from functioning.

It’s less obvious that these strategies affect Democratic voters’ faith in democracy, too. Democrats are paying attention to Republicans’ efforts to stop the votes of Democratic voters, and especially Black Democratic voters, from being counted. They, too, are likely to have less trust in elections going forward, and with good reason. They will expect that Republicans will try to rig the system against them. Mr. Trump is having a hard time winning unfairly, because he has lost in several states. But what if Mr. Biden’s margin of victory depended only on one state? What if something like that happens in the next election?

The real fear is that this will lead to a spiral of distrust and destruction. Republicans, who are increasingly committed to the notion that the Democrats are committing pervasive fraud, will do everything that they can to win power and to cling to power when they can get it. Democrats, seeing what Republicans are doing, will try to entrench themselves in turn. They suspect that if the Republicans really win power, they will not ever give it back. The claims of Republicans like Senator Mike Lee of Utah that America is not really a democracy might become a self-fulfilling prophecy.

More likely, this spiral will not directly lead to the death of American democracy. The U.S. federal system of government is complex and hard for any one actor or coalition to dominate completely. But it may turn American democracy into an unworkable confrontation between two hostile camps, each unwilling to make any concession to its adversary.

We know how to make voting itself more open and more secure; the literature is filled with vital and important suggestions. The more difficult problem is this. How do you shift the collective belief among Republicans that elections are rigged?

Political science suggests that partisans are more likely to be persuaded by fellow partisans, like Brad Raffensperger, the Republican secretary of state in Georgia, who said that election fraud wasn’t a big problem. But this would only be effective if other well-known Republicans supported him.

Public outrage, alternatively, can sometimes force officials to back down, as when people crowded in to denounce the Michigan Republican election officials who were trying to deny certification of their votes.

The fundamental problem, however, is Republican insiders who have convinced themselves that to keep and hold power, they need to trash the shared beliefs that hold American democracy together.

They may have long-term worries about the consequences, but they’re unlikely to do anything about those worries in the near-term unless voters, wealthy donors or others whom they depend on make them pay short-term costs.

This essay was written with Henry Farrell, and previously appeared in the New York Times.

New Synchronous Express Workflows for AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/new-synchronous-express-workflows-for-aws-step-functions/

Today, AWS is introducing Synchronous Express Workflows for AWS Step Functions. This is a new way to run Express Workflows to orchestrate AWS services at high-throughput.

Developers have been using asynchronous Express Workflows since December 2019 for workloads that require higher event rates and shorter durations. Customers were looking for ways to receive an immediate response from their Express Workflows without having to write additional code or introduce additional services.

What’s new?

Synchronous Express Workflows allow developers to quickly receive the workflow response without needing to poll additional services or build a custom solution. This is useful for high-volume microservice orchestration and fast compute tasks that communicate via HTTPS.
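As a rough illustration of what this looks like from code, here is a minimal Python sketch that calls a Synchronous Express Workflow with the AWS SDK for Python (boto3); the state machine ARN and the input payload are placeholders, not values from this post.

import json
import boto3

# Invoke a Synchronous Express Workflow and read its output.
# The ARN below is a placeholder; substitute the ARN of your own workflow.
sfn = boto3.client("stepfunctions")

response = sfn.start_sync_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:MyExpressWorkflow",
    input=json.dumps({"message": "This is bad service"}),
)

# The call returns only after the workflow finishes, so no polling is needed.
if response["status"] == "SUCCEEDED":
    print(json.loads(response["output"]))
else:
    print(response.get("error"), response.get("cause"))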

Getting started

You can build and run Synchronous Express Workflows using the AWS Management Console, the AWS Serverless Application Model (AWS SAM), the AWS Cloud Development Kit (AWS CDK), AWS CLI, or AWS CloudFormation.

To create Synchronous Express Workflows from the AWS Management Console:

  1. Navigate to the Step Functions console and choose Create State machine.
  2. Choose Author with code snippets. Choose Express.
    This generates a sample workflow definition that you can change once the workflow is created.
  3. Choose Next, then choose Create state machine. It may take a moment for the workflow to deploy.

Starting Synchronous Express Workflows

When starting an Express Workflow, a new Type parameter is required. To start a synchronous workflow from the AWS Management Console:

  1. Navigate to the Step Functions console.
  2. Choose an Express Workflow from the list.
  3. Choose Start execution.

    Here you have an option to run the Express Workflow as a synchronous or asynchronous type.
  4. Choose Synchronous and choose Start execution.

  5. Expand Details in the results message to view the output.

Monitoring, logging and tracing

Enable logging to inspect and debug Synchronous Express Workflows. All execution history is sent to CloudWatch Logs. Use the Monitoring and Logging tabs in the Step Functions console to gain visibility into Express Workflow executions.

The Monitoring tab shows six graphs with CloudWatch metrics for Execution Errors, Execution Succeeded, Execution Duration, Billed Duration, Billed Memory, and Executions Started. The Logging tab shows recent logs and the logging configuration, with a link to CloudWatch Logs.

Enable X-Ray tracing to view trace maps and timelines of the underlying components that make up a workflow. This helps to discover performance issues, detect permission problems, and track requests made to and from other AWS services.

Creating an example workflow

The following example uses Amazon API Gateway HTTP APIs to start an Express Workflow synchronously. The workflow analyses web form submissions for negative sentiment. It generates a case reference number and saves the data in an Amazon DynamoDB table. The workflow returns the case reference number and message sentiment score.

  1. The API endpoint is generated by an API Gateway HTTP API. A POST request containing the contact form’s message body is made to the API, which invokes the workflow.
  2. The message sentiment is analyzed by Amazon Comprehend.
  3. The Lambda function generates a case reference number, which is recorded in the DynamoDB table.
  4. The workflow choice state branches based on the detected sentiment.
  5. If a negative sentiment is detected, a notification is sent to an administrator via Amazon Simple Email Service (SES).
  6. When the workflow completes, it returns a ticketID to API Gateway.
  7. API Gateway returns the ticketID in the API response.
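The sample repository contains the actual function code. Purely to illustrate step 3 above, a case-reference Lambda function might look something like the following Python sketch; the table name, attribute names, and reference format here are assumptions for illustration, not the repository’s code.

import os
import uuid
import boto3

# Hypothetical sketch of a case-reference generator; not the sample repository's code.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "ContactFormCases"))  # assumed table name

def handler(event, context):
    # Generate a short reference number and persist the submission details.
    ticket_id = uuid.uuid4().hex[:6]
    table.put_item(Item={
        "ticketId": ticket_id,
        "message": event.get("message", ""),
        "sentiment": event.get("sentiment", "UNKNOWN"),
    })
    # Pass the reference on to the next state in the workflow.
    return {**event, "ticketId": ticket_id}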

The code for this application can be found in this GitHub repository. Three important files define the application and its resources:

Deploying the application

Clone the GitHub repository and deploy with the AWS SAM CLI:

$ git clone https://github.com/aws-samples/contact-form-processing-with-synchronous-express-workflows.git
$ cd contact-form-processing-with-synchronous-express-workflows 
$ sam build 
$ sam deploy -g

This deploys 12 resources, including a Synchronous Express Workflow, three Lambda functions, an API Gateway HTTP API endpoint, and all the AWS Identity & Access Management (IAM) roles and permissions required for the application to run.

Note the HTTP API endpoint and workflow ARN outputs.

Testing Synchronous Express Workflows:

A new StartSyncExecution AWS CLI command is used to run the synchronous Express Workflow:

aws stepfunctions start-sync-execution \
--state-machine-arn <your-workflow-arn> \
--input "{\"message\" : \"This is bad service\"}"

The response is received once the workflow completes. It contains the workflow output (sentiment and ticketid), the executionARN, and some execution metadata.

Starting the workflow from HTTP API Gateway:

The application deploys an API Gateway HTTP API, with a Step Functions integration. This is configured in the api.yaml file. It starts the state machine with the POST body provided as the input payload.

Trigger the workflow with a POST request, using the HTTP API endpoint generated in the deploy step. Enter the following curl command in the terminal:

curl --location --request POST '<YOUR-HTTP-API-ENDPOINT>' \
--header 'Content-Type: application/json' \
--data-raw '{"message":" This is bad service"}'

The POST request returns a 200 status response. The output field of the response contains the sentiment results (negative) and the generated ticketId (jc4t8i).

Putting it all together

You can use this application to power a web form backend to help expedite customer complaints. In the following example, a frontend application submits form data via an AJAX POST request. The application waits for the response, and presents the user with a message appropriate to the detected sentiment, and a case reference number.

If a negative sentiment is returned in the API response, the user is informed of their case number.
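If you want to exercise the backend from a script rather than a browser, the following Python sketch plays the role of that frontend. The endpoint URL is a placeholder, and the exact shape of the response body depends on how the integration maps the workflow output, so treat the parsing below as an assumption.

import json
import requests

# Placeholder endpoint; use the HTTP API endpoint output by `sam deploy`.
API_ENDPOINT = "https://abc123.execute-api.us-east-1.amazonaws.com"

resp = requests.post(API_ENDPOINT, json={"message": "This is bad service"})
resp.raise_for_status()

# The workflow result arrives as a JSON string in the response's "output" field,
# which this post notes contains the sentiment and the generated ticketId.
output = json.loads(resp.json()["output"])

if output.get("sentiment") == "NEGATIVE":   # exact field names and casing are assumptions
    print(f"We're sorry! Your case number is {output.get('ticketId')}")
else:
    print(f"Thanks for your feedback. Reference: {output.get('ticketId')}")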

Setting IAM permissions

Before a user or service can start a Synchronous Express Workflow, it must be granted permission to perform the states:StartSyncExecution API operation. This is a new state-machine level permission. Existing Express Workflows can be run synchronously once the correct IAM permissions for StartSyncExecution are granted.

The example application applies this to a policy within the HttpApiRole in the AWS SAM template. This role is added to the HTTP API integration within the api.yaml file.
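For illustration, granting that permission with the AWS SDK for Python could look like the following sketch; the role name, policy name, and state machine ARN are placeholders rather than values taken from the sample application.

import json
import boto3

iam = boto3.client("iam")

# Placeholders: substitute your own role name and workflow ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "states:StartSyncExecution",
        "Resource": "arn:aws:states:us-east-1:123456789012:stateMachine:MyExpressWorkflow",
    }],
}

# Attach the policy inline to the role that will call the workflow.
iam.put_role_policy(
    RoleName="MyHttpApiRole",
    PolicyName="AllowStartSyncExecution",
    PolicyDocument=json.dumps(policy),
)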

Conclusion

Step Functions Synchronous Express Workflows allow developers to receive a workflow response without having to poll additional services. This helps developers orchestrate microservices without needing to write additional code to handle errors, retries, and run parallel tasks. They can be invoked in response to events such as HTTP requests via API Gateway, from a parent state machine, or by calling the StartSyncExecution API action.

This feature is available in all Regions where AWS Step Functions is available. View the AWS Regions table to learn more.

For more serverless learning resources, visit Serverless Land.

Introducing Amazon API Gateway service integration for AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-service-integration-for-aws-step-functions/

AWS Step Functions now integrates with Amazon API Gateway to enable backend orchestration with minimal code and built-in error handling.

API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. These APIs enable applications to access data, business logic, or functionality from your backend services.

Step Functions allows you to build resilient serverless orchestration workflows with AWS services such as AWS Lambda, Amazon SNS, Amazon DynamoDB, and more. AWS Step Functions integrates with a number of services natively. Using Amazon States Language (ASL), you can coordinate these services directly from a task state.

What’s new?

The new Step Functions integration with API Gateway provides an additional resource type, arn:aws:states:::apigateway:invoke, and can be used with both Standard and Express Workflows. It allows customers to call API Gateway REST APIs and API Gateway HTTP APIs directly from a workflow, using one of two integration patterns:

  1. Request-Response: call a service and let Step Functions progress to the next state immediately after it receives an HTTP response. This pattern is supported by Standard and Express Workflows.
  2. Wait-for-Callback: call a service with a task token and have Step Functions wait until that token is returned with a payload. This pattern is supported by Standard Workflows.

The new integration is configured with the following Amazon States Language parameter fields:

  • ApiEndpoint: The API root endpoint.
  • Path: The API resource path.
  • Method: The HTTP request method.
  • Headers: Custom HTTP headers.
  • RequestBody: The body for the API request.
  • Stage: The API Gateway deployment stage.
  • AuthType: The authentication type.

Refer to the documentation for more information on API Gateway fields and concepts.

Getting started

The API Gateway integration with Step Functions is configured using AWS Serverless Application Model (AWS SAM), the AWS Command Line Interface (AWS CLI), AWS CloudFormation or from within the AWS Management Console.

To get started with Step Functions and API Gateway using the AWS Management Console:

  1. Go to the Step Functions page of the AWS Management Console.
  2. Choose Run a sample project and choose Make a call to API Gateway. The Definition section shows the ASL that makes up the example workflow, including the new API Gateway resource and its parameters.
  3. Review the example Definition, then choose Next.
  4. Choose Deploy resources.

This deploys a Step Functions standard workflow and a REST API with a /pets resource containing a GET and a POST method. It also deploys an IAM role with the required permissions to invoke the API endpoint from Step Functions.

The RequestBody field lets you customize the API’s request input. This can be a static input or a dynamic input taken from the workflow payload.

Running the workflow

  1. Choose the newly created state machine from the Step Functions page of the AWS Management Console
  2. Choose Start execution.
  3. Paste the following JSON into the input field:
    {
      "NewPet": {
        "type": "turtle",
        "price": 74.99
      }
    }
  4. Choose Start execution
  5. Choose the Retrieve Pet Store Data step, then choose the Step output tab.

This shows the successful responseBody output from the “Add to pet store” POST request and the response from the “Retrieve Pet Store Data” GET request.

Access control

The API Gateway integration supports AWS Identity and Access Management (IAM) authentication and authorization. This includes IAM roles, policies, and tags.

AWS IAM roles and policies offer flexible and robust access controls that can be applied to an entire API or individual methods. This controls who can create, manage, or invoke your REST API or HTTP API.

Tag-based access control allows you to set more fine-grained access control for all API Gateway resources. Specify tag key-value pairs to categorize API Gateway resources by purpose, owner, or other criteria. This can be used to manage access for both REST APIs and HTTP APIs.

API Gateway resource policies are JSON policy documents that control whether a specified principal (typically an IAM user or role) can invoke the API. Resource policies can be used to grant access to a REST API via AWS Step Functions. This could be for users in a different AWS account or only for specified source IP address ranges or CIDR blocks.

To configure access control for the API Gateway integration, set the AuthType parameter to one of the following:

  1. {"AuthType": "NO_AUTH"}
    Call the API directly without any authorization. This is the default setting.
  2. {"AuthType": "IAM_ROLE"}
    Step Functions assumes the state machine execution role and signs the request with credentials using Signature Version 4.
  3. {"AuthType": "RESOURCE_POLICY"}
    Step Functions signs the request with the service principal and calls the API endpoint.

Orchestrating microservices

Customers are already using Step Functions’ built-in failure handling, decision branching, and parallel processing to orchestrate application backends. Development teams are using API Gateway to manage access to their backend microservices. This helps to standardize request and response formats and decouple business logic from routing logic. It reduces complexity by allowing developers to offload responsibilities such as authentication, throttling, load balancing, and more. The new API Gateway integration enables developers to build robust workflows using API Gateway endpoints to orchestrate microservices. These microservices can be serverless or container-based.

The following example shows how to orchestrate a microservice with Step Functions using API Gateway to access AWS services. The example code for this application can be found in this GitHub repository.

To run the application:

  1. Clone the GitHub repository:
    $ git clone https://github.com/aws-samples/example-step-functions-integration-api-gateway.git
    $ cd example-step-functions-integration-api-gateway
  2. Deploy the application using AWS SAM CLI, accepting all the default parameter inputs:
    $ sam build && sam deploy -g

    This deploys 17 resources including a Step Functions standard workflow, an API Gateway REST API with three resource endpoints, three Lambda functions, and a DynamoDB table. Make a note of the StockTradingStateMachineArn value. You can find this in the command line output or in the Applications section of the AWS Lambda Console.

     

  3. Manually trigger the workflow from a terminal window:
    aws stepfunctions start-execution \
    --state-machine-arn <StockTradingStateMachineArnValue>

The response contains the execution ARN and the start date.

 

When the workflow is run, a Lambda function is invoked via a GET request from API Gateway to the /check resource. This returns a random stock value between 1 and 100. This value is evaluated in the Buy or Sell choice step, depending on whether it is less than or greater than 50. The Sell and Buy states use the API Gateway integration to invoke a Lambda function with a POST method. A stock_value is provided in the POST request body. A transaction_result is returned in the ResponseBody and provided to the next state. The final state writes a log of the transaction to a DynamoDB table.
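Purely as an illustration of the /check behaviour described above (not the repository’s code), the Lambda function behind that resource could be as simple as the following sketch; the response field name is an assumption.

import random

def handler(event, context):
    # Return a pseudo-random stock value between 1 and 100.
    return {"stock_price": random.randint(1, 100)}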

Defining the resource with an AWS SAM template

The Step Functions resource is defined in this AWS SAM template. The DefinitionSubstitutions field is used to pass template parameters to the workflow definition.

StockTradingStateMachine:
    Type: AWS::Serverless::StateMachine # More info about State Machine Resource: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-statemachine.html
    Properties:
      DefinitionUri: statemachine/stock_trader.asl.json
      DefinitionSubstitutions:
        StockCheckPath: !Ref CheckPath
        StockSellPath: !Ref SellPath
        StockBuyPath: !Ref BuyPath
        APIEndPoint: !Sub "${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com"
        DDBPutItem: !Sub arn:${AWS::Partition}:states:::dynamodb:putItem
        DDBTable: !Ref TransactionTable

The workflow is defined in a separate file (/statemachine/stock_trader.asl.json).

The following code block defines the Check Stock Value state. The new resource, arn:aws:states:::apigateway:invoke declares the API Gateway service integration type.

The Parameters object holds the required fields to configure the service integration. The Path and ApiEndpoint values are provided by the DefinitionSubstitutions field in the AWS SAM template. The RequestBody input is defined dynamically using Amazon States Language. The .$ at the end of the field name RequestBody specifies that the parameter uses a path to reference a JSON node in the input.

"Check Stock Value": {
  "Type": "Task",
  "Resource": "arn:aws:states:::apigateway:invoke",
  "Parameters": {
      "ApiEndpoint":"${APIEndPoint}",
      "Method":"GET",
      "Stage":"Prod",
      "Path":"${StockCheckPath}",
      "RequestBody.$":"$",
      "AuthType":"NO_AUTH"
  },
  "Retry": [
      {
          "ErrorEquals": [
              "States.TaskFailed"
          ],
          "IntervalSeconds": 15,
          "MaxAttempts": 5,
          "BackoffRate": 1.5
      }
  ],
  "Next": "Buy or Sell?"
},

The deployment process validates the ApiEndpoint value. The service integration builds the API endpoint URL from the information provided in the parameters block in the format https://[APIendpoint]/[Stage]/[Path].

Conclusion

The Step Functions integration with API Gateway provides customers with the ability to call REST APIs and HTTP APIs directly from a Step Functions workflow.

Step Functions’ built-in error handling helps developers reduce code and decouple business logic. Developers can combine this with API Gateway to offload responsibilities such as authentication, throttling, load balancing, and more. This enables developers to orchestrate microservices deployed on containers or Lambda functions via API Gateway without managing infrastructure.

This feature is available in all Regions where both AWS Step Functions and Amazon API Gateway are available. View the AWS Regions table to learn more. For pricing information, see Step Functions pricing. Normal service limits of API Gateway and service limits of Step Functions apply.

For more serverless learning resources, visit Serverless Land.

On That Dusseldorf Hospital Ransomware Attack and the Resultant Death

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/on-that-dusseldorf-hospital-ransomware-attack-and-the-resultant-death.html

Wired has a detailed story about the ransomware attack on a Dusseldorf hospital, the one that resulted in an ambulance being redirected to a more distant hospital and the patient dying. The police wanted to prosecute the ransomware attackers for negligent homicide, but the details were more complicated:

After a detailed investigation involving consultations with medical professionals, an autopsy, and a minute-by-minute breakdown of events, Hartmann believes that the severity of the victim’s medical diagnosis at the time she was picked up was such that she would have died regardless of which hospital she had been admitted to. “The delay was of no relevance to the final outcome,” Hartmann says. “The medical condition was the sole cause of the death, and this is entirely independent from the cyberattack.” He likens it to hitting a dead body while driving: while you might be breaking the speed limit, you’re not responsible for the death.

So while this might not be an example of death by cyberattack, the article correctly notes that it’s only a matter of time:

But it’s only a matter of time, Hartmann believes, before ransomware does directly cause a death. “Where the patient is suffering from a slightly less severe condition, the attack could certainly be a decisive factor,” he says. “This is because the inability to receive treatment can have severe implications for those who require emergency services.” Success at bringing a charge might set an important precedent for future cases, thereby deepening the toolkit of prosecutors beyond the typical cybercrime statutes.

“The main hurdle will be one of proof,” Urban says. “Legal causation will be there as soon as the prosecution can prove that the person died earlier, even if it’s only a few hours, because of the hack, but this is never easy to prove.” With the Düsseldorf attack, it was not possible to establish that the victim could have survived much longer, but in general it’s “absolutely possible” that hackers could be found guilty of manslaughter, Urban argues.

And where causation is established, Hartmann points out that exposure for criminal prosecution stretches beyond the hackers. Instead, anyone who can be shown to have contributed to the hack may also be prosecuted, he says. In the Düsseldorf case, for example, his team was preparing to consider the culpability of the hospital’s IT staff. Could they have better defended the hospital by monitoring the network more closely, for instance?

More on the Security of the 2020 US Election

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/more-on-the-security-of-the-2020-us-election.html

Last week I signed on to two joint letters about the security of the 2020 election. The first was as one of 59 election security experts, basically saying that while the election seems to have been both secure and accurate (voter suppression notwithstanding), we still need to work to secure our election systems:

We are aware of alarming assertions being made that the 2020 election was “rigged” by exploiting technical vulnerabilities. However, in every case of which we are aware, these claims either have been unsubstantiated or are technically incoherent. To our collective knowledge, no credible evidence has been put forth that supports a conclusion that the 2020 election outcome in any state has been altered through technical compromise.

That said, it is imperative that the US continue working to bolster the security of elections against sophisticated adversaries. At a minimum, all states should employ election security practices and mechanisms recommended by experts to increase assurance in election outcomes, such as post-election risk-limiting audits.

The New York Times wrote about the letter.

The second was a more general call for election security measures in the US:

Obviously elections themselves are partisan. But the machinery of them should not be. And the transparent assessment of potential problems or the assessment of allegations of security failure — even when they could affect the outcome of an election — must be free of partisan pressures. Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election.

These pile on to the November 12 statement from the Cybersecurity and Infrastructure Security Agency (CISA) and the other agencies of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee. While I’m not sure how they have enough comparative data to claim that “the November 3rd election was the most secure in American history,” they are certainly credible in saying that “there is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.”

We have a long way to go to secure our election systems from hacking. Details of what to do are known. Getting rid of touch-screen voting machines is important, but baseless claims of fraud don’t help.

Indistinguishability Obfuscation

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/indistinguishability-obfuscation.html

Quanta magazine recently published a breathless article on indistinguishability obfuscation — calling it the “‘crown jewel’ of cryptography” — and saying that it had finally been achieved, based on a recently published paper. I want to add some caveats to the discussion.

Basically, obfuscation makes a computer program “unintelligible” while preserving its functionality. Indistinguishability obfuscation is more relaxed. It just means that two different programs that perform the same functionality can’t be distinguished from each other. A good definition is in this paper.

This is a pretty amazing theoretical result, and one to be excited about. We can now do obfuscation, and we can do it using assumptions that make real-world sense. The proofs are kind of ugly, but that’s okay — it’s a start. What it means in theory is that we have a fundamental theoretical result that we can use to derive a whole bunch of other cryptographic primitives.

But — and this is a big one — this result is not even remotely close to being practical. We’re talking multiple days to perform pretty simple calculations, using massively large blocks of computer code. And this is likely to remain true for a very long time. Unless researchers increase performance by many orders of magnitude, nothing in the real world will make use of this work anytime soon.

But but, consider fully homomorphic encryption. It, too, was initially theoretically interesting and completely impractical. And now, after decades of work, it seems to be almost just-barely maybe approaching practically useful. This could very well be on the same trajectory, and perhaps in twenty to thirty years we will be celebrating this early theoretical result as the beginning of a new theory of cryptography.

Will There Really Be a 10-Year Absolute Statute of Limitations for Bad Loans in Bulgaria?

Post Syndicated from VassilKendov original http://kendov.com/10_years_absolute_statute_of_limitations/

Will there really be a 10-year absolute statute of limitations for bad loans in Bulgaria?

Those who follow this topic have most likely already heard about the amendments to the Obligations and Contracts Act (ZZD) voted by parliament on Friday.
They introduce the concept of a 10-year absolute statute of limitations for NATURAL PERSONS.
This is a new concept in Bulgarian law and is set out in a new provision of the ZZD, Article 112:

“With the expiration of a ten-year limitation period, monetary claims against natural persons are extinguished, regardless of any interruption of the period, except when the obligation has been deferred or rescheduled.”

Six further cases are also introduced in which the 10-year limitation period does not apply.

For borrowers, only case 1 matters:
“Claims arising from the commercial activity of sole traders or of natural persons who are partners in a company under Article 357 of the ZZD”

Based on what has been said so far, we can already answer the question posed in the title: will there be a 10-year absolute statute of limitations for bad loans in Bulgaria?

The answer is YES, but only for around 5% of borrowers. With the rest, we will need to meet in person to see what, if anything, they can do in order to benefit from it.

For meetings and consultations on banking troubles, please use the contact form provided.


Renegotiated loans (rescheduled, amended with annexes, or settled out of court) make up around 30% of every bank’s portfolio. And as the adopted amendments state, they apply to natural persons EXCEPT WHEN THE OBLIGATION HAS BEEN DEFERRED OR RESCHEDULED.
In short, if you have signed annexes or settlement agreements with the bank, there will be no 10-year limitation period for you.

There will be none for business owners either. When a company takes out a loan (an SME; it is different for corporate clients), in 99% of cases the banks require, as a condition for granting it, a personal guarantee from the manager and/or the owners of the business.
If the company stops repaying the loan, the banks go after the guarantors of the company loan for their money.
Here is what the exceptions say: “natural persons – partners in a company under Article 357 of the ZZD.”
That is, if you are a guarantor and a partner in the business, the 10-year limitation period does not apply to you.

In short, these amendments will help around 5% of people, mostly for debts that are not owed to a bank.
For the remaining 95%, the current methods and rules for unpaid bank loans continue to apply.

If you found this article useful, please share it on Facebook, so that people with existing bad loans are not misled.


Symantec Reports on Cicada APT Attacks against Japan

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/symantec-reports-on-cicada-apt-attacks-against-japan.html

Symantec is reporting on an APT group linked to China, named Cicada. They have been attacking organizations in Japan and elsewhere.

Cicada has historically been known to target Japan-linked organizations, and has also targeted MSPs in the past. The group is using living-off-the-land tools as well as custom malware in this attack campaign, including a custom malware — Backdoor.Hartip — that Symantec has not seen being used by the group before. Among the machines compromised during this attack campaign were domain controllers and file servers, and there was evidence of files being exfiltrated from some of the compromised machines.

The attackers extensively use DLL side-loading in this campaign, and were also seen leveraging the ZeroLogon vulnerability that was patched in August 2020.

Interesting details about the group’s tactics.

News article.

Classify your trash with Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/classify-your-trash-with-raspberry-pi/

Maker Jen Fox took to hackster.io to share a Raspberry Pi–powered trash classifier that tells you whether the trash in your hand is recyclable, compostable, or just straight-up garbage.

Jen reckons this project is beginner-friendly, as you don’t need any code to train the machine learning model, just a little to load it on Raspberry Pi. It’s also a pretty affordable build, costing less than $70 including a Raspberry Pi 4.


Hardware:

  • Raspberry Pi 4 Model B
  • Raspberry Pi Camera Module
  • Adafruit push button
  • Adafruit LEDs
Watch Jen giving a demo of her creation

Software

The code-free machine learning model is created using Lobe, a desktop tool that automatically trains a custom image classifier based on what objects you’ve shown it.

The image classifier correctly guessing it has been shown a bottle cap

Training the image classifier

Basically, you upload a tonne of photos and tell Lobe what object each of them shows. Jen told the empty classification model which photos were of compostable waste, which were of recyclable items, and which were of garbage or bio-hazardous waste. Of course, as Jen says, “the more photos you have, the more accurate your model is.”

Loading up Raspberry Pi

Birds eye view of Raspberry Pi 4 with a camera module connected
The Raspberry Pi Camera Module attached to Raspberry Pi 4

As promised, you only need a little bit of code to load the image classifier onto your Raspberry Pi. The Raspberry Pi Camera Module acts as the image classifier’s “eyes” so Raspberry Pi can find out what kind of trash you hold up for it.

The push button and LEDs are wired up to the Raspberry Pi GPIO pins, and they work together with the camera and light up according to what the image classifier “sees”.
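To give a feel for the small amount of glue code involved, here is a minimal Python sketch using the picamera and gpiozero libraries. The classify() function is a placeholder for however you load and query your exported Lobe model, and the GPIO pin numbers are assumptions; adjust both to match your own build.

from signal import pause
from picamera import PiCamera
from gpiozero import Button, LED

# Assumed wiring: change the pin numbers to match your circuit.
button = Button(2)
leds = {"recycle": LED(17), "compost": LED(27), "garbage": LED(22)}
camera = PiCamera()

def classify(image_path):
    # Placeholder: load your exported Lobe model and return a label such as
    # "recycle", "compost" or "garbage" for the given image.
    raise NotImplementedError

def on_press():
    camera.capture("/tmp/item.jpg")      # take a photo of the item being held up
    label = classify("/tmp/item.jpg")
    for led in leds.values():            # reset all LEDs
        led.off()
    if label in leds:
        leds[label].on()                 # light the LED for the predicted category

button.when_pressed = on_press
pause()  # keep the script running, waiting for button presses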

Here’s the Fritzing diagram showing how to wire the push button and LEDs to the Raspberry Pi GPIO pins.

You’ll want to create a snazzy case so your trash classifier looks good mounted on the wall. Jen cut holes in a cardboard box to make sure that the camera could “see” out, the user can see the LEDs, and the push button is accessible. Remember to leave room for Raspberry Pi’s power supply to plug in.

Jen’s hand-painted case mounted to the wall, having a look at a plastic bag

Jen has tonnes of other projects on her Hackster profile — check out the micro:bit Magic Wand.


Monitoring dashboard for AWS ParallelCluster

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/monitoring-dashboard-for-aws-parallelcluster/

This post is contributed by Nicola Venuti, Sr. HPC Solutions Architect.

AWS ParallelCluster is an AWS-supported, open source cluster management tool that makes it easy to deploy and manage High Performance Computing (HPC) clusters on AWS. While AWS ParallelCluster includes many benefits for its users, it has not provided straightforward support for monitoring your workloads. In this post, I walk you through an add-on extension that you can use to help monitor your cloud resources with AWS ParallelCluster.

Product overview

AWS ParallelCluster offers many benefits that hide the complexity of the underlying platform and processes. Some of these benefits include:

  • Automatic resource scaling
  • A batch scheduler
  • Easy cluster management that allows you to build and rebuild your infrastructure without the need for manual actions
  • Seamless migration to the cloud supporting a wide variety of operating systems.

While all of these benefits streamline processes for you, one crucial task for HPC stakeholders that customers have found challenging with AWS ParallelCluster is monitoring.

Resources in an HPC cluster like schedulers, instances, storage, and networking are important assets to monitor, especially when you pay for what you use. Organizing and displaying all these metrics becomes challenging as the scale of the infrastructure increases or changes over time, which is the typical scenario in the cloud.

Given these challenges, AWS wants to provide AWS ParallelCluster users with two additional benefits: 1/ facilitate the optimization of price performance and 2/ visualize and monitor the components of cost for their workloads.

The HPC Solution Architects team has created an AWS ParallelCluster add-on that is easy to use and customize. In this blog post, I demonstrate how Grafana – a platform for monitoring and observability – can run on AWS ParallelCluster to enable infrastructure monitoring.

AWS ParallelCluster integrated dashboard and Grafana add-on dashboards

Many customers want a tool that consolidates information coming from different AWS services and makes it easy to monitor the computing infrastructure created and managed by AWS ParallelCluster. The AWS ParallelCluster team – just like every other service team at AWS – is open and keen to hear from our customers.

This feedback helps us inform our product roadmap and build new, customer-focused features.

We recently released AWS ParallelCluster 2.10.0 based on customer feedback. Here are some key features that have been released with it:

  • Support for the CentOS 8 operating system: customers can now choose CentOS 8 as their base operating system to run their clusters, for both x86 and Arm architectures.
  • Support for P4d instances along with NVIDIA GPUDirect Remote Direct Memory Access (RDMA).
  • FSx for Lustre enhancements (support for FSx AutoImport and HDD-based file system options).
  • And finally, a new CloudWatch dashboard designed to help aggregate cluster information, metrics, and logging already available in CloudWatch.

The last of these features, the integrated CloudWatch dashboard, is designed to help customers face the cluster monitoring challenge mentioned above. In addition, the Grafana dashboards demonstrated later in this blog are a complementary add-on to this new, CloudWatch-based dashboard. There are some key differences between the two, summarized in the table below.

The CloudWatch-based dashboard does not require any additional components or agents running on either the head or the compute nodes. It aggregates the metrics that AWS ParallelCluster already pushes to CloudWatch into a single dashboard. This translates into zero overhead for the cluster, but at the expense of less flexibility and expandability.

The Grafana-based dashboard, instead, offers additional flexibility and customizability, and requires a few lightweight components installed on the head and compute nodes. Another key difference between the two is that the CloudWatch-based dashboard requires AWS credentials, configured IAM users and roles, and access to the AWS Management Console, while the Grafana-based dashboard has its own built-in authentication and authorization system, independent of the AWS account. This matters when end users or HPC admins do not have permission (or are simply not willing) to access the AWS Management Console to monitor their clusters.

CloudWatch-based dashboard                   | Grafana-based monitoring add-on
No additional components                     | Grafana + Prometheus
No overhead                                  | Minimal overhead
Little to no expandability                   | Fully customizable
Requires AWS credentials and configured IAM  | Custom credentials, no AWS access required
Viewed in the AWS Management Console         | Custom user interface

Grafana add-on dashboards on GitHub

There are many components of an HPC cluster to monitor. Moreover, the cluster is built on a system that is continuously evolving: AWS ParallelCluster and AWS services in general are updated and new features are released often. Because of this, we wanted a monitoring solution built on flexible components that can evolve rapidly. So, we released a Grafana add-on as an open-source project in this GitHub repository.

Releasing it as an open-source project allows us to more easily and frequently release new updates and enhancements. It also enables users to customize and extend its functionality by adding new dashboards or extending the existing ones (for example, with GPU monitoring or tracking of EC2 Spot Instance interruptions).

At the moment, this new solution is composed of the following open-source components:

  • Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on, and understand your metrics. It also helps you create, explore, and share dashboards, fostering a data-driven culture.
  • Prometheus is an open source project for systems and service monitoring from the Cloud Native Computing Foundation. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
  • The Prometheus Pushgateway is an open source tool that allows ephemeral and batch jobs to expose their metrics to Prometheus.
  • NGINX is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server.
  • Prometheus-Slurm-Exporter is a Prometheus collector and exporter for metrics extracted from the Slurm resource scheduling system.
  • Node_exporter is a Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.

Note: while almost all components are under the Apache2 license, only Prometheus-Slurm-Exporter is licensed under GPLv3. You should be aware of this license and accept its terms before proceeding and installing this component.
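Because the stack includes a Prometheus Pushgateway, batch jobs running on the cluster can publish their own metrics and have them appear in Grafana via Prometheus. Here is a minimal Python sketch using the prometheus_client library; the Pushgateway address assumes it is listening on the head node at its default port 9091, which may differ in your setup.

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
gauge = Gauge("job_last_runtime_seconds",
              "Wall-clock duration of the last batch job run",
              registry=registry)
gauge.set(421.7)  # e.g. the measured duration of your job

# Assumed address: the Pushgateway running on the head node, default port 9091.
push_to_gateway("localhost:9091", job="my_batch_job", registry=registry)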

The dashboards

I demonstrate a few different Grafana dashboards in this post. These dashboards are available for you in the AWS Samples GitHub repository. In addition, two dashboards – still under development – are proposed in beta. The first shows the cluster logs coming from Amazon CloudWatch Logs. The second shows the costs associated with each AWS service used.

All these dashboards can be used as they are or customized as you need:

  • AWS ParallelCluster Summary – This is the main dashboard that shows general monitoring info and metrics for the whole cluster. It includes: Slurm metrics, compute related metrics, storage performance metrics, and network usage.

ParallelCluster dashboard

  • Master Node Details – This dashboard shows detailed metrics for the head node, including CPU, memory, network, and storage utilization.

Master node details

  • Compute Node List – This dashboard shows the list of the available compute nodes. Each entry is a link to a more detailed dashboard: the compute node details (see the following image).

Compute node list

  • Compute Node Details – Similar to the head node details, this dashboard shows detailed metrics for the compute nodes.
  • Cluster Logs – This dashboard (still under development) shows all the logs of your HPC cluster. The logs are pushed by AWS ParallelCluster to Amazon CloudWatch Logs and are reported here.

Cluster logs

  • Cluster Costs (also under development) – This dashboard shows the costs associated with the AWS services used by your cluster. It includes EC2, Amazon EBS, FSx, Amazon S3, and Amazon EFS, as well as an aggregation of the costs of every single component.

Cluster costs

How to deploy it

You can simply use the post-install script that you can find in this GitHub repo as it is, or customize it as you need. For instance, you might want to change your Grafana password to something more secure and meaningful for you, or you might want to customize some dashboards by adding additional components to monitor.

#Load AWS Parallelcluster environment variables
. /etc/parallelcluster/cfnconfig
#get GitHub repo to clone and the installation script
monitoring_url=$(echo ${cfn_postinstall_args}| cut -d ',' -f 1 )
monitoring_dir_name=$(echo ${cfn_postinstall_args}| cut -d ',' -f 2 )
monitoring_tarball="${monitoring_dir_name}.tar.gz"
setup_command=$(echo ${cfn_postinstall_args}| cut -d ',' -f 3 )
monitoring_home="/home/${cfn_cluster_user}/${monitoring_dir_name}"
case ${cfn_node_type} in
    MasterServer)
        wget ${monitoring_url} -O ${monitoring_tarball}
        mkdir -p ${monitoring_home}
        tar xvf ${monitoring_tarball} -C ${monitoring_home} --strip-components 1
    ;;
    ComputeFleet)
    ;;
esac
#Execute the monitoring installation script
bash -x "${monitoring_home}/parallelcluster-setup/${setup_command}" >/tmp/monitoring-setup.log 2>&1
exit $?

The proposed post-install script takes care of installing and configuring everything for you, although a few additional parameters are needed in the AWS ParallelCluster configuration file: the post-install argument, additional IAM policies, a custom security group, and a tag. You can find an AWS ParallelCluster template here.

Please note that, at the moment, the post install script has only been tested using Amazon Linux 2.

Add the following parameters to your AWS ParallelCluster configuration file, and then build up your cluster:

base_os = alinux2
post_install = s3://<my-bucket-name>/post-install.sh
post_install_args = https://github.com/aws-samples/aws-parallelcluster-monitoring/tarball/main,aws-parallelcluster-monitoring,install-monitoring.sh
additional_iam_policies = arn:aws:iam::aws:policy/CloudWatchFullAccess,arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess,arn:aws:iam::aws:policy/AmazonSSMFullAccess,arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
tags = {"Grafana" : "true"}

Make sure that port 80 and port 443 of your head node are accessible from the internet or from your network. You can achieve this by creating the appropriate security group via the AWS Management Console or via the AWS Command Line Interface (AWS CLI), as in the following example:

aws ec2 create-security-group --group-name my-grafana-sg --description "Open Grafana dashboard ports" --vpc-id vpc-1a2b3c4d
aws ec2 authorize-security-group-ingress --group-id sg-12345 --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-12345 --protocol tcp --port 80 --cidr 0.0.0.0/0

There is more information on how to create your security groups here.

Finally, set the additional_sg parameter in the [VPC] section of your AWS ParallelCluster configuration file.

After your cluster is created, you can open a web-browser and connect to https://your_public_ip. You should see a landing page with links to the Prometheus database service and the Grafana dashboards.

Size your compute and head nodes

We looked into the resource utilization of the components required for this monitoring solution. In particular, the Prometheus node exporter installed on the compute nodes uses a small (almost negligible) amount of CPU, memory, and network resources.

Depending on the size of your cluster, the components installed on the head node (see the list in the “Solution components” section of this blog) might require additional CPU, memory, and network capacity. In particular, if you expect to run a large-scale cluster (hundreds of instances), we recommend using a larger instance type than you originally planned, because of the higher volume of network traffic generated by the compute nodes continuously pushing metrics to the head node.

We cannot advise you exactly how to size your head node because there are many factors that can influence resource utilization. The best recommendation we could give you is to use the Grafana dashboard itself to monitor the CPU, memory, disk, and most importantly, network utilization, and then resize your head node (or other components) accordingly.

Conclusions

This monitoring solution for AWS ParallelCluster is an add-on for your HPC Cluster on AWS and a complement to new features recently released in AWS ParallelCluster 2.10. This blog aimed to provide you with instructions and basic tooling that can be customized based on your needs and that can be adapted quickly as the underlying AWS services evolve.

We are open to hearing from you and receiving feedback, issues, and pull requests to extend its functionality.

The US Military Buys Commercial Location Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/the-us-military-buys-commercial-location-data.html

Vice has a long article about how the US military buys commercial location data worldwide.

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

This isn’t new, this isn’t just data of non-US citizens, and this isn’t just the US military. We have lots of instances where the government buys data that it cannot legally collect itself.

Some app developers Motherboard spoke to were not aware who their users’ location data ends up with, and even if a user examines an app’s privacy policy, they may not ultimately realize how many different industries, companies, or government agencies are buying some of their most sensitive data. U.S. law enforcement purchase of such information has raised questions about authorities buying their way to location data that may ordinarily require a warrant to access. But the USSOCOM contract and additional reporting is the first evidence that U.S. location data purchases have extended from law enforcement to military agencies.

Tracking the latest server images in Amazon EC2 Image Builder pipelines

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/tracking-the-latest-server-images-in-amazon-ec2-image-builder-pipelines/

This post courtesy of Anoop Rachamadugu, Cloud Architect at AWS

The Amazon EC2 Image Builder service helps users to build and maintain server images. The images created by EC2 Image Builder can be used with Amazon Elastic Compute Cloud (EC2) and on-premises. Image Builder reduces the effort of keeping images up-to-date and secure by providing a graphical interface, built-in automation, and AWS-provided security settings. Customers have told us that they manage multiple server images and are looking for ways to track the latest server images created by the pipelines.

In this blog post, I walk through a solution that uses AWS Lambda and AWS Systems Manager (SSM) Parameter Store. It tracks and updates the latest Amazon Machine Image (AMI) IDs every time an Image Builder pipeline is run. With Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to run. In this case, the Lambda function is invoked upon the completion of the image builder pipeline. Standard SSM parameters are available at no additional charge.

Users can reference the SSM parameters in automation scripts and AWS CloudFormation templates providing access to the latest AMI ID for your EC2 infrastructure. Consider the use case of updating Amazon Machine Image (AMI) IDs for the EC2 instances in your CloudFormation templates. Normally, you might map AMI IDs to specific instance types and Regions. Then to update these, you would manually change them in each of your templates. With the SSM parameter integration, your code remains untouched and a CloudFormation stack update operation automatically fetches the latest Parameter Store value.

Overview

This solution uses a Lambda function written in Python that subscribes to an Amazon Simple Notification Service (SNS) topic. The Lambda function and the SNS topic are deployed using AWS SAM CLI. Once deployed, the SNS topic must be configured in an existing Image Builder pipeline. This results in the Lambda function being invoked at the completion of the Image Builder pipeline.

When a Lambda function subscribes to an SNS topic, it is invoked with the payload of the published messages. The Lambda function receives the message payload as an input parameter. It first checks the payload to see whether the image state is available; if it is, the function retrieves the AMI ID from the payload and updates the SSM parameter.

EC2 Image builder architecture diagram

Prerequisites

To get started with this solution, the following is required:

  • An AWS account with the AWS CLI installed and configured
  • The AWS SAM CLI installed
  • An existing EC2 Image Builder pipeline

Deploying the solution

The solution consists of two files, which can be downloaded from the amazon-ec2-image-builder GitHub repository.

  1. The Python file image-builder-lambda-update-ssm.py contains the code for the Lambda function. It first checks the SNS message payload to determine if the image is available. If it is, it extracts the AMI ID from the SNS message payload and updates the SSM parameter specified. The 'ssm_parameter_name' variable specifies the SSM parameter path where the AMI ID should be stored and updated. The Lambda function finishes by adding tags to the SSM parameter. (An illustrative sketch of this handler appears after this list.)
  2. The template.yaml file is an AWS SAM template. It deploys the Lambda function, SNS topic, and IAM role required for the Lambda function. I use Python 3.7 as the runtime and assign 256 MB of memory to the Lambda function. The IAM policy gives the Lambda function permissions to retrieve and update SSM parameters. Deploy this application using the AWS SAM CLI guided deploy:
    sam deploy --guided

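Purely as an illustration of the flow in step 1 above (the repository's Python file is the reference implementation), a handler along these lines could work. The payload field names ("state", "outputResources"), the tag value, and the parameter path are assumptions about the Image Builder SNS notification, not code copied from the repository:

# Illustrative sketch only: the payload field names ("state", "outputResources")
# and the tag value are assumptions about the Image Builder SNS notification,
# not code copied from the repository.
import json

import boto3

ssm = boto3.client("ssm")
SSM_PARAMETER_NAME = "/ec2-imagebuilder/latest"  # matches the parameter verified later in this post


def lambda_handler(event, context):
    # SNS delivers the Image Builder notification as a JSON string in the message body
    message = json.loads(event["Records"][0]["Sns"]["Message"])

    # Only act on images that finished building successfully
    if message.get("state", {}).get("status") != "AVAILABLE":
        return "Image not available; SSM parameter not updated"

    # Take the AMI ID of the newly built image from the notification
    ami_id = message["outputResources"]["amis"][0]["image"]

    # Overwrite the parameter so downstream templates always resolve to the latest AMI
    ssm.put_parameter(Name=SSM_PARAMETER_NAME, Value=ami_id, Type="String", Overwrite=True)

    # Tag the parameter so it is easy to trace back to Image Builder
    ssm.add_tags_to_resource(
        ResourceType="Parameter",
        ResourceId=SSM_PARAMETER_NAME,
        Tags=[{"Key": "Source", "Value": "EC2ImageBuilder"}],
    )
    return f"Updated {SSM_PARAMETER_NAME} to {ami_id}"
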
After deploying the application, note the ARN of the created SNS topic. Next, update the infrastructure settings of an existing Image Builder pipeline with this newly created SNS topic. This results in the Lambda function being invoked upon the completion of the image builder pipeline.

Configuration details

Verifying the solution

After the Image Builder pipeline completes, use the AWS CLI or the AWS Management Console to verify the updated SSM parameter. To verify via the AWS CLI, run the following commands to retrieve the parameter and list its tags:

aws ssm get-parameter --name '/ec2-imagebuilder/latest'
aws ssm list-tags-for-resource --resource-type "Parameter" --resource-id '/ec2-imagebuilder/latest'

To verify via the AWS Management Console, navigate to the Parameter Store under AWS Systems Manager. Search for the parameter /ec2-imagebuilder/latest:

AWS Systems Manager: Parameter Store

Select the Tags tab to view the tags attached to the SSM parameter:

Image builder tags list

Referencing the SSM Parameter in CloudFormation templates

Users can reference the SSM parameters in automation scripts and AWS CloudFormation templates, providing access to the latest AMI ID for their EC2 infrastructure. This sample code shows how to reference the SSM parameter in a CloudFormation template.

Parameters:
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/ec2-imagebuilder/latest'

Resources:
  Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: !Ref LatestAmiId

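The same parameter can be resolved from automation scripts. As a small sketch, a script could read the latest AMI ID at launch time and use it directly; the parameter path matches this post, while the instance type is an example value:

# Sketch of an automation script that resolves the latest AMI ID at launch time.
# The parameter path matches this post; the instance type is an example value.
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# Read the AMI ID most recently published by the Image Builder pipeline
ami_id = ssm.get_parameter(Name="/ec2-imagebuilder/latest")["Parameter"]["Value"]

# Launch a single instance from the latest image
ec2.run_instances(ImageId=ami_id, InstanceType="t3.micro", MinCount=1, MaxCount=1)
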
Conclusion

In this blog post, I demonstrate a solution that allows users to track and update the latest AMI ID created by the Image Builder pipelines. The Lambda function retrieves the AMI ID of the image created by a pipeline and updates an AWS Systems Manager parameter. This Lambda function is triggered via an SNS topic configured in an Image Builder pipeline.

The solution is deployed using the AWS SAM CLI. I also show how users can reference Systems Manager parameters in AWS CloudFormation templates, providing access to the latest AMI ID for their EC2 infrastructure.

The amazon-ec2-image-builder-samples GitHub repository provides a number of examples for getting started with EC2 Image Builder. Image Builder can make it easier for you to build virtual machine (VM) images.

Performing canary deployments for service integrations with Amazon API Gateway

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/performing-canary-deployments-for-service-integrations-with-amazon-api-gateway/

This post authored by Dhiraj Thakur and Sameer Goel, Solutions Architects at AWS.

When building serverless web applications, it is common to use AWS Lambda functions as the compute layer for business logic. To manage canary releases, it’s best practice to use Lambda deployment preferences. However, if you use Amazon API Gateway service integrations instead of Lambda functions, it is necessary to manage the canary release at the API level. This post shows how to use canary releases in REST APIs to gradually deploy changes to serverless applications.

Overview

Modern applications frequently deploy updates to implement new features. But updating or changing a production application is often risky and may introduce bugs. Canary deployments are a popular strategy to help mitigate this risk.

In a canary deployment, you partially deploy a new software feature and shift some percentage of traffic to a new version of the application. This allows you to verify stability and reduce risk associated with the new release. After gaining confidence in the new version, you incrementally shift more traffic to it until all traffic flows to the new release. Additionally, a canary deployment can be a cost-effective approach as there is no need to duplicate application resources, compared with other deployment strategies such as blue/green deployments.

In this example, there are two service versions deployed with API Gateway. The canary version receives 10% of traffic and the remaining 90% is routed to the stable version.

Canary deploy example

After deploying the new version, you can test the health and performance of the new version. Once you are confident that it is ready for release, you can promote the canary version and send 100% of traffic to this API version.

Promoted deployment example

In this post, I show how to use the AWS Serverless Application Model (AWS SAM) to build a canary release with a REST API in API Gateway. AWS SAM is an open-source framework for building serverless applications; it enables developers to define and deploy canary releases and then shift the traffic programmatically. In this example, AWS SAM creates the canary settings necessary to divide traffic and the IAM role used by API Gateway.

API Gateway canary deployment example

For this tutorial, a REST API integrates directly with Amazon DynamoDB. This returns three data attributes from the DynamoDB table. In the canary version, the code is modified to provide additional information from the table.

Create the Amazon API Gateway REST API and other resources

Download the code for this post from https://github.com/aws-samples/amazon-api-gateway-canary-deployment. The template.yaml file is the AWS SAM configuration for the application, and api.yaml is the OpenAPI configuration for the API. Deploy this application by following the instructions in the README.md file.

The deployment creates an empty DynamoDB table called “<sam-stack-name>-DataTable-*” and an API Gateway REST API called “Canary Deployment” with the stage “PROD”.

  1. Run the Amazon DynamoDB put-item command to create a new item in the DynamoDB table from the AWS CLI. Ensure that you have configured the AWS CLI; refer to the quickstart guide to learn more. Replace <tablename> with the DynamoDB table name.
    aws dynamodb put-item --table-name <tablename> --item '{"country":{"S":"Germany"},"runner-up":{"S":"France"},"winner":{"S":"Italy"},"year":{"S":"2006"}}' --return-consumed-capacity TOTAL

    It returns a success message:

    Update Amazon DynamoDB output

    You can verify the record in the DynamoDB table in the AWS Management Console:

    Scan of Amazon DynamoDB table

  2. Select the REST API “Canary Deployment” in Amazon API Gateway. Choose “GET” under the resource section. In the Integration Request, you see the Mapping Template:
    {
      "Key": {
        "year": {
          "S": "$input.params("year")"
        }
      },
      "TableName": "<stack-name>-DataTable-<random-string>"
    }

    The TableName indicates which table is used in the REST API call, and the value for year is extracted from the request URL using $input.params('year'). The Integration Response is an HTTP response encapsulating the backend response; its mapping template looks like this:

    {
      "year": "$input.path('$.Item.year.S')",
      "country": "$input.path('$.Item.country.S')",
      "winner": "$input.path('$.Item.winner.S')"
    }

    It returns the “country”, “year”, “winner” attributes.

  3. You can also check the logging and tracing configuration in the API stage, as shown in the following settings. You can see that Amazon CloudWatch Logs are enabled for the API, which helps you check the health of the canary API version. For example, a response code of 2xx indicates that the operation was successful; other codes indicate either a client error (4xx) or a server error (5xx). See the API Gateway documentation for status code details. Analyze the status of the API in the logs before promoting the canary.

    Enabling logs on the Amazon API Gateway console

If you invoke the API endpoint URL in your browser, you can see it returns “country”, “year” and “winner”, as expected from the DynamoDB table.

Invoking endpoint from browser example

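You can also issue the same request from a script. The following minimal sketch assumes you paste the full PROD invoke URL, including the year parameter, exactly as you would enter it in the browser:

# Minimal check of the deployed API from Python instead of a browser.
# INVOKE_URL is a placeholder: paste the full PROD invoke URL, including the
# year parameter, exactly as you would use it in the browser.
import json
import urllib.request

INVOKE_URL = "https://<api-id>.execute-api.<region>.amazonaws.com/PROD/..."  # placeholder

with urllib.request.urlopen(INVOKE_URL) as response:
    body = json.loads(response.read())

print(body)  # expect "country", "year", and "winner" before the canary release
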
Next, set up the canary release deployment to create a new version of the deployed API and route 10% of the API traffic to it.

Canary deployment

You can now create a new version of the API using the AWS SAM template, which changes the number of attributes returned. With the new version of the API, the additional attribute “runner-up” is returned from the DynamoDB table. For the initial deployment, 10% of API traffic is routed to this API version.

  1. Go to the canary-stack directory and deploy the application. Be sure to use the same stack name that you used for the previous deployment:
    sam deploy -g

    AWS CloudFormation deploys the canary version and configures the API to route 10% of traffic to the new version. You can validate this by checking the canary settings in the PROD stage: "Percentage of requests directed to Canary" (the new version) is 10% and "Percentage of requests directed to Prod" (the previous version) is 90%. A short boto3 check of these settings is sketched after this list.
  2. Check the Integration Response. The modified template looks like this:
    {
      "year": "$input.path('$.Item.year.S')",
      "country": "$input.path('$.Item.country.S')",
      "winner": "$input.path('$.Item.winner.S')",
      "runner-up": "$input.path('$.Item.runner-up.S')"
    }
  3. Now, test the canary deployment using the API endpoint URL. You can refresh the browser and see the "runner-up" attribute in a small percentage of responses, which demonstrates that 10% of the traffic is routed to the canary. If you don't see this new attribute, even after multiple refreshes, clear your browser cache. Reviewing the Integration Response, you can see that the template now includes the additional attribute "runner-up". This returns "country", "year", "winner" and "runner-up", as per the new canary release requirement.

    Testing response in browser after change

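As noted in step 1 of the list above, you can validate the canary settings programmatically as well as in the console. A minimal boto3 check might look like this; the REST API ID is a placeholder you can copy from the API Gateway console or the stack outputs:

# Minimal sketch: read the canary traffic percentage for the PROD stage.
# REST_API_ID is a placeholder; copy the real value from the API Gateway
# console or the stack outputs.
import boto3

apigateway = boto3.client("apigateway")
REST_API_ID = "abc123"  # placeholder

stage = apigateway.get_stage(restApiId=REST_API_ID, stageName="PROD")
canary = stage.get("canarySettings", {})
print("Canary share of traffic:", canary.get("percentTraffic", 0))
print("Canary deployment ID:", canary.get("deploymentId"))
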
Analyze Amazon CloudWatch Logs

You can analyze the health of the canary version via Amazon CloudWatch Logs. To ensure that there is data in CloudWatch Logs, refresh your browser several times when accessing the API URL.

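If you prefer to pull these logs with a script rather than the console, a sketch along these lines lists messages from recent execution log streams. The API ID and stage name are placeholders, and the log group name follows the naming convention described in step 3 below:

# Sketch: print messages from the most recently active execution log streams.
# API_ID and STAGE are placeholders for your deployed API.
import boto3

logs = boto3.client("logs")
API_ID = "abc123"  # placeholder
STAGE = "PROD"
log_group = f"API-Gateway-Execution-Logs_{API_ID}/{STAGE}"

# Most recently active streams first, so a recent execution is near the top
streams = logs.describe_log_streams(
    logGroupName=log_group,
    orderBy="LastEventTime",
    descending=True,
    limit=5,
)["logStreams"]

for stream in streams:
    events = logs.get_log_events(logGroupName=log_group, logStreamName=stream["logStreamName"])
    for event in events["events"]:
        print(event["message"])
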
  1. In the AWS Management Console, navigate to Services -> CloudWatch.
  2. Choose the Region that matches your API Gateway Region, then select Logs in the left menu.
  3. The logs for API Gateway are named based on the ID of the API, in the form "API-Gateway-Execution-Logs_<api id>/<api stage>".
    Viewing the logs, you can see a list of log streams with GUID identifiers. Use the Last Event Time column for a date/time stamp and find a recent execution.
  4. Analyze the canary log to confirm that the REST API call is successful.
Canary promotion options

Promote or delete the canary version

To roll back to the initial version, choose Delete Canary or set "Percentage of requests directed to Canary" to 0. If the Amazon CloudWatch analysis shows that the canary version is operating successfully, you are ready to promote the canary to receive all API traffic.

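The console steps below promote the canary with a single action. If you would rather script the rollback, or shift the canary percentage gradually instead of promoting it all at once, an update-stage call along these lines should work. The REST API ID is a placeholder, and the patch path assumes the /canarySettings/percentTraffic stage property shown in the console:

# Sketch: roll the canary back by setting its traffic share to 0%.
# REST_API_ID is a placeholder; the patch path is assumed to be the
# /canarySettings/percentTraffic stage property shown in the console.
import boto3

apigateway = boto3.client("apigateway")
REST_API_ID = "abc123"  # placeholder

apigateway.update_stage(
    restApiId=REST_API_ID,
    stageName="PROD",
    patchOperations=[
        # Use a value such as "50.0" here to shift traffic gradually instead
        {"op": "replace", "path": "/canarySettings/percentTraffic", "value": "0.0"},
    ],
)
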
  1. Navigate to the Canary tab and choose Promote Canary.

    Promoting the canary in the Amazon API Gateway console

  2. Choose Update to accept the settings. This sends 100% of traffic to the new version.

    Canary promotion options

Cleanup

See the repo’s README.md for cleanup instructions.

Conclusion

Canary deployments are a recommended practice for testing new versions of applications. This blog post shows how to implement canary deployments for service integrations in API Gateway. I walk through how to analyze the logs generated for canary requests and promote the canary to complete the deployment. Using AWS SAM, you deploy a canary in API Gateway with a predefined routing configuration and strategy.

To learn more, read Building APIs with Amazon API Gateway and Implementing safe AWS Lambda deployments with AWS CodeDeploy.

Michael Ellis as NSA General Counsel

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/michael-ellis-as-nsa-general-counsel.html

Over at Lawfare, Susan Hennessey has an excellent primer on how Trump loyalist Michael Ellis got to be the NSA General Counsel, over the objections of NSA Director Paul Nakasone, and what Biden can and should do about it.

While important details remain unclear, media accounts include numerous indications of irregularity in the process by which Ellis was selected for the job, including interference by the White House. At a minimum, the evidence of possible violations of civil service rules demands immediate investigation by Congress and the inspectors general of the Department of Defense and the NSA.

The moment also poses a test for President-elect Biden’s transition, which must address the delicate balance between remedying improper politicization of the intelligence community, defending career roles against impermissible burrowing, and restoring civil service rules that prohibit both partisan favoritism and retribution. The Biden team needs to set a marker now, to clarify the situation to the public and to enable a new Pentagon general counsel to proceed with credibility and independence in investigating and potentially taking remedial action upon assuming office.

The NSA general counsel is not a Senate-confirmed role. Unlike the general counsels of the CIA, Pentagon and Office of the Director of National Intelligence (ODNI), all of which require confirmation, the NSA's general counsel is a senior career position whose occupant is formally selected by and reports to the general counsel of the Department of Defense. It's an odd setup — and one that obscures certain realities, like the fact that the NSA general counsel in practice reports to the NSA director. This structure is the source of a perennial legislative fight. Every few years, Congress proposes laws to impose a confirmation requirement as more appropriately befits an essential administration role, and every few years, the executive branch opposes those efforts as dangerously politicizing what should be a nonpolitical job.

While a lack of Senate confirmation reduces some accountability and legislative screening, this career selection process has the benefit of being designed to eliminate political interference and to ensure the most qualified candidate is hired. The system includes a complex set of rules governing a selection board that interviews candidates, certifies qualifications and makes recommendations guided by a set of independent merit-based principles. The Pentagon general counsel has the final call in making a selection. For example, even if the panel has ranked a first-choice candidate, the general counsel is empowered to choose one of the others.

Ryan Goodman has a similar article at Just Security.

On Blockchain Voting

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/on-blockchain-voting.html

Blockchain voting is a spectacularly dumb idea for a whole bunch of reasons. I have generally quoted Matt Blaze:

Why is blockchain voting a dumb idea? Glad you asked.

For starters:

  • It doesn’t solve any problems civil elections actually have.
  • It’s basically incompatible with “software independence”, considered an essential property.
  • It can make ballot secrecy difficult or impossible.

I’ve also quoted this XKCD cartoon.

But now I have this excellent paper from MIT researchers:

“Going from Bad to Worse: From Internet Voting to Blockchain Voting”
Sunoo Park, Michael Specter, Neha Narula, and Ronald L. Rivest

Abstract: Voters are understandably concerned about election security. News reports of possible election interference by foreign powers, of unauthorized voting, of voter disenfranchisement, and of technological failures call into question the integrity of elections worldwide. This article examines the suggestions that "voting over the Internet" or "voting on the blockchain" would increase election security, and finds such claims to be wanting and misleading. While current election systems are far from perfect, Internet- and blockchain-based voting would greatly increase the risk of undetectable, nation-scale election failures. Online voting may seem appealing: voting from a computer or smart phone may seem convenient and accessible. However, studies have been inconclusive, showing that online voting may have little to no effect on turnout in practice, and it may even increase disenfranchisement. More importantly: given the current state of computer security, any turnout increase derived from Internet- or blockchain-based voting would come at the cost of losing meaningful assurance that votes have been counted as they were cast, and not undetectably altered or discarded. This state of affairs will continue as long as standard tactics such as malware, zero days, and denial-of-service attacks continue to be effective. This article analyzes and systematizes prior research on the security risks of online and electronic voting, and shows that not only do these risks persist in blockchain-based voting systems, but blockchains may introduce additional problems for voting systems. Finally, we suggest questions for critically assessing security risks of new voting system proposals.

You may have heard of Voatz, which uses blockchain for voting. It’s an insecure mess. And this is my general essay on blockchain. Short summary: it’s completely useless.