Tag Archives: frida

Security updates for Friday

Post Syndicated from ris original https://lwn.net/Articles/744175/rss

Security updates have been issued by Arch Linux (intel-ucode), Debian (gifsicle), Fedora (awstats and kernel), Gentoo (icoutils, pysaml2, and tigervnc), Mageia (dokuwiki and poppler), Oracle (kernel), SUSE (glibc, kernel, microcode_ctl, tiff, and ucode-intel), and Ubuntu (intel-microcode).

Friday Squid Blogging: How the Optic Lobe Controls Squid Camouflage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/friday_squid_bl_608.html

Experiments on the oval squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/743242/rss

Security updates have been issued by Arch Linux (kernel), CentOS (kernel, libvirt, microcode_ctl, and qemu-kvm), Debian (kernel and xen), Fedora (kernel), Mageia (backintime, erlang, and wildmidi), openSUSE (kernel and ucode-intel), Oracle (kernel, libvirt, microcode_ctl, and qemu-kvm), Red Hat (kernel, kernel-rt, libvirt, microcode_ctl, qemu-kvm, and qemu-kvm-rhev), Scientific Linux (libvirt and qemu-kvm), SUSE (kvm and qemu), and Ubuntu (ruby1.9.1, ruby2.0, and ruby2.3).

Friday Squid Blogging: Squid Populations Are Exploding

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/friday_squid_bl_607.html

New research:

“Global proliferation of cephalopods”

Summary: Human activities have substantially changed the world’s oceans in recent decades, altering marine food webs, habitats and biogeochemical processes. Cephalopods (squid, cuttlefish and octopuses) have a unique set of biological traits, including rapid growth, short lifespans and strong life-history plasticity, allowing them to adapt quickly to changing environmental conditions. There has been growing speculation that cephalopod populations are proliferating in response to a changing environment, a perception fuelled by increasing trends in cephalopod fisheries catch. To investigate long-term trends in cephalopod abundance, we assembled global time-series of cephalopod catch rates (catch per unit of fishing or sampling effort). We show that cephalopod populations have increased over the last six decades, a result that was remarkably consistent across a highly diverse set of cephalopod taxa. Positive trends were also evident for both fisheries-dependent and fisheries-independent time-series, suggesting that trends are not solely due to factors associated with developing fisheries. Our results suggest that large-scale, directional processes, common to a range of coastal and oceanic environments, are responsible. This study presents the first evidence that cephalopod populations have increased globally, indicating that these ecologically and commercially important invertebrates may have benefited from a changing ocean environment.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Friday Squid Blogging: Gonatus Squid Eating a Dragonfish

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/friday_squid_bl_606.html

There’s a video:

Last July, Choy was on a ship off the shore of Monterey Bay, looking at the video footage transmitted by an ROV many feet below. A Gonatus squid was spotted sucking off the face of a “really huge dragonfish,” she says. “It took a little while to figure out what’s going on here, who’s eating whom, how is this going to end?” (The squid won.)

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/742134/rss

Security updates have been issued by Debian (bouncycastle, enigmail, and sensible-utils), Fedora (kernel), Mageia (dhcp, flash-player-plugin, glibc, graphicsmagick, java-1.8.0-openjdk, kernel, kernel-linus, kernel-tmb, mariadb, pcre, rootcerts, rsync, shadow-utils, and xrdp), and SUSE (java-1_8_0-ibm and kernel).

New Zealand Prepares Consultation to Modernize Copyright Laws

Post Syndicated from Andy original https://torrentfreak.com/new-zealand-prepares-consultation-to-modernize-copyright-laws-171218/

The Copyright Act 1994 is the key legislation governing New Zealand’s handling of intellectual property issues, covering protection, infringement, exceptions and enforcement. It last underwent a review more than a decade ago resulting in the Copyright (New Technologies) Amendment Act 2008.

Like much copyright law worldwide, New Zealand’s legislation has struggled to keep pace with technological change so, during the summer, the last government announced plans for a review with several key goals:

Assess the performance of the Copyright Act against the objectives of New Zealand’s copyright regime.

Identify barriers to achieving the objectives of New Zealand’s copyright regime, and the level of impact that these barriers have.

Formulate a preferred approach to addressing these issues – including amendments to the Copyright Act, and the commissioning of further work on any other regulatory or non-regulatory options that are identified.

The former government planned to initiate a public consultation in the second quarter of 2018, with a review being informed by the responses. According to an announcement Friday, the new government plans to go ahead with the overhaul, beginning in April as previously envisioned.

Many of the hot topics in the United States, Europe and closer to home in Australia are expected to come to the forefront, including site-blocking, service provider safe harbor provisions, and the thorny issue of fair use.

Speaking with RadioNZ, New Zealand Screen Association managing director Matthew Cheetham says that new legislation is required to keep pace with a rapidly moving landscape.

“In New Zealand, piracy is almost an accepted thing, because no one’s really doing anything about it, because no one actually can do anything about it,” Cheetham says.

“As new technologies have evolved, the law has struggled to keep pace with those new technologies and to make sure that the law is fit for purpose in the digital age.”

As the local representative for several Hollywood studios, it’s no surprise that NZSA will be seeking amendments that will force ISPs to block access to popular pirate sites, as they do already in the UK, Europe, and Australia.

“If the site is infringing [a court] can order internet service providers to block access to that site. Forty-two countries around the world have recognised that blocking access when it’s carefully defined is a perfectly legitimate avenue for rights holders to protect their rights,” Cheetham notes.

While there hasn’t been a major copyright overhaul in more than a decade, New Zealand is no stranger to prolonged exercises to try to stop piracy.

The country spent huge amounts of time and money late last decade coming up with the Copyright (Infringing File Sharing) Amendment Act 2011. It laid out a system under which pirates received escalating warnings, culminating in eventual disconnection from the Internet. But, with costs running between NZ$20 and NZ$25 per notice, the scheme was ultimately an expensive flop.

“We have an entire regime that allows copyright holders to seek and send notices to users that are committing piracy and actually have a process in a court-based system that allows remedies to be pursued,” Internet New Zealand deputy chief executive Andrew Cushen told RadioNZ.

“None of them are using it. Why would we now look at a wholly different solution that none of them are going to use as well?”

As someone who has been acutely affected by New Zealand’s approach to intellectual property rights enforcement, Kim Dotcom certainly has an interest in the development of local copyright law. The Megaupload founder was arrested in 2012 for alleged copyright offenses that he insists aren’t even a crime in New Zealand. So what advice does he have for the review?

According to the entrepreneur, the NZ Copyright Act is “mostly good”, noting that it protects both ISPs and consumers. Given the chance, however, he would remind judges about the purpose of the act.

“The NZ Copyright Act is a code. The Copyright Act creates a special property right. No other act applies to this special property right, including the crimes act,” Dotcom informs TF.

“This might be a helpful yardstick for Judges who don’t understand the Copyright Act and attempt to create new and unintended law from the bench. Just like in my case.”

Only time will tell how the public consultation will play out but it seems likely that tackling the “Value Gap” situation will be high up the agenda, especially if that can be achieved by eroding Internet companies’ safe harbors under copyright law. Expect that to receive significant push-back from the technology sector.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

US Government Teaches Anti-Piracy Skills Around The Globe

Post Syndicated from Ernesto original https://torrentfreak.com/us-government-teaches-anti-piracy-skills-around-the-globe-171217/

Online piracy is a global issue. Pirate sites and services tend to operate in multiple jurisdictions and are purposefully set up to evade law enforcement.

This makes it hard for police from one country to effectively crack down on a site in another. International cooperation is often required, and the US Government is one of the leaders on this front.

The US Department of Justice (DoJ) has quite a bit of experience in tracking down pirates and they are actively sharing this knowledge with countries that can use some help. This goes far beyond the occasional seminar.

A diplomatic cable obtained through a Freedom of Information request provides a relatively recent example of these efforts. The document gives an overview of anti-piracy training, provided and funded by the US Government, during the fall of 2015.

“On November 24 and 25, prosecutors and investigators from Romania, Moldova, Bulgaria, and Turkey participated in a two-day, U.S. Department of Justice (USDOJ)-sponsored training program on combatting online piracy.

“The program updated participants on legal issues, including data retention legislation, surrounding the investigation and prosecution of online piracy,” the cable adds.

According to the cable, piracy has become a very significant problem in Eastern Europe, costing rightsholders and governments millions of dollars in revenues. After the training, local law enforcement officers in these countries should be better equipped to deal with the problem.

Pirates Beware

The event was put together with help from various embassies and among the presenters were law enforcement professionals from around the world.

The Director of the DoJ’s CCIPS Cybercrime Laboratory was among the speakers. He gave training on computer forensics and participants were provided with various tools to put this to use.

“Participants were given copies of forensic tools at the conclusion of the program so that they could put to use some of what they saw demonstrated during the training,” the cable reads.

While catching pirates can be quite hard already, getting them convicted is a challenge as well. Increasingly we’ve seen criminal complaints using non-copyright claims to have site owners prosecuted.

By using money laundering and tax offenses, pirates can receive tougher penalties. This was one of the talking points during the training as well.

“Participants were encouraged to consider the use of statutes such as money laundering and tax evasion, in addition to those protecting copyrights and trademarks, since these offenses are often punished more severely than standalone intellectual property crimes.”

The cable, written by the US Embassy in Bucharest, provides a lot of detail about the two-day training session. It’s also clear on the overall objective. The US wants to increase the likelihood that pirate sites are brought to justice. Not only in the homeland, but around the globe.

“By focusing approximately forty investigators and prosecutors from four countries on how they can more effectively attack rogue sites, and by connecting rights holders and their investigators with law enforcement, the chances of pirates being caught and held accountable have increased.”

While it’s hard to link the training to any concrete successes, Romanian law enforcement did shut down the country’s leading pirate site a few months later. As with a previous case in Romania, which involved the FBI, money laundering and tax evasion allegations were expected.

While it’s not out of the ordinary for international law enforcers to work together, it’s notable how coordinated the US efforts are. Earlier this week we wrote about the US pressure on Sweden to raid The Pirate Bay. And these are not isolated incidents.

While the US Department of Justice doesn’t reveal all details of its operations, it is very open about its global efforts to protect Intellectual Property.

Around the world…

The DoJ’s Computer Crime and Intellectual Property Section (CCIPS) has relationships with law enforcement worldwide and regularly provides training to foreign officers.

A crucial part of the Department’s international enforcement activities is the Intellectual Property Law Enforcement Coordinator (IPLEC) program, which started in 2006.

Through IPLECs, the department now has Attorneys stationed in Thailand, Hong Kong, Romania, Brazil, and Nigeria. These Attorneys keep an eye on local law enforcement and provide assistance and training, to protect US copyright holders.

“Our strategically placed coordinators draw upon their subject matter expertise to help ensure that property holders’ rights are enforced across the globe, and that the American people are protected from harmful products entering the marketplace,” Attorney General John Cronan of the Criminal Division said just last Friday.

Or to end with the title of the Romanian cable: ‘Pirates beware!’

The cable cited here was made available in response to a Freedom of Information request, which was submitted by Rachael Tackett and shared with TorrentFreak. It starts at page 47 of document 2.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

UK Should Hold Google & Facebook “Liable for Illegal Content” After Brexit

Post Syndicated from Andy original https://torrentfreak.com/uk-should-hold-google-facebook-liable-for-illegal-content-after-brexit-171217/

In order to operate and innovate in the online space, Internet giants such as Google, YouTube, and Facebook can’t be held immediately liable for everything that appears on their platforms.

If Google indexes an objectionable website, if someone posts an infringing video to YouTube, or if abusive or violent messages appear on Facebook, that is currently and quite rightly the responsibility of the person who put the offending content there.

However, once the platforms in question are advised by an appropriate authority that content posted on their services breaks the law, they are required to take it down. If they do not, they can then be held liable under local and EU law.

While essential for tech companies, this so-called safe harbor is a thorn in the side of copyright holders. They contend that platforms like YouTube abuse their freedoms in order to monetize infringing content while gaining advantages in licensing negotiations.

The protection offered by the E-Commerce Directive is a hot topic right now, one which necessarily involves the UK. However, with the UK due to leave the EU at 11pm local time on Friday 29 March, 2019, it will then be free to make its own laws. It’s now being suggested that as soon as Brexit happens, the UK should introduce new laws that hold tech companies liable for “illegal content” that appears on their platforms.

The advice can be found in a new report published by the Committee on Standards in Public Life. Titled “Intimidation in Public Life”, the report focuses on the online threats and intimidation experienced by Parliamentary candidates and others.

However, the laws that currently protect information society service providers apply to a much broader range of content, including that alleged to be copyright-infringing.

“Currently, social media companies do not have liability for the content on their sites, even where that content is illegal. This is largely due to the EU E-Commerce Directive (2000), which treats the social media companies as ‘hosts’ of online content. It is clear, however, that this legislation is out of date,” the report reads.

“Facebook, Twitter and Google are not simply platforms for the content that others post; they play a role in shaping what users see. We understand that they do not consider themselves as publishers, responsible for reviewing and editing everything that others post on their sites. But with developments in technology, the time has come for the companies to take more responsibility for illegal material that appears on their platforms.”

That responsibility should be increased immediately upon Brexit, the Committee recommends, via new legislation that won’t be hindered by the safe harbors offered by the E-Commerce Directive. Doing so will force online platforms to take more direct action to combat the appearance of illegal content, the Committee argues.

“The government should seek to legislate to shift the balance of liability for illegal content to the social media companies away from them being passive ‘platforms’ for illegal content. Given the government’s stated intention to leave the EU Single Market, legislation can be introduced to this effect without being in breach of EU law,” the report notes.

“We believe government should legislate to rebalance this liability for illegal content, and thereby drive change in the way social media companies operate in combatting illegal behavior online in the UK.”

How the process will play out from here remains to be seen, but there is likely to be significant push-back from companies including the likes of Google, Facebook, and Twitter. Whether the “illegal content” they’re to be held liable for is deemed threatening, racist, or indeed copyright-infringing, matters are rarely clear-cut, and there could be significant fallout if conditions are set too tightly.

Expect plenty of stakeholders to get involved when it comes to diminishing the protections of the E-Commerce Directive. To be continued…

The full report can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/741580/rss

Security updates have been issued by Debian (erlang), Fedora (python-dulwich), Gentoo (curl, opencv, openssl, and webkit-gtk), openSUSE (libapr-util1 and php5), Red Hat (qemu-kvm-rhev), and Ubuntu (linux, linux-aws, linux-kvm, linux-raspi2 and linux-lts-xenial, linux-aws).

Managing AWS Lambda Function Concurrency

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/

One of the key benefits of serverless applications is the ease with which they can scale to meet traffic demands or requests, with little to no need for capacity planning. In AWS Lambda, which is the core of the serverless platform at AWS, the unit of scale is a concurrent execution. This refers to the number of executions of your function code that are happening at any given time.

Thinking about concurrent executions as a unit of scale is a fairly unique concept. In this post, I dive deeper into this and talk about how you can make use of per function concurrency limits in Lambda.

Understanding concurrency in Lambda

Instead of diving right into the guts of how Lambda works, here’s an appetizing analogy: a magical pizza. Yes, a magical pizza!

This magical pizza has some unique properties:

  • It has a fixed maximum number of slices, such as 8.
  • Slices automatically re-appear after they are consumed.
  • When you take a slice from the pizza, it does not re-appear until it has been completely consumed.
  • One person can take multiple slices at a time.
  • You can easily ask to have the number of slices increased, but they remain fixed at any point in time otherwise.

Now that the magical pizza’s properties are defined, here’s a hypothetical situation of some friends sharing this pizza.

Shawn, Kate, Daniela, Chuck, Ian and Avleen get together every Friday to share a pizza and catch up on their week. As there are just six of them, they can all easily enjoy a slice of pizza at a time. As they finish each slice, it re-appears in the pizza pan and they can take another. Given the magical properties of their pizza, they can continue to eat all they want, but with two very important constraints:

  • If any of them take too many slices at once, the others may not get as much as they want.
  • If they take too many slices, they might also eat too much and get sick.

One particular week, some of the friends are hungrier than the rest, taking two slices at a time instead of just one. If more than two of them try to take two pieces at a time, this can cause contention for pizza slices. Some of them would be left waiting, hungry, for slices to re-appear. They could ask for a pizza with more slices, but they would then run the same risk later if more hungry friends join than planned for.

What can they do?

If the friends agreed to accept a limit on the maximum number of slices they each eat concurrently, both of these issues are avoided. Some might take at most 2 of the 8 slices, others more or fewer, so long as the limits together stayed at or under eight slices in play at one time. This would keep anyone from going hungry or eating too much. The six friends can happily enjoy their magical pizza without worry!

Concurrency in Lambda

Concurrency in Lambda actually works similarly to the magical pizza model. Each AWS Account has an overall AccountLimit value that is fixed at any point in time, but can be easily increased as needed, just like the count of slices in the pizza. As of May 2017, the default limit is 1000 “slices” of concurrency per AWS Region.

Also like the magical pizza, each concurrency “slice” can only be consumed individually one at a time. After consumption, it becomes available to be consumed again. Services invoking Lambda functions can consume multiple slices of concurrency at the same time, just like the group of friends can take multiple slices of the pizza.

Let’s take our example of the six friends and bring it back to AWS services that commonly invoke Lambda:

  • Amazon S3
  • Amazon Kinesis
  • Amazon DynamoDB
  • Amazon Cognito

In a single account with the default concurrency limit of 1000 concurrent executions, any of these four services could invoke enough functions to consume the entire limit or some part of it. Just like with the pizza example, there is the possibility for two issues to pop up:

  • One or more of these services could invoke enough functions to consume a majority of the available concurrency capacity. This could cause others to be starved for it, causing failed invocations.
  • A service could consume too much concurrent capacity and cause a downstream service or database to be overwhelmed, which could cause failed executions.

For Lambda functions that are launched in a VPC, you also have the potential to exhaust the available IP addresses in a subnet or the maximum number of elastic network interfaces to which your account has access. For more information, see Configuring a Lambda Function to Access Resources in an Amazon VPC. For information about elastic network interface limits, see the Network Interfaces section in the Amazon VPC Limits topic.

One way to solve both of these problems is applying a concurrency limit to the Lambda functions in an account.

Configuring per function concurrency limits

You can now set a concurrency limit on individual Lambda functions in an account. The concurrency limit that you set reserves a portion of your account-level concurrency for a given function. All of your functions’ concurrent executions count against this account-level limit by default.

If you set a concurrency limit for a specific function, then that function’s concurrency allocation is deducted from the shared pool and assigned to that function. AWS also reserves 100 units of concurrency for all functions that don’t have a specified concurrency limit set. This helps ensure that future functions still have capacity available to consume.

Going back to the example of the consuming services, you could set throttles for the functions as follows:

Amazon S3 function = 350
Amazon Kinesis function = 200
Amazon DynamoDB function = 200
Amazon Cognito function = 150
Total = 900

With the 100 reserved for all non-concurrency reserved functions, this totals the account limit of 1000.
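For illustration, here’s a rough sketch of how these reservations might be applied with the AWS CLI’s put-function-concurrency command (covered in more detail later in this post). The function names here are hypothetical placeholders:

$ aws lambda put-function-concurrency --function-name s3Processor --reserved-concurrent-executions 350
$ aws lambda put-function-concurrency --function-name kinesisProcessor --reserved-concurrent-executions 200
$ aws lambda put-function-concurrency --function-name dynamodbProcessor --reserved-concurrent-executions 200
$ aws lambda put-function-concurrency --function-name cognitoProcessor --reserved-concurrent-executions 150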

Here’s how this works. To start, create a basic Lambda function that is invoked via Amazon API Gateway. This Lambda function returns a single “Hello World” statement with an added sleep time between 2 and 5 seconds. The sleep time simulates an API providing some sort of capability that can take a varied amount of time. The goal here is to show how an API that is under load can reach its concurrency limit, and what happens when it does.
To create the example function

  1. Open the Lambda console.
  2. Choose Create Function.
  3. For Author from scratch, enter the following values:
    1. For Name, enter a value (such as concurrencyBlog01).
    2. For Runtime, choose Python 3.6.
    3. For Role, choose Create new role from template and enter a name aligned with this function, such as concurrencyBlogRole.
  4. Choose Create function.
  5. The function is created with some basic example code. Replace that code with the following:

import time
from random import randint

# Note: chosen once when the module loads (at cold start), so each
# container sleeps the same amount for every invocation it serves.
seconds = randint(2, 5)

def lambda_handler(event, context):
    time.sleep(seconds)
    return {
        "statusCode": 200,
        "body": "Hello world, slept " + str(seconds) + " seconds",
        "headers": {
            "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
            "Access-Control-Allow-Methods": "GET,OPTIONS",
        },
    }

  6. Under Basic settings, set Timeout to 10 seconds. While this function should only ever take 5-6 seconds to run (with the 5-second max sleep), this gives you a little bit of room if it takes longer.

  7. Choose Save at the top right.

At this point, your function is configured for this example. Test it and confirm this in the console:

  1. Choose Test.
  2. Enter a name (it doesn’t matter for this example).
  3. Choose Create.
  4. In the console, choose Test again.
  5. You should see output similar to the following: the function’s “Hello world” response, reporting how many seconds it slept.

Now configure API Gateway so that you have an HTTPS endpoint to test against.

  1. In the Lambda console, choose Configuration.
  2. Under Triggers, choose API Gateway.
  3. Open the API Gateway icon now shown as attached to your Lambda function.

  4. Under Configure triggers, leave the default values for API Name and Deployment stage. For Security, choose Open.
  5. Choose Add, then Save.

API Gateway is now configured to invoke Lambda at the Invoke URL shown under its configuration. You can take this URL and test it in any browser or command line, using tools such as “curl”:


$ curl https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Hello world, slept 2 seconds

Throwing load at the function

Now start throwing some load against your API Gateway + Lambda function combo. Right now, your function is only limited by the total amount of concurrency available in the account. For this example account, you might have 850 units of unreserved concurrency out of a full account limit of 1000, due to having already configured a few concurrency limits (plus the 100 units reserved for all functions without configured limits). You can find all of this information on the main Dashboard page of the Lambda console.
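If you prefer the command line to the console, the get-account-settings CLI command reports the same numbers. Here’s a sketch, with the value illustrative of the example account above:

$ aws lambda get-account-settings --output json --query AccountLimit.UnreservedConcurrentExecutions
850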

For generating load in this example, use an open source tool called “hey” (https://github.com/rakyll/hey), which works similarly to ApacheBench (ab). The tests here run from an Amazon EC2 instance running the default Amazon Linux AMI, launched from the EC2 console. For more help with configuring an EC2 instance, follow the steps in the Launch Instance Wizard.

After the EC2 instance is running, SSH into the host and run the following:


sudo yum install go
go get -u github.com/rakyll/hey

“hey” is easy to use. For these tests, specify a total number of requests (5,000) and a concurrency of 50 against the API Gateway URL, as follows (replace the URL here with your own):


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

The output from “hey” tells you interesting bits of information:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

Summary:
Total: 381.9978 secs
Slowest: 9.4765 secs
Fastest: 0.0438 secs
Average: 3.2153 secs
Requests/sec: 13.0891
Total data: 140024 bytes
Size/request: 28 bytes

Response time histogram:
0.044 [1] |
0.987 [2] |
1.930 [0] |
2.874 [1803] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
3.817 [1518] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
4.760 [719] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
5.703 [917] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
6.647 [13] |
7.590 [14] |
8.533 [9] |
9.477 [4] |

Latency distribution:
10% in 2.0224 secs
25% in 2.0267 secs
50% in 3.0251 secs
75% in 4.0269 secs
90% in 5.0279 secs
95% in 5.0414 secs
99% in 5.1871 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0003 secs, 0.0000 secs, 0.0332 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0046 secs
req write: 0.0000 secs, 0.0000 secs, 0.0005 secs
resp wait: 3.2149 secs, 0.0438 secs, 9.4472 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0004 secs

Status code distribution:
[200] 4997 responses
[502] 3 responses

You can see a helpful histogram and latency distribution. Remember that this Lambda function has a random sleep period in it and so isn’t entirely representative of a real-life workload. Those three 502s warrant digging deeper, but could be due to Lambda cold-start timing combined with the “seconds” variable being at its maximum of 5, causing the Lambda function to time out. AWS X-Ray and the Amazon CloudWatch logs generated by both API Gateway and Lambda could help you troubleshoot this.
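As one way to start that digging, sketched here under the assumption that the standard /aws/lambda/&lt;function-name&gt; log group naming applies, you could search the function’s CloudWatch logs for Lambda’s usual “Task timed out” message with the filter-log-events CLI command:

$ aws logs filter-log-events \
--log-group-name /aws/lambda/concurrencyBlog01 \
--filter-pattern '"Task timed out"'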

Configuring a concurrency reservation

Now that you’ve established that you can generate this load against the function, I show you how to limit it and protect a backend resource from being overloaded by all of these requests.

  1. In the console, choose Configuration.
  2. Under Concurrency, for Reserve concurrency, enter 25.

  3. Choose Save in the top right corner.

You could also set this with the AWS CLI using the Lambda put-function-concurrency command or see your current concurrency configuration via Lambda get-function. Here’s an example command:


$ aws lambda get-function --function-name concurrencyBlog01 --output json --query Concurrency
{
"ReservedConcurrentExecutions": 25
}
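The write side is a single call as well; this is the CLI equivalent of the console steps above:

$ aws lambda put-function-concurrency --function-name concurrencyBlog01 --reserved-concurrent-executions 25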

Either way, you’ve set the concurrency reservation to 25 for this function. This acts as both a limit and a reservation: it ensures that you can run 25 concurrent executions of this function at all times, while going above that results in the function being throttled. Depending on the invoking service, throttling can result in a number of different outcomes, as shown in the documentation on Throttling Behavior. This change has also reduced your unreserved account concurrency for other functions by 25.

Rerun the same load generation as before and see what happens. Previously, you tested at 50 concurrency, which worked just fine. By limiting the function to a concurrency of 25, you should see rate limiting kick in. Run the same test again:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

While this test runs, refresh the Monitoring tab on your function detail page. You see a warning message indicating that invocations are being throttled.

This is great! It means that your throttle is working as configured and you are now protecting your downstream resources from too much load from your Lambda function.

Here is the output from a new “hey” command:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Summary:
Total: 379.9922 secs
Slowest: 7.1486 secs
Fastest: 0.0102 secs
Average: 1.1897 secs
Requests/sec: 13.1582
Total data: 164608 bytes
Size/request: 32 bytes

Response time histogram:
0.010 [1] |
0.724 [3075] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
1.438 [0] |
2.152 [811] |∎∎∎∎∎∎∎∎∎∎∎
2.866 [11] |
3.579 [566] |∎∎∎∎∎∎∎
4.293 [214] |∎∎∎
5.007 [1] |
5.721 [315] |∎∎∎∎
6.435 [4] |
7.149 [2] |

Latency distribution:
10% in 0.0130 secs
25% in 0.0147 secs
50% in 0.0205 secs
75% in 2.0344 secs
90% in 4.0229 secs
95% in 5.0248 secs
99% in 5.0629 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0004 secs, 0.0000 secs, 0.0537 secs
DNS-lookup: 0.0002 secs, 0.0000 secs, 0.0184 secs
req write: 0.0000 secs, 0.0000 secs, 0.0016 secs
resp wait: 1.1892 secs, 0.0101 secs, 7.1038 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0005 secs

Status code distribution:
[502] 3076 responses
[200] 1924 responses

This looks fairly different from the last load test run. A large percentage of these requests failed fast because the concurrency throttle rejected them (those in the 0.724-second bucket). The timing shown in the histogram represents the entire round trip: the EC2 instance calling API Gateway, API Gateway invoking Lambda, and the request being rejected. It’s also important to note that this example was configured with an edge-optimized endpoint in API Gateway. Under Status code distribution, you can see that 3076 of the 5000 requests failed with a 502, showing that the backend service behind API Gateway (the throttled Lambda function) failed the request.

Other uses

Managing function concurrency can be useful in a few other ways beyond just limiting the impact on downstream services and providing a reservation of concurrency capacity. Here are two other uses:

  • Emergency kill switch
  • Cost controls

Emergency kill switch

On occasion, due to issues with applications I’ve managed in the past, I’ve had a need to disable a certain function or capability of an application. By setting the concurrency reservation and limit of a Lambda function to zero, you can do just that.

With the reservation set to zero, every invocation of the function is throttled. You could then work on the related parts of the infrastructure or application that aren’t working, and then reconfigure the concurrency limit to allow invocations again.
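As a minimal sketch with the AWS CLI, reusing the example function from this post:

# Kill switch: throttle every invocation of the function
$ aws lambda put-function-concurrency --function-name concurrencyBlog01 --reserved-concurrent-executions 0

# Re-enable invocations by removing the per-function reservation entirely
$ aws lambda delete-function-concurrency --function-name concurrencyBlog01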

Cost controls

While I mentioned how you might want to use concurrency limits to control the downstream impact to services or databases that your Lambda function might call, another resource that you might be cautious about is money. Setting the concurrency throttle is another way to help control costs during development and testing of your application.

You might want to guard against a function performing a recursive action too quickly, or against a development workload generating too much concurrency. You might also want to protect development resources connected to this function from generating too much cost, such as APIs that your Lambda function calls.

Conclusion

Concurrent executions as a unit of scale are a fairly unique characteristic of Lambda functions. Placing limits on how many concurrency “slices” your function can consume can prevent a single function from consuming all of the available concurrency in an account. Limits can also prevent a function from overwhelming a backend resource that isn’t as scalable.

Unlike monolithic applications or even microservices where there are mixed capabilities in a single service, Lambda functions encourage a sort of “nano-service” of small business logic directly related to the integration model connected to the function. I hope you’ve enjoyed this post and configure your concurrency limits today!

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/740997/rss

Security updates have been issued by Arch Linux (chromium and vlc), Debian (erlang), Mageia (ffmpeg, tor, and wireshark), openSUSE (chromium, opensaml, openssh, openvswitch, and php7), Oracle (postgresql), Red Hat (chromium-browser, postgresql, rh-postgresql94-postgresql, rh-postgresql95-postgresql, and rh-postgresql96-postgresql), SUSE (firefox, java-1_6_0-ibm, opensaml, and xen), and Ubuntu (kernel, linux, linux-aws, linux-kvm, linux-raspi2, linux-snapdragon, linux, linux-raspi2, linux-azure, linux-gcp, linux-hwe, linux-lts-trusty, linux-lts-xenial, linux-aws, and rsync).

Friday Squid Blogging: Research into Squid-Eating Beaked Whales

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/friday_squid_bl_603.html

Beaked whales, living off the coasts of Ireland, feed on squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/740431/rss

Security updates have been issued by Debian (curl, libxml2, optipng, and sox), Fedora (kernel, mediawiki, moodle, nodejs-balanced-match, nodejs-brace-expansion, and python-werkzeug), openSUSE (optipng), Oracle (kernel and qemu-kvm), Red Hat (kernel, kernel-rt, qemu-kvm, and qemu-kvm-rhev), SUSE (kernel), and Ubuntu (thunderbird).

Friday Squid Blogging: Fake Squid Seized in Cambodia

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/friday_squid_bl_602.html

Falsely labeled squid snacks were seized in Cambodia. I don’t know what food product it really was.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.