[$] The (non-)return of the Python print statement

Post Syndicated from jake original https://lwn.net/Articles/823292/rss

In what may have seemed like an April Fool’s
Day
joke to some, Python creator Guido van Rossum recently floated
the idea of bringing back the print statement—several months after
Python 2, which had such a statement, reached its end of life. In fact, Van
Rossum acknowledged that readers of his message to the python-ideas mailing
list might be checking the date: “No, it’s not April 1st.” He
was serious about the idea—at least if others were interested in having the
feature—but he withdrew it fairly quickly when it became clear that there
were few takers. The main reason he brought it up is interesting, though:
the new parser for CPython makes it
easy to bring back print from Python 2 (and before).
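
For readers who never used Python 2, a minimal comparison in Python may help; the first statement below is valid only under a Python 2 interpreter and is a syntax error in Python 3:

# Python 2 (and earlier): print is a statement, so no parentheses are required,
# and special syntax such as a trailing comma controls its behavior.
print "hello", "world"          # SyntaxError under Python 3

# Python 3: print is an ordinary built-in function taking keyword arguments.
print("hello", "world")
print("no newline", end="")     # replaces the Python 2 trailing-comma form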

U.S. Eases Restrictions on Private Remote-Sensing Satellites

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/aerospace/satellites/eased-restrictions-on-commercial-remote-sensing-satellites

Later this month, satellite-based remote-sensing in the United States will be getting a big boost. Not from a better rocket, but from the U.S. Commerce Department, which will be relaxing the rules that govern how companies provide such services.

For many years, the Commerce Department has been tightly regulating those satellite-imaging companies, because of worries about geopolitical adversaries buying images for nefarious purposes and compromising U.S. national security. But the newly announced rules, set to go into effect on July 20, represent a significant easing of restrictions.

Previously, obtaining permission to operate a remote-sensing satellite had been a gamble—the criteria by which a company’s plans were judged were vague, as was the process, an inter-agency review requiring input from the U.S. Department of Defense as well as the State Department. But in May of 2018, the Trump administration’s Space Policy Directive-2 made it apparent that the regulatory winds were changing. In an effort to promote economic growth, the Commerce Department was commanded to rescind or revise regulations established in the Land Remote Sensing Policy Act of 1992, a piece of legislation that compelled remote-sensing satellite companies to obtain licenses and required that their operations not compromise national security.

Following that directive, in May of 2019 the Commerce Department issued a Notice of Proposed Rulemaking in an attempt to streamline what many in the satellite remote-sensing industry saw as a cumbersome and restrictive process.

But the proposed rules didn’t please industry players. To the surprise of many of them, though, the final rules announced last May were significantly less strict. For example, they allow satellite remote-sensing companies to sell images of a particular type and resolution if substantially similar images are already commercially available in other countries. The new rules also drop earlier restrictions on nighttime imaging, radar imaging, and short-wave infrared imaging.

On June 25th, Commerce Secretary Wilbur Ross explained at a virtual meeting of the National Oceanic and Atmospheric Administration’s Advisory Committee on Commercial Remote Sensing why the final rules differ so much from what was proposed in 2019:

Last year at this meeting, you told us that our first draft of the rule would be detrimental to the U.S. industry and that it could threaten a decade’s worth of progress. You provided us with assessments of technology, foreign competition, and the impact of new remote sensing applications. We listened. We made the case with our government colleagues that the U.S. industry must innovate and introduce new products as quickly as possible. We argued that it was no longer possible to control new applications in the intensifying global competition for dominance.

In other words, the cat was already out of the bag: there’s no sense prohibiting U.S. companies from offering satellite-imaging services already available from foreign companies.

One area where the new rules remain relatively strict, though, concerns the taking of pictures of other objects in orbit. Companies that want to offer satellite inspection or maintenance services would need rules that allow what regulators call “non-Earth imaging.” But there are national security implications here, because pictures obtained in this way could blow the cover of U.S. spy satellites masquerading as space debris.

While the extent to which spy satellites cloak themselves in the guise of space debris isn’t known, it seems clear that this would be an ideal tactic for avoiding detection. That strategy won’t work, though, if images taken by commercial satellites reveal a radar-reflecting object to be a cubesat instead of a mangled mass of metal.

Because of that concern, the current rules demand that companies limit the detailed imaging of other objects in space to ones for which they have obtained permission from the satellite owner and from the Secretary of Commerce at least 5 days in advance of obtaining images. But that stipulation raises a key question: Who should a satellite-imaging company contact if it wants to take pictures of a piece of space debris? Maybe imaging space debris would only require the permission of the Secretary of Commerce. But then, would the Secretary ever give such a request a green light? After all, if permission were typically granted, instances when it wasn’t would become suspicious.

More likely, imaging space debris—or spy satellites trying to pass as junk—is going to remain off the table for the time being. So even though the new rules are a welcome development to most commercial satellite companies, some will remain disappointed, including those companies that make up the Consortium for the Execution of Rendezvous and Servicing Operations (CONFERS), which had recommended that “the U.S. government should declare the space domain as a public space and the ability to conduct [non-Earth imaging] as the equivalent of taking photos of public activities on a public street.”

Why We Need Robot Sloths

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/why-we-need-robot-sloths

An inherent characteristic of a robot (I would argue) is embodied motion. We tend to focus on motion rather a lot with robots, and the most dynamic robots get the most attention. This isn’t to say that highly dynamic robots don’t deserve our attention, but there are other robotic philosophies that, while perhaps less visually exciting, are equally valuable under the right circumstances. Magnus Egerstedt, a robotics professor at Georgia Tech, was inspired by some sloths he met in Costa Rica to explore the idea of “slowness as a design paradigm” through an arboreal robot called SlothBot.

IBM’s New AI Tool Parses A Tidal Wave of Coronavirus Research

Post Syndicated from Lynne Peskoe-Yang original https://spectrum.ieee.org/news-from-around-ieee/the-institute/ieee-member-news/ibms-new-ai-tool-parses-a-tidal-wave-of-coronavirus-research

THE INSTITUTE In the race to develop a vaccine for the novel coronavirus, health care providers and scientists must sift through a growing mountain of research, both new and old. But they face several obstacles. The sheer volume of material makes using traditional search engines difficult because simple keyword searches aren’t sufficient to extract meaning from the published research. This is further complicated by the fact that most search engines present research results in visual file formats like PDFs and bitmaps, which are difficult for machines to read and parse.

IEEE Member Peter Staar, a researcher at IBM Research Europe, in Zurich, and manager of the Scalable Knowledge Ingestion group, has built a platform called Deep Search that could help speed along the process. The cloud-based platform combs through literature, reads and labels each data point, table, image, and paragraph, and translates scientific content into a uniform, searchable structure.

The reading function of the Deep Search platform consists of a natural language processing (NLP) tool called the corpus conversion service (CCS), developed by Staar for other information-dense domains. The CCS trains itself on already-annotated documents to create a ground truth, or knowledge base, of how papers in a given realm are typically arranged, Staar says. After the training phase, new papers uploaded to the service can be quickly compared to the ground truth for faster recognition of each element.

Once the CCS has a general understanding of how papers in a field are structured, Staar says, the Deep Search platform presents two options. It can either generate simple results in response to a traditional search query, essentially serving as an advanced pdf reader, or it can generate a report on a specific topic, such as the dosage of a particular drug, with deeper analysis that the group calls a knowledge graph.

“[The] knowledge graph allows us to answer these relatively complex questions that are not able to be answered with just a keyword lookup,” Staar explains.

To keep the data in the platform’s knowledge base up to the highest standards possible, Staar says the team bolsters their corpora with trusted, open-source databases such as DrugBank for chemical, pharmaceutical, and pharmacological drug data and GenBank for established and publicly available data sequences.

SEARCH HISTORY

Deep Search is based on a similar platform that Staar built in 2018 for material science and for oil and gas research, fields that both faced a deluge of data. Staar recognized that the same solution could be used to parse the tsunami of data about SARS-CoV-2. The platform was designed to be generic enough to be extended to other domains of research.

“Our goal was to help the medical community with a tool that we already had in our hands,” Staar says. Currently, the COVID-19 Deep Search service supports 460 active users and has ingested nearly 46,000 scientific articles.

The platform can even use search queries to divide results according to scientific camp.

“In the oil and gas business, when different philosophies [on environmental impact] collide, you can say, ‘Okay, if you follow a certain stream of thought, then you might be more interested in papers that are associated with this group of people, rather than with that group,’” Staar says.

If the scientific community is divided on a major attribute of SARS-CoV-2, for example, Deep Search might cluster search results around each camp. When a user searches for that attribute, the platform could analyze the wording of their search string and then guide the user to the cluster of results that most closely aligns with the user’s approach.

STREAMLINING RESEARCH

This isn’t the first time a pressing global health crisis has prompted scientists to try to streamline the publishing process. A 2010 analysis of literature from the 2003 SARS outbreak found that, despite efforts to shorten wait times for both acceptance and publishing, 93 percent of the papers on SARS didn’t come out until the epidemic had already ended and the bulk of deaths had already occurred.

Unlike their counterparts in 2003, however, present-day epidemic researchers have benefitted from the advent of preprint servers such as bioRxiv and medRxiv, which enable uncorrected articles to be shared digitally regardless of acceptance or submission status. Preprints have been around since the early 1990s, but the public health emergency of SARS-CoV-2 prompted a new surge in popularity for the alternative publishing practice, as well as a new round of concern over its impact.

Deep Search capitalizes on the preprint trend to further reduce obstacles to sharing the content of research papers. But it also aims to address one of the chief criticisms of preprints: that without peer review, the average reader may be unable to distinguish high-quality research from low-quality research. Though every new paper has equal weight in the Deep Search algorithms, the volume of data it ingests allows for statistical comparisons among conclusions. Users can easily see whether a result is consistent with previous findings or seems to be an outlier.

These relational functions, in which Deep Search sorts, links, and compares data as it returns results, constitute the platform’s signature advantage, Staar says. Developing a treatment molecule, for example, might start with a search to determine which gene to target within the viral RNA.

“If you understand which genes are important, then you can start understanding which proteins are important, which leads you to which kinds of molecules you can build for which kinds of targets,” he says. “That’s what our tool is really built for.”

Bio-Ink for 3-D Printing Inside the Body

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/devices/invivo-printing

Right now, almost 70,000 people in the United States alone are on active waiting lists for organ donations. The dream of bio-printing is that one day, instead of waiting for a donor, a patient could receive, say, a kidney  assembled on demand from living cells using 3-D printing techniques. But one problem with this dream is that bio-printing an organ outside the body necessarily requires surgery to implant it.  This may mean large incisions, which in turn adds the risk of infection and increased recovery time for patients. Doctors would also have to postpone surgery until the necessary implant was bio-printed, vital time patients might not have. 

A way around this problem could be provided by new bio-ink, composed of living cells suspended in a gel, that is safe for use inside people and could help enable 3-D printing in the body. Doctors could produce living parts inside patients through small incisions using minimally invasive surgical techniques. Such an option might prove safer and faster than major surgery.

Announcing the Porting Assistant for .NET

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-the-porting-assistant-for-net/

.NET Core is the future of .NET! Version 4.8 of the .NET Framework is the last major version to be released, and Microsoft has stated it will receive only bug-, reliability-, and security-related fixes going forward. For applications where you want to continue to take advantage of future investments and innovations in the .NET platform, you need to consider porting your applications to .NET Core. Also, there are additional reasons to consider porting applications to .NET Core such as benefiting from innovation in Linux and open source, improved application scaling and performance, and reducing licensing spend. Porting can, however, entail significant manual effort, some of which is undifferentiated such as updating references to project dependencies.

When porting .NET Framework applications, developers need to search for compatible NuGet packages and update those package references in the application’s project files, which also need to be updated to the .NET Core project file format. Additionally, they need to discover replacement APIs since .NET Core contains a subset of the APIs available in the .NET Framework. As porting progresses, developers have to wade through long lists of compile errors and warnings to determine the best or highest priority places to continue chipping away at the task. Needless to say, this is challenging, and the added friction could be a deterrent for customers with large portfolios of applications.

Today we announced the Porting Assistant for .NET, a new tool that helps customers analyze and port their .NET Framework applications to .NET Core running on Linux. The Porting Assistant for .NET assesses both the application source code and the full tree of public API and NuGet package dependencies to identify those incompatible with .NET Core and guides developers to compatible replacements when available. The suggestion engine for API and package replacements is designed to improve over time as the assistant learns more about the usage patterns and frequency of missing packages and APIs.

The Porting Assistant for .NET differs from other tools in that it is able to assess the full tree of package dependencies, not just incompatible APIs. It also uses solution files as the starting point, which makes it easier to assess monolithic solutions containing large numbers of projects, instead of having to analyze and aggregate information on individual binaries. These and other abilities give developers a jump start in the porting process.

Analyzing and porting an application
Getting started with porting applications using the Porting Assistant for .NET is easy, with just a couple of prerequisites. First, I need to install the .NET Core 3.1 SDK. Second, I need a credential profile (compatible with the AWS Command Line Interface (CLI), although the CLI is not used or required). The credential profile is used to collect compatibility information on the public APIs and packages (from NuGet and core Microsoft packages) used in your application and the public NuGet packages it references. With those prerequisites taken care of, I download and run the installer for the assistant.

With the assistant installed, I check out my application source code and launch the Porting Assistant for .NET from the Start menu. If I’ve previously assessed some solutions, I can view and open those from the Assessed solutions screen, enabling me to pick up where I left off. Or I can select Get started, as I’ll do here, from the home page to begin assessing my application’s solution file.

I’m asked to select the credential profile I want to use, and here I can also elect to opt in to sharing my telemetry data. Sharing this data helps to further improve suggestion accuracy for all users as time goes on, and is useful in identifying issues, so we hope you consider opting in.

I click Next, browse to select the solution file that I want, and then click Assess to begin the analysis. For this post I’m going to use the open source NopCommerce project.

When analysis is complete, I am shown the overall results – the number of incompatible packages the application depends on, APIs it uses that are incompatible, and an overall Portability score. This score is an estimation of the effort required to port the application to .NET Core, based on the number of incompatible APIs it uses. If I’m working on porting multiple applications, I can use this to identify and prioritize the applications I want to start on first.

Let’s dig into the assessment overview to see what was discovered. Clicking on the solution name takes me to a more detailed dashboard, where I can see the projects that make up the application in the solution file and, for each, the number of incompatible package and API dependencies, along with the portability score for each particular project. The current port status of each project is also listed, if I’ve already begun porting the application and have reopened the assessment.

Note that with no project selected in the Projects tab the data shown in the Project references, NuGet packages, APIs, and Source files tabs is solution-wide, but I can scope the data if I wish by first selecting a project.

The Project references tab shows me a graphical view of the package dependencies, and I can see where the majority of the dependencies are consumed, in this case the Nop.Core, Nop.Services, and Nop.Web.Framework projects. This view can help me decide where I might want to start first, so as to get the most ‘bang for my buck’ when I begin. I can also select projects to see the specific dependencies more clearly.

The NuGet packages tab gives me a look at the compatible and incompatible dependencies, and suggested replacements if available. The APIs tab lists the incompatible APIs, what package they are in, and how many times they are referenced. Source files lists all of the source files making up the projects in the application, with an indication of how many incompatible API calls can be found in each file. Selecting a source file opens a view showing me where the incompatible APIs are being used, along with suggested package versions to upgrade to, if any exist, that resolve the issue. If the issue cannot be resolved simply by updating to a different package version, then I need to crack open a source editor and update the code to use a different API or approach. Here I’m looking at the report for DependencyRegistrar.cs, which exists in the Nop.Web project and uses the Autofac NuGet package.

Let’s start porting the application, starting with the Nop.Core project. First, I navigate back to the Projects tab, select the project, and then click Port project. During porting the tool will help me update project references to NuGet packages, and also updates the project files themselves to the newer .NET Core formats. I have the option of either making a copy of the application’s solution file, project files, and source files, or I can have the changes made in-place. Here I’ve elected to make a copy.

Clicking Save copies the application source code to the selected location, and opens the Port projects view where I can set the new target framework version (in this case netcoreapp3.1), and the list of NuGet dependencies for the project that I need to upgrade. For each incompatible package, the Porting Assistant for .NET gives me a list of possible version upgrades and, for each version, shows the number of incompatible APIs that will either remain or will additionally become incompatible. For the package I selected here there’s no difference, but in cases where later versions would increase the number of incompatible APIs that I would then need to fix up manually in the source code, this indication helps me make a trade-off decision about whether to upgrade to the latest version of a package or stay with an older one.

Once I select a version, the Deprecated API calls field alongside the package will give me a reminder of what I need to fix up in a code editor. Clicking on the value summarizes the deprecated calls.

I continue with this process for each package dependency and when I’m ready, click Port to have the references updated. Using my IDE, I can then go into the source files and work on replacing the incompatible API calls using the Porting Assistant for .NET‘s source file and deprecated API list views as a reference, and follow a similar process for the other projects in my application.

Improving the suggestion engine
The suggestion engine behind the Porting Assistant for .NET is designed to learn and give improved results over time, as customers opt-in to sharing their telemetry. The data models behind the engine, which are the result of analyzing hundreds of thousands of unique packages with millions of package versions, are available on GitHub. We hope that you’ll consider helping improve accuracy and completeness of the results by contributing your data. The user guide gives more details on how the data is used.

The Porting Assistant for .NET is free to use and is available now.

— Steve

[$] Generics for Go

Post Syndicated from jake original https://lwn.net/Articles/824716/rss

The Go programming language was first released
in 2009, with its 1.0 release made in March 2012. Even before the 1.0 release,
some developers criticized the language as being too simplistic, partly due
to its lack of user-defined generic
types
and functions parameterized by type. Despite this omission, Go is
widely used, with an estimated 1-2 million
developers worldwide. Over the years there have been several proposals to
add some form of generics to the language, but the recent
proposal
written by core developers Ian Lance Taylor and Robert
Griesemer looks likely to be included in a future version of Go.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/824955/rss

Security updates have been issued by Arch Linux (bind, chromium, freerdp, imagemagick, sqlite, and tomcat8), Debian (coturn, imagemagick, jackson-databind, libmatio, mutt, nss, and wordpress), Fedora (libEMF, lynis, and php-PHPMailer), Red Hat (httpd24-nghttp2), and SUSE (ntp, openconnect, squid, and transfig).

Securing the International IoT Supply Chain

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/securing_the_in_1.html

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that’s selling the stuff to consumers. There’s a lot of detail between here and there, though, and it’s all in the paper.

We also wrote a Lawfare post:

…we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

Making the WAF 40% faster

Post Syndicated from Miguel de Moura original https://blog.cloudflare.com/making-the-waf-40-faster/

Cloudflare’s Web Application Firewall (WAF) protects against malicious attacks aiming to exploit vulnerabilities in web applications. It is continuously updated to provide comprehensive coverage against the most recent threats while ensuring a low false positive rate.

As with all Cloudflare security products, the WAF is designed to not sacrifice performance for security, but there is always room for improvement.

This blog post provides a brief overview of the latest performance improvements that were rolled out to our customers.

Transitioning from PCRE to RE2

Back in July of 2019, the WAF transitioned from using a regular expression engine based on PCRE to one inspired by RE2, which is based around using a deterministic finite automaton (DFA) instead of backtracking algorithms. This change came as a result of an outage where an update added a regular expression which backtracked enormously on certain HTTP requests, resulting in exponential execution time.

After the migration was finished, we saw no measurable difference in CPU consumption at the edge, but noticed that execution-time outliers in the 95th and 99th percentiles decreased, something we expected given RE2’s guarantee of execution time that is linear in the size of the input.

As the WAF engine uses a thread pool, we also had to implement and tune a regex cache shared between the threads to avoid excessive memory consumption (the first implementation turned out to use a staggering amount of memory).

These changes, along with others outlined in the post-mortem blog post, helped us improve reliability and safety at the edge and have the confidence to explore further performance improvements.

But while we’ve highlighted regular expressions, they are only one of the many capabilities of the WAF engine.

Matching Stages

When an HTTP request reaches the WAF, it gets organized into several logical sections to be analyzed: method, path, headers, and body. These sections are all stored in Lua variables. If you are interested in more detail on the implementation of the WAF itself you can watch this old presentation.

Before matching these variables against specific malicious request signatures, some transformations are applied. These transformations are functions that range from simple modifications like lowercasing strings to complex tokenizers and parsers looking to fingerprint certain malicious payloads.

As the WAF currently uses a variant of the ModSecurity syntax, this is what a rule might look like:

SecRule REQUEST_BODY "@rx /\x00+evil" "drop, t:urlDecode, t:lowercase"

It takes the request body stored in the REQUEST_BODY variable, applies the urlDecode() and lowercase() functions to it, and then compares the result with the regular expression signature /\x00+evil.

In pseudo-code, we can represent it as:

rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )

Which in turn would match a request whose body contained percent encoded NULL bytes followed by the word “evil”, e.g.:

GET /cms/admin?action=post HTTP/1.1
Host: example.com
Content-Type: text/plain; charset=utf-8
Content-Length: 16

thiSis%2F%00eVil
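
As a rough illustration (in Python rather than the WAF’s Lua engine), the transformation chain from the pseudo-code can be reproduced against the sample body shown above:

import re
from urllib.parse import unquote

body = "thiSis%2F%00eVil"   # the sample request body from above

# Mirrors rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )
transformed = unquote(body).lower()                  # -> "thisis/\x00evil"
print(bool(re.search(r"/\x00+evil", transformed)))   # True: the rule matches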

The WAF contains thousands of these rules and its objective is to execute them as quickly as possible to minimize any added latency to a request. And to make things harder, it needs to run most of the rules on nearly every request. That’s because almost all HTTP requests are non-malicious and no rules are going to match. So we have to optimize for the worst case: execute everything!

To help mitigate this problem, one of the first matching steps executed for many rules is pre-filtering. By checking if a request contains certain bytes or sets of strings we are able to potentially skip a considerable number of expressions.

In the previous example, doing a quick check for the NULL byte (represented by \x00 in the regular expression) allows us to completely skip the rule if it isn’t found:

contains( "\x00", REQUEST_BODY )
and
rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )

Since most requests don’t match any rule and these checks are quick to execute, overall we aren’t doing more operations by adding them.

Other steps can also be used to scan through and combine several regular expressions and avoid execution of rule expressions. As usual, doing less work is often the simplest way to make a system faster.

Memoization

Which brings us to memoization – caching the output of a function call to reuse it in future calls.

Let’s say we have the following expressions:

1. rx( "\x00+evil", lowercase( url_decode( body ) ) )
2. rx( "\x00+EVIL", remove_spaces( url_decode( body ) ) )
3. rx( "\x00+evil", lowercase( url_decode( headers ) ) )
4. streq( "\x00evil", lowercase( url_decode( body ) ) )

In this case, we can reuse the result of the nested function calls (1) as they’re the same in (4). By saving intermediate results we are also able to take advantage of the result of url_decode( body ) from (1) and use it in (2) and (4). Sometimes it is also possible to swap the order functions are applied to improve caching, though in this case we would get different results.

A naive implementation of this system can simply be a hash table with each entry having the function(s) name(s) and arguments as the key and its output as the value.
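
As a minimal sketch of that naive approach (written in Python rather than the WAF’s Lua engine; url_decode and lowercase here are stand-ins for the real transformation functions):

from urllib.parse import unquote

cache = {}

def memoized(fn, *args):
    # Key the cache on the function name plus its arguments,
    # computing the result only on a miss.
    key = (fn.__name__,) + args
    if key not in cache:
        cache[key] = fn(*args)
    return cache[key]

def url_decode(s):
    return unquote(s)

def lowercase(s):
    return s.lower()

body = "thiSis%2F%00eVil"
decoded = memoized(url_decode, body)     # computed once
first   = memoized(lowercase, decoded)   # computed once
second  = memoized(lowercase, decoded)   # cache hit, as shared by expressions (1) and (4)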

Some of these functions are expensive and caching the result does lead to significant savings. To give a sense of magnitude, one of the rules we modified to ensure memoization took place saw its execution time reduced by about 95%:

[Chart: Execution time per rule]

The WAF engine implements memoization and the rules take advantage of it, but there’s always room to increase cache hits.

Rewriting Rules and Results

Cloudflare has a very regular cadence of releasing updates and new rules to the Managed Rulesets. However, as more rules are added and new functions implemented, the memoization cache hit rate tends to decrease.

To improve this, we first looked into the rules taking the most wall-clock time to execute using some of our performance metrics:

[Chart: Execution time per rule]

Having these, we cross-referenced them with the ones having cache misses (output is truncated with […]):

$ ./parse.py --profile
Hit Ratio:
-------------
0.5608

Hot entries:
-------------
[urlDecode, replaceComments, REQUEST_URI, REQUEST_HEADERS, ARGS_POST]
[urlDecode, REQUEST_URI]
[urlDecode, htmlEntityDecode, jsDecode, replaceNulls, removeWhitespace, REQUEST_URI, REQUEST_HEADERS]
[urlDecode, lowercase, REQUEST_FILENAME]
[urlDecode, REQUEST_FILENAME]
[urlDecode, lowercase, replaceComments, compressWhitespace, ARGS, REQUEST_FILENAME]
[urlDecode, replaceNulls, removeWhitespace, REQUEST_URI, REQUEST_HEADERS, ARGS_POST]
[...]

Candidates:
-------------
100152A - replace t:removeWhitespace with t:compressWhitespace,t:removeWhitespace
100214 - replace t:lowercase with (?i)
100215 - replace t:lowercase with (?i)
100300 - consider REQUEST_URI over REQUEST_FILENAME
100137D - invert order of t:replaceNulls,t:lowercase
[...]

After identifying more than 40 rules, we rewrote them to take full advantage of memoization and added pre-filter checks where possible. Many of these changes were not immediately obvious, which is why we’re also creating tools to aid analysts in creating even more efficient rules. This also helps ensure they run in accordance with the latency budgets the team has set.

This change resulted in an increase of the cache hit percentage from 56% to 74%, which crucially included the most expensive transformations.

Most importantly, we also observed a sharp decrease of 40% in the average time the WAF takes to process and analyze an HTTP request at the Cloudflare edge.

[Chart: WAF Request Processing – Time Average]

A comparable decrease was also observed for the 95th and 99th percentiles.

Finally, we saw a drop of CPU consumption at the edge of around 4.3%.

Next Steps

While the Lua WAF has served us well throughout all these years, we are currently porting it to use the same engine powering Firewall Rules. It is based on our open-sourced wirefilter execution engine, which uses a filter syntax inspired by Wireshark®. In addition to allowing more flexible filter expressions, it provides better performance and safety.

The rule optimizations we’ve described in this blog post are not lost when moving to the new engine, however, as the changes were deliberately not specific to the current Lua engine’s implementation. And while we’re routinely profiling, benchmarking and making complex optimizations to the Firewall stack, sometimes just relatively simple changes can have a surprisingly huge effect.

OpenVX API for Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/openvx-api-for-raspberry-pi/

Raspberry Pi is excited to bring the Khronos OpenVX 1.3 API to our line of single-board computers. Here’s Kiriti Nagesh Gowda, AMD‘s MTS Software Development Engineer, to tell you more.

OpenVX for computer vision

OpenVX™ is an open, royalty-free API standard for cross-platform acceleration of computer vision applications developed by The Khronos Group. The Khronos Group is an open industry consortium of more than 150 leading hardware and software companies creating advanced, royalty-free acceleration standards for 3D graphics, augmented and virtual reality, vision, and machine learning. Khronos standards include Vulkan®, OpenCL™, SYCL™, OpenVX™, NNEF™, and many others.

Now with added Raspberry Pi

The Khronos Group and Raspberry Pi have come together to work on an open-source implementation of OpenVX™ 1.3 that passes conformance on Raspberry Pi; specifically, it passes the Vision, Enhanced Vision, and Neural Net conformance profiles specified in OpenVX 1.3.

Application developers may always freely use Khronos standards when they are available on the target system. To enable companies to test their products for conformance, Khronos has established an Adopters Program for each standard. This helps to ensure that Khronos standards are consistently implemented by multiple vendors to create a reliable platform for developers. Conformant products also enjoy protection from the Khronos IP Framework, ensuring that Khronos members will not assert their IP essential to the specification against the implementation.

OpenVX enables performance- and power-optimized computer vision processing, which is especially important in embedded and real-time use cases such as face, body, and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics, and more. Developers can take advantage of this robust API in their applications and know that those applications are portable across all conformant hardware.

Below, we will go over how to build and install the open-source OpenVX 1.3 library on Raspberry Pi 4 Model B. We will run the conformance for the Vision, Enhanced Vision, & Neural Net conformance profiles and create a simple computer vision application to get started with OpenVX on Raspberry Pi.

OpenVX 1.3 implementation for Raspberry Pi

The OpenVX 1.3 implementation is available on GitHub. To build and install the library, follow the instructions below.

Build OpenVX 1.3 on Raspberry Pi

Git clone the project with the recursive flag to get submodules:

git clone --recursive https://github.com/KhronosGroup/OpenVX-sample-impl.git

Note: The API Documents and Conformance Test Suite are set as submodules in the sample implementation project.

Use the Build.py script to build and install OpenVX 1.3:

cd OpenVX-sample-impl/
python Build.py --os=Linux --venum --conf=Debug --conf_vision --enh_vision --conf_nn

Build and run the conformance:

export OPENVX_DIR=$(pwd)/install/Linux/x32/Debug
export VX_TEST_DATA_PATH=$(pwd)/cts/test_data/
mkdir build-cts
cd build-cts
cmake -DOPENVX_INCLUDES=$OPENVX_DIR/include -DOPENVX_LIBRARIES=$OPENVX_DIR/bin/libopenvx.so\;$OPENVX_DIR/bin/libvxu.so\;pthread\;dl\;m\;rt -DOPENVX_CONFORMANCE_VISION=ON -DOPENVX_USE_ENHANCED_VISION=ON -DOPENVX_CONFORMANCE_NEURAL_NETWORKS=ON ../cts/
cmake --build .
LD_LIBRARY_PATH=./lib ./bin/vx_test_conformance

Sample application

Use the open-source samples on GitHub to test the installation.

The post OpenVX API for Raspberry Pi appeared first on Raspberry Pi.

AWS App2Container – A New Containerizing Tool for Java and ASP.NET Applications

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-app2container-a-new-containerizing-tool-for-java-and-asp-net-applications/

Our customers are increasingly developing their new applications with containers and serverless technologies, and are using modern continuous integration and delivery (CI/CD) tools to automate the software delivery life cycle. They also maintain a large number of existing applications that are built and managed manually or using legacy systems. Maintaining these two sets of applications with disparate tooling adds to operational overhead and slows down the pace of delivering new business capabilities. As much as possible, they want to be able to standardize their management tooling and CI/CD processes across both their existing and new applications, and see the option of packaging their existing applications into containers as the first step towards accomplishing that goal.

However, containerizing existing applications requires a long list of manual tasks such as identifying application dependencies, writing dockerfiles, and setting up build and deployment processes for each application. These manual tasks are time consuming, error prone, and can slow down the modernization efforts.

Today, we are launching AWS App2Container, a new command-line tool that helps containerize existing applications that are running on-premises, in Amazon Elastic Compute Cloud (EC2), or in other clouds, without needing any code changes. App2Container discovers applications running on a server, identifies their dependencies, and generates relevant artifacts for seamless deployment to Amazon ECS and Amazon EKS. It also provides integration with AWS CodeBuild and AWS CodeDeploy to enable a repeatable way to build and deploy containerized applications.

AWS App2Container generates the following artifacts for each application component: application artifacts such as application files/folders, Dockerfiles, container images in Amazon Elastic Container Registry (ECR), ECS task definitions, Kubernetes deployment YAML, CloudFormation templates to deploy the application to Amazon ECS or EKS, and templates to set up a build/release pipeline in AWS CodePipeline, which also leverages AWS CodeBuild and CodeDeploy.

Starting today, you can use App2Container to containerize ASP.NET (.NET 3.5+) web applications running in IIS 7.5+ on Windows, and Java applications running on Linux—standalone JBoss, Apache Tomcat, and generic Java applications such as Spring Boot, IBM WebSphere, Oracle WebLogic, etc.

By modernizing existing applications using containers, you can make them portable, increase development agility, standardize your CI/CD processes, and reduce operational costs. Now let’s see how it works!

AWS App2Container – Getting Started
AWS App2Container requires that the following prerequisites be installed on the server(s) hosting your application: AWS Command Line Interface (CLI) version 1.14 or later, Docker tools, and (in the case of ASP.NET) Powershell 5.0+ for applications running on Windows. Additionally, you need to provide appropriate IAM permissions to App2Container to interact with AWS services.

For example, let’s look at how you containerize your existing Java applications. The App2Container CLI for Linux is packaged as a tar.gz archive. The archive provides an interactive shell script, install.sh, that installs the App2Container CLI. Running the script guides users through the install steps and also updates the user’s path to include the App2Container CLI commands.

First, you can begin by running a one-time initialization on the installed server for the App2Container CLI with the init command.

$ sudo app2container init
Workspace directory path for artifacts[default:  /home/ubuntu/app2container/ws]:
AWS Profile (configured using 'aws configure --profile')[default: default]:  
Optional S3 bucket for application artifacts (Optional)[default: none]: 
Report usage metrics to AWS? (Y/N)[default: y]:
Require images to be signed using Docker Content Trust (DCT)? (Y/N)[default: n]:
Configuration saved

This sets up a workspace to store application containerization artifacts (a minimum of 20 GB of free disk space is required). You can optionally upload those artifacts to an Amazon Simple Storage Service (S3) bucket using the AWS profile you configured.

Next, you can view Java processes that are running on the application server by using the inventory command. Each Java application process has a unique identifier (for example, java-tomcat-9e8e4799) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

$ sudo app2container inventory
{
    "java-jboss-5bbe0bec": {
        "processId": 27366,
        "cmdline": "java ... /home/ubuntu/wildfly-10.1.0.Final/modules org.jboss.as.standalone -Djboss.home.dir=/home/ubuntu/wildfly-10.1.0.Final -Djboss.server.base.dir=/home/ubuntu/wildfly-10.1.0.Final/standalone ",
        "applicationType": "java-jboss"
    },
    "java-tomcat-9e8e4799": {
        "processId": 2537,
        "cmdline": "/usr/bin/java ... -Dcatalina.home=/home/ubuntu/tomee/apache-tomee-plume-7.1.1 -Djava.io.tmpdir=/home/ubuntu/tomee/apache-tomee-plume-7.1.1/temp org.apache.catalina.startup.Bootstrap start ",
        "applicationType": "java-tomcat"
    }
}

You can also inventory ASP.NET applications in an administrator-run PowerShell session on Windows Servers with IIS version 7.0 or later. Note that Docker tools and container support are available on Windows Server 2016 and later versions. You can choose to run all app2container operations on the application server with Docker tools installed, or use a worker machine with Docker tools, such as the Amazon ECS-optimized Windows Server AMIs.

PS> app2container inventory
{
    "iis-smarts-51d2dbf8": {
        "siteName": "nopCommerce39",
        "bindings": "http/*:90:",
        "applicationType": "iis"
    }
}

The inventory command displays all IIS websites on the application server that can be containerized. Each IIS website process has a unique identifier (for example, iis-smarts-51d2dbf8) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

You can choose a specific application by referring to its application ID and generate an analysis report for the application by using the analyze command.

$ sudo app2container analyze --application-id java-tomcat-9e8e4799
Created artifacts folder /home/ubuntu/app2container/ws/java-tomcat-9e8e4799
Generated analysis data in /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/analysis.json
Analysis successful for application java-tomcat-9e8e4799
Please examine the same, make appropriate edits and initiate containerization using "app2container containerize --application-id java-tomcat-9e8e4799"

The analysis.json template generated by the analyze command gathers information on the analyzed application: the analysisInfo section helps identify all system dependencies, and the containerParameters section lets you update containerization parameters to customize the container images generated for the application.

$ cat java-tomcat-9e8e4799/analysis.json
{
    "a2CTemplateVersion": "1.0",
	"createdTime": "2020-06-24 07:40:5424",
    "containerParameters": {
        "_comment1": "*** EDITABLE: The below section can be edited according to the application requirements. Please see the analyisInfo section below for deetails discoverd regarding the application. ***",
        "imageRepository": "java-tomcat-9e8e4799",
        "imageTag": "latest",
        "containerBaseImage": "ubuntu:18.04",
        "coopProcesses": [ 6446, 6549, 6646]
    },
    "analysisInfo": {
        "_comment2": "*** NON-EDITABLE: Analysis Results ***",
        "processId": 2537
        "appId": "java-tomcat-9e8e4799",
		"userId": "1000",
        "groupId": "1000",
        "cmdline": [...],
        "os": {...},
        "ports": [...]
    }
}

Also, you can run the $ app2container extract --application-id java-tomcat-9e8e4799 command to generate an application archive for the analyzed application. This depends on the analysis.json file generated earlier in the workspace folder for the application, and adheres to any containerization parameter updates specified there. By using the extract command, you can continue the workflow on a worker machine after running the first set of commands on the application server.

Now you can use the containerize command to generate Docker images for the selected application.

$ sudo app2container containerize --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Extracted container artifacts for application
Entry file generated
Dockerfile generated under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated dockerfile.update under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated deployment file at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json
Containerization successful. Generated docker image java-tomcat-9e8e4799
You're all set to test and deploy your container image.

Next Steps:
1. View the container image with \"docker images\" and test the application.
2. When you're ready to deploy to AWS, please edit the deployment file as needed at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json.
3. Generate deployment artifacts using app2container generate app-deployment --application-id java-tomcat-9e8e4799

After running this command, you can view the generated container images by running docker images on the machine where the containerize command was run. You can use the docker run command to launch the container and test application functionality.

Note that in addition to generating container images, the containerize command also generates a deployment.json template file that you can use with the next command, generate app-deployment. You can edit the parameters in the deployment.json template file to change the image repository name to be registered in Amazon ECR, the ECS task definition parameters, or the Kubernetes app name.

$ cat java-tomcat-9e8e4799/deployment.json
{
       "a2CTemplateVersion": "1.0",
       "applicationId": "java-tomcat-9e8e4799",
       "imageName": "java-tomcat-9e8e4799",
       "exposedPorts": [
              {
                     "localPort": 8090,
                     "protocol": "tcp6"
              }
       ],
       "environment": [],
       "ecrParameters": {
              "ecrRepoTag": "latest"
       },
       "ecsParameters": {
              "createEcsArtifacts": true,
              "ecsFamily": "java-tomcat-9e8e4799",
              "cpu": 2,
              "memory": 4096,
              "dockerSecurityOption": "",
              "enableCloudwatchLogging": false,
              "publicApp": true,
              "stackName": "a2c-java-tomcat-9e8e4799-ECS",
              "reuseResources": {
                     "vpcId": "",
                     "cfnStackName": "",
                     "sshKeyPairName": ""
              },
              "gMSAParameters": {
                     "domainSecretsArn": "",
                     "domainDNSName": "",
                     "domainNetBIOSName": "",
                     "createGMSA": false,
                     "gMSAName": ""
              }
       },
       "eksParameters": {
              "createEksArtifacts": false,
              "applicationName": "",
              "stackName": "a2c-java-tomcat-9e8e4799-EKS",
              "reuseResources": {
                     "vpcId": "",
                     "cfnStackName": "",
                     "sshKeyPairName": ""
              }
       }
 }

At this point, the application workspace where the artifacts are generated serves as an iteration sandbox. You can choose to edit the Dockerfile generated here to make changes to your application and use the docker build command to build new container images as needed. You can generate the artifacts needed to deploy the application containers in Amazon EKS by using the generate app-deployment command.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
EKS Cloudformation templates and additional deployment artifacts generated successfully for application java-tomcat-9e8e4799

You're all set to use AWS Cloudformation to manage your application stack.
Next Steps:
1. Edit the cloudformation template as necessary.
2. Create an application stack using the AWS CLI or the AWS Console. AWS CLI command:

       aws cloudformation deploy --template-file /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml --capabilities CAPABILITY_NAMED_IAM --stack-name java-tomcat-9e8e4799

3. Setup a pipeline for your application stack:

       app2container generate pipeline --application-id java-tomcat-9e8e4799

This command works from the deployment.json template file produced as part of running the containerize command. App2Container now generates the ECS/EKS CloudFormation templates as well, along with an option to deploy those stacks.

The command registers the container image in the user-specified ECR repository and generates CloudFormation templates for Amazon ECS and EKS deployments. You can register the ECS task definition with Amazon ECS, or use kubectl to launch the containerized application on an existing Amazon EKS or self-managed Kubernetes cluster using the App2Container-generated amazon-eks-master.template.yaml.

Alternatively, you can deploy containerized applications directly to Amazon EKS by using the --deploy option.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799 --deploy
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
Initiated Cloudformation stack creation. This may take a few minutes. Please visit the AWS Cloudformation Console to track progress.
Deploying application to EKS

Handling ASP.NET Applications with Windows Authentication
Containerizing ASP.NET applications follows almost the same process as for Java applications, but Windows containers cannot be directly domain joined. They can, however, still use Active Directory (AD) domain identities to support various authentication scenarios.

App2Container detects whether a site is using Windows authentication, makes the IIS site’s application pool run as the network service identity accordingly, and generates new CloudFormation templates for Windows-authenticated IIS applications. Creating the gMSA and AD security group, domain-joining the ECS nodes, and making containers use this gMSA are all taken care of by those templates.

Also, the $ app2container containerize command outputs two PowerShell scripts, along with an instruction file on how to use them.

The following is an example output:

PS C:\Windows\system32> app2container containerize --application-id iis-SmartStoreNET-a726ba0b
Running AWS pre-requisite check...
Running Docker pre-requisite check...
Container build complete. Please use "docker images" to view the generated container images.
Detected that the Site is using Windows Authentication.
Generating powershell scripts into C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts required to setup Container host with Windows Authentication
Please look at C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts\WindowsAuthSetupInstructions.md for setup instructions on Windows Authentication.
A deployment file has been generated under C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b
Please edit the same as needed and generate deployment artifacts using "app2container generate-deployment"

The first PowerShell script, DomainJoinAddToSecGroup.ps1, joins the container host to the Active Directory domain and adds it to an Active Directory security group. The second script, CreateCredSpecFile.ps1, creates a Group Managed Service Account (gMSA), grants the Active Directory security group access to it, generates the credential spec for this gMSA, and stores it locally on the container host. You can execute these PowerShell scripts on the ECS host. The following is an example usage of the scripts:

PS C:\Windows\system32> .\DomainJoinAddToSecGroup.ps1 -ADDomainName Dominion.com -ADDNSIp 10.0.0.1 -ADSecurityGroup myIISContainerHosts -CreateADSecurityGroup:$true
PS C:\Windows\system32> .\CreateCredSpecFile.ps1 -GMSAName MyGMSAForIIS -CreateGMSA:$true -ADSecurityGroup myIISContainerHosts

Before executing the app2container generate-deployment command, edit the deployment.json file to change the value of dockerSecurityOption to the name of the CredentialSpec file that the CreateCredSpecFile script generated. For example,
"dockerSecurityOption": "credentialspec:file://dominion_mygmsaforiis.json"

Effectively, any access to a network resource made by the IIS server inside the container for the site will now use the above gMSA to authenticate. The final step is to authorize this gMSA account on the network resources that the IIS server will access. A common example is authorizing this gMSA inside the SQL Server.

Finally, if the application must connect to a database to be fully functional and you run the container in Amazon ECS, ensure that the application container created from the Docker image generated by the tool has connectivity to the same database. You can refer to this documentation for options on migrating: MS SQL Server from Windows to Linux on AWS, Database Migration Service, and backup and restore your MS SQL Server to Amazon RDS.

Now Available
AWS App2Container is offered free. You pay only for your actual usage of AWS services like Amazon EC2, ECS, EKS, and S3. For details, please refer to the App2Container FAQs and documentation. Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for ECS, on the AWS Forum for EKS, or on the container roadmap on GitHub.

Channy;

Amazon RDS Proxy – Now Generally Available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-rds-proxy-now-generally-available/

At AWS re:Invent 2019, we launched the preview of Amazon RDS Proxy, a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Following the preview for the MySQL engine, we extended it to add PostgreSQL compatibility. Today, I am pleased to announce that RDS Proxy is now generally available for both engines.

Many applications, including those built on modern serverless architectures using AWS Lambda, Fargate, Amazon ECS, or EKS, can have a large number of open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources.

Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency, application scalability, and security. With RDS Proxy, failover times for Amazon Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).
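
To illustrate the IAM piece: when IAM database authentication is enabled on the proxy, a client can fetch a short-lived token and use it in place of a password. The following is a sketch using the AWS CLI with the example proxy endpoint that appears later in this post; the region, port, and user name are placeholders:

TOKEN=$(aws rds generate-db-auth-token \
    --hostname channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --username admin_user \
    --region us-east-1)

The token is then supplied as the password when connecting to the proxy over TLS.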

Amazon RDS Proxy can be enabled for most applications with no code changes. You don’t need to provision or manage any additional infrastructure, and you pay only per vCPU of the database instance for which the proxy is enabled.

Amazon RDS Proxy – Getting started
You can get started with Amazon RDS Proxy in just a few clicks in the AWS Management Console by creating an RDS Proxy endpoint for your RDS databases. In the navigation pane, choose Proxies, and then choose Create proxy.

To create your proxy, specify a Proxy identifier (a unique name of your choosing) and choose the database engine, either MySQL or PostgreSQL. Choose the encryption setting if you want the proxy to enforce TLS/SSL for all connections between the application and the proxy, and specify how long a client connection can be idle before the proxy closes it.

A client connection is considered idle when the application doesn’t submit a new request within the specified time after the previous request completed. The underlying connection between the proxy and database stays open and is returned to the connection pool. Thus, it’s available to be reused for new client connections.

Next, choose one RDS DB instance or Aurora DB cluster in Database to access through this proxy. The list only includes DB instances and clusters with compatible database engines, engine versions, and other settings.
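
If you script this association instead, it is made by registering the instance or cluster as a proxy target. Here is a sketch with the AWS CLI, assuming the proxy and cluster names used later in this post:

aws rds register-db-proxy-targets \
    --db-proxy-name channy-proxy \
    --db-cluster-identifiers cluster-56-2019-11-14-1399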

Specify Connection pool maximum connections, a value between 1 and 100. This setting represents the percentage of the max_connections value that RDS Proxy can use for its connections. If you only intend to use one proxy with this DB instance or cluster, you can set it to 100. For details about how RDS Proxy uses this setting, see Connection Limits and Timeouts.
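
The same setting can also be adjusted later from the CLI on the proxy’s default target group. A sketch, assuming the proxy is named channy-proxy:

aws rds modify-db-proxy-target-group \
    --db-proxy-name channy-proxy \
    --target-group-name default \
    --connection-pool-config MaxConnectionsPercent=100,MaxIdleConnectionsPercent=50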

Choose at least one Secrets Manager secret associated with the RDS DB instance or Aurora DB cluster that you intend to access with this proxy, and select an IAM role that has permission to access the Secrets Manager secrets you chose. If you don’t have an existing secret, please click Create a new secret before setting up the RDS proxy.
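
If you’d rather create the secret from the command line, RDS Proxy expects a username/password pair. A minimal sketch, with placeholder name and values:

aws secretsmanager create-secret \
    --name channy-rds-proxy-secret \
    --description "Database credentials for channy-proxy" \
    --secret-string '{"username":"admin_user","password":"YOUR_PASSWORD"}'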

After setting the VPC subnets and a security group, click Create proxy. If you want more detail on these settings, please refer to the documentation.

After a few minutes, the new RDS proxy appears in the console; then point your application at the RDS Proxy endpoint. That’s it!

You can also create an RDS proxy easily with the AWS CLI.

aws rds create-db-proxy \
    --db-proxy-name channy-proxy \
    --role-arn iam_role \
    --engine-family { MYSQL|POSTGRESQL } \
    --vpc-subnet-ids space_separated_list \
    [--vpc-security-group-ids space_separated_list] \
    [--auth ProxyAuthenticationConfig_JSON_string] \
    [--require-tls | --no-require-tls] \
    [--idle-client-timeout value] \
    [--debug-logging | --no-debug-logging] \
    [--tags comma_separated_list]
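
For example, here is a filled-in sketch with placeholder ARNs, subnet IDs, and names; substitute your own IAM role, secret, and VPC resources:

aws rds create-db-proxy \
    --db-proxy-name channy-proxy \
    --engine-family MYSQL \
    --role-arn arn:aws:iam::123456789012:role/channy-proxy-role \
    --auth AuthScheme=SECRETS,SecretArn=arn:aws:secretsmanager:us-east-1:123456789012:secret:channy-rds-proxy-secret,IAMAuth=DISABLED \
    --vpc-subnet-ids subnet-0abc1234 subnet-0def5678 \
    --require-tls

# The proxy takes a few minutes to become available; check its status before use.
aws rds describe-db-proxies --db-proxy-name channy-proxy --query 'DBProxies[0].Status'

Once the proxy reports an available status, register your DB instance or cluster as a target (as in the earlier register-db-proxy-targets sketch) and point your application at the proxy endpoint.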

How RDS Proxy works
Let’s see an example that demonstrates how open connections continue working during a failover when you reboot a database or it becomes unavailable due to a problem. This example uses a proxy named channy-proxy and an Aurora DB cluster with DB instances instance-8898 and instance-9814. When the failover-db-cluster command is run from the Linux command line, the writer instance that the proxy is connected to changes to a different DB instance. You can see that the DB instance associated with the proxy changes while the connection remains open.

$ mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
Enter password:
...
mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-identifier cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-8898 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ aws rds failover-db-cluster --db-cluster-identifier cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)
+---------------+---------------+
| Variable_name | Value |
+---------------+---------------+
| hostname | ip-10-1-3-178 |
+---------------+---------------+
1 row in set (0.02 sec)

With RDS Proxy, you can build applications that can transparently tolerate database failures without needing to write complex failure handling code. RDS Proxy automatically routes traffic to a new database instance while preserving application connections.

You can review the demo for an overview of RDS Proxy and the steps you need to take to access RDS Proxy from a Lambda function.

If you want to know how your serverless applications maintain excellent performance even at peak loads, please read this blog post. For a deeper dive into using RDS Proxy for MySQL with serverless, visit this post.

The following are a few things that you should be aware of:

  • Currently, RDS Proxy is available for the MySQL and PostgreSQL engine families. These include RDS for MySQL 5.6 and 5.7, and PostgreSQL 10.11 and 11.5.
  • In an Aurora cluster, all of the connections in the connection pool are handled by the Aurora primary instance. To perform load balancing for read-intensive workloads, you still use the reader endpoint directly for the Aurora cluster.
  • Your RDS Proxy must be in the same VPC as the database. Although the database can be publicly accessible, the proxy can’t be.
  • Proxies don’t support compressed mode. For example, they don’t support the compression used by the --compress or -C options of the mysql command.

Now Available!
Amazon RDS Proxy is generally available in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions for Aurora MySQL, RDS for MySQL, Aurora PostgreSQL, and RDS for PostgreSQL, and it includes support for Aurora Serverless and Aurora Multi-Master.

Take a look at the product page, pricing, and the documentation to learn more. Please send us feedback either in the AWS forum for Amazon RDS or through your usual AWS support contacts.

Channy;
