Field Notes: Monitoring the Java Virtual Machine Garbage Collection on AWS Lambda

Post Syndicated from Steffen Grunwald original https://aws.amazon.com/blogs/architecture/field-notes-monitoring-the-java-virtual-machine-garbage-collection-on-aws-lambda/

When you want to optimize your Java application on AWS Lambda for performance and cost, the general steps are: Build, measure, then optimize! To accomplish this, you need a solid monitoring mechanism. Amazon CloudWatch and AWS X-Ray are well suited for this task since they already provide lots of data about your AWS Lambda function. This includes overall memory consumption, initialization time, and duration of your invocations. To examine the Java Virtual Machine (JVM) memory you need garbage collection logs from your functions. Instances of an AWS Lambda function have a short lifecycle compared to a long-running Java application server. It can be challenging to process the logs from tens or hundreds of these instances.

In this post, you learn how to emit and collect data to monitor the JVM garbage collector activity. Having this data, you can visualize out-of-memory situations of your applications in a Kibana dashboard like in the following screenshot. You gain actionable insights into your application’s memory consumption on AWS Lambda for troubleshooting and optimization.

The lifecycle of a JVM application on AWS Lambda

Let’s first revisit the lifecycle of the AWS Lambda Java runtime and its JVM:

  1. A Lambda function is invoked.
  2. AWS Lambda launches an execution context. This is a temporary runtime environment based on the configuration settings you provide, like permissions, memory size, and environment variables.
  3. AWS Lambda creates a new log stream in Amazon CloudWatch Logs for each instance of the execution context.
  4. The execution context initializes the JVM and your handler’s code.

You typically see the initialization of a fresh execution context when a Lambda function is invoked for the first time, after it has been updated, or it scales up in response to more incoming events.

AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation. In effect, the service freezes the execution context after a Lambda function completes. It thaws the execution context when the Lambda function is invoked again if AWS Lambda chooses to reuse it.

During invocations, the JVM also maintains garbage collection as usual. Outside of invocations, the JVM and its maintenance processes like garbage collection are also frozen.

Garbage collection and indicators for your application’s health

The purpose of JVM garbage collection is to clean up objects in the JVM heap, which is the space for an application’s objects. It finds objects which are unreachable and deletes them. This frees heap space for other objects.

You can make the JVM log garbage collection activities to get insights into the health of your application. One example of this is the amount of free heap after each garbage collection. If this metric keeps shrinking, it is an indicator of a memory leak, eventually turning into an OutOfMemoryError. If there is not enough free heap, the JVM might be too busy with garbage collection instead of running your application code. Conversely, a heap that is much larger than needed indicates potential to decrease the memory configuration of your AWS Lambda function. An appropriately sized heap keeps garbage collection pauses low and provides a consistent response time.

The garbage collection logging can be configured via an environment variable as part of the AWS Lambda function configuration. The environment variable JAVA_TOOL_OPTIONS is considered by both the Java 8 and Java 11 JVMs. You use it to pass options that you would usually add to the command line when launching the JVM. The options to configure garbage collection logging, and the resulting output, are specific to the Java version.
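You can set this environment variable in the AWS Lambda console, with the AWS CLI, or with an SDK. The following is a minimal boto3 sketch, assuming a hypothetical function name and the Java 11 options shown in the next section; note that the call replaces the function's entire set of environment variables, so include any others your function needs:

import boto3

lambda_client = boto3.client("lambda")

# "my-java-function" is a placeholder; use your function's name.
# Caution: Environment replaces the full set of environment variables.
lambda_client.update_function_configuration(
    FunctionName="my-java-function",
    Environment={
        "Variables": {
            "JAVA_TOOL_OPTIONS": "-Xlog:gc+metaspace,gc+heap,gc:stdout:time,tags"
        }
    },
)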

Java 11 uses the Unified Logging System (JEP 158 and JEP 271), which was introduced in Java 9. Logging can be configured with the environment variable:

JAVA_TOOL_OPTIONS=-Xlog:gc+metaspace,gc+heap,gc:stdout:time,tags

With this configuration, the Serial Garbage Collector outputs log lines like the following:

[<TIMESTAMP>][gc] GC(4) Pause Full (Allocation Failure) 9M->9M(11M) 3.941ms (D)
[<TIMESTAMP>][gc,heap] GC(3) DefNew: 3063K->234K(3072K) (A)
[<TIMESTAMP>][gc,heap] GC(3) Tenured: 6313K->9127K(9152K) (B)
[<TIMESTAMP>][gc,metaspace] GC(3) Metaspace: 762K->762K(52428K) (C)
[<TIMESTAMP>][gc] GC(3) Pause Young (Allocation Failure) 9M->9M(21M) 23.559ms (D)

On Java 8 and earlier (prior to Java 9), you configure the garbage collection logging as follows:

JAVA_TOOL_OPTIONS=-XX:+PrintGCDetails -XX:+PrintGCDateStamps

The Serial garbage collector output in Java 8 is structured differently:

<TIMESTAMP>: [GC (Allocation Failure)
    <TIMESTAMP>: [DefNew: 131042K->131042K(131072K), 0.0000216 secs] (A)
    <TIMESTAMP>: [Tenured: 235683K->291057K(291076K), 0.2213687 secs] (B)
    366725K->365266K(422148K), (D)
    [Metaspace: 3943K->3943K(1056768K)], (C)
    0.2215370 secs]
    [Times: user=0.04 sys=0.02, real=0.22 secs]
<TIMESTAMP>: [Full GC (Allocation Failure)
    <TIMESTAMP>: [Tenured: 297661K->36658K(297664K), 0.0434012 secs] (B)
    431575K->36658K(431616K), (D)
    [Metaspace: 3943K->3943K(1056768K)], 0.0434680 secs] (C)
    [Times: user=0.02 sys=0.00, real=0.05 secs]

Independent of the Java version, the garbage collection activities are logged to standard out (stdout) or standard error (stderr). Logs appear in the AWS Lambda function’s log stream of Amazon CloudWatch Logs. The log contains the size of memory used for:

  • A: the young generation
  • B: the old generation
  • C: the metaspace
  • D: the entire heap

The notation is before-gc -> after-gc (committed heap). For example, Tenured: 6313K->9127K(9152K) means that the old generation grew from 6,313 KB to 9,127 KB during this garbage collection, with a committed size of 9,152 KB. Read the JVM Garbage Collection Tuning Guide for more details.

Visualizing the logs in Amazon Elasticsearch Service

It is hard to fully understand the garbage collection log by just reading it in Amazon CloudWatch Logs. You must visualize it to gain more insight. This section describes the solution to achieve this.

Solution Overview

Java Solution Overview

Amazon CloudWatch Logs has a feature to stream CloudWatch Logs data to Amazon Elasticsearch Service via an AWS Lambda function. The AWS Lambda function for log transformation is subscribed to the log group of your application’s AWS Lambda function. The subscription filters for a pattern that matches the garbage collection log entries. The log transformation function processes the log messages and puts them into a search cluster. To make the data easy to digest for the search cluster, you add code to transform and convert the messages to JSON. Once the data is in a search cluster, you can visualize it with Kibana dashboards.
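As an illustration of the transformation step, the following minimal Python sketch parses a Java 11 garbage collection pause line (in the format shown earlier) into a JSON document with the fields used by the Kibana dashboard later in this post. It is not the actual code of the prepackaged application in the aws-samples repository; the regular expression and function name are assumptions for illustration only:

import re

# Matches Java 11 lines such as:
# [2020-09-03T15:12:34.567+0000][gc] GC(3) Pause Young (Allocation Failure) 9M->9M(21M) 23.559ms
GC_PAUSE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\[gc\] GC\(\d+\) "
    r"(?P<gc_type>Pause \w+) \((?P<gc_cause>[^)]+)\) "
    r"(?P<before>\d+)M->(?P<after>\d+)M\((?P<size>\d+)M\) (?P<duration>[\d.]+)ms"
)

def to_document(message, owner, log_group, log_stream):
    # Transform one garbage collection log line into a document for the
    # search cluster; returns None for lines that are not GC pause entries.
    match = GC_PAUSE.match(message)
    if not match:
        return None
    return {
        "@timestamp": match["ts"],
        "@gc_type": match["gc_type"],
        "@gc_cause": match["gc_cause"],
        "@heap_before_gc": match["before"],
        "@heap_after_gc": match["after"],
        "@heap_size_gc": match["size"],
        "@gc_duration": match["duration"],
        "@owner": owner,
        "@log_group": log_group,
        "@log_stream": log_stream,
    }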

Get Started

To start, launch the solution architecture described above as a prepackaged application from the AWS Serverless Application Repository. It contains all resources ready to visualize the garbage collection logs for your Java 11 AWS Lambda functions in a Kibana dashboard. The search cluster consists of a single t2.small.elasticsearch instance with 10GB of EBS storage. It is protected with Amazon Cognito User Pools so you only need to add your user(s). The T2 instance types do not support encryption of data at rest.

Read the source code for the application in the aws-samples repository.

1. Spin up the application from the AWS Serverless Application Repository:

launch stack button

2. As soon as the application is deployed completely, the outputs of the AWS CloudFormation stack provide the links for the next steps. You will find two URLs in the AWS CloudFormation console called createUserUrl and kibanaUrl.

search stack

3. Use the createUserUrl link from the outputs, or navigate to the Amazon Cognito user pool in the console to create a new user in the pool.

a. Enter an email address as username and email. Enter a temporary password of your choice with at least 8 characters.

b. Leave the phone number empty and uncheck the checkbox to mark the phone number as verified.

c. If necessary, you can check the checkboxes to send an invitation to the new user or to make the user verify the email address.

d. Choose Create user.

create user dialog of Amazon Cognito User Pools

4. Access the Kibana dashboard with the kibanaUrl link from the AWS CloudFormation stack outputs, or navigate to the Kibana link displayed in the Amazon Elasticsearch Service console.

a. In Kibana, choose the Dashboard icon in the left menu bar

b. Open the Lambda GC Activity dashboard.

You can test that new events appear by using the Kibana Developer Console:

POST gc-logs-2020.09.03/_doc
{
  "@timestamp": "2020-09-03T15:12:34.567+0000",
  "@gc_type": "Pause Young",
  "@gc_cause": "Allocation Failure",
  "@heap_before_gc": "2",
  "@heap_after_gc": "1",
  "@heap_size_gc": "9",
  "@gc_duration": "5.432",
  "@owner": "123456789012",
  "@log_group": "/aws/lambda/myfunction",
  "@log_stream": "2020/09/03/[$LATEST]123456"
}

5. When you go to the Lambda GC Activity dashboard you can see the new event. You must select the right timeframe with the Show dates link.

Lambda GC activity

The dashboard consists of six tiles:

  • In Filters, you can optionally select the log group and filter for a specific AWS Lambda function execution context by the name of its log stream.
  • GC Activity Count by Execution Context shows a heatmap of all filtered execution contexts by garbage collection activity count.
  • GC Activity Metrics displays a graph of the metrics for all filtered execution contexts.
  • GC Activity Count shows the number of garbage collection activities that are currently displayed.
  • GC Duration shows the sum of the durations of all displayed garbage collection activities.
  • GC Activity Raw Data at the bottom displays the raw items as ingested into the search cluster, for a further drill down.

Configure your AWS Lambda function for garbage collection logging

1. The application that you want to monitor needs to log garbage collection activities. Currently the solution supports logs from Java 11. Add the following environment variable to your AWS Lambda function to activate the logging.

JAVA_TOOL_OPTIONS=-Xlog:gc:stderr:time,tags

The function’s environment variables must reflect this setting, as in the following screenshot:

environment variables

2. Go to the streamLogs function in the AWS Lambda console that has been created by the stack, and subscribe it to the log group of the function you want to monitor.

streamlogs function

3. Select Add Trigger.

4. Select CloudWatch Logs as Trigger Configuration.

5. Input a Filter name of your choice.

6. Input "[gc" (including quotes) as the Filter pattern to match all garbage collection log entries.

7. Select the Log Group of the function you want to monitor. The following screenshot subscribes to the logs of the application’s function resize-lambda-ResizeFn-[...].

add trigger

8. Select Add.

9. Execute the AWS Lambda function you want to monitor.

10. Refresh the dashboard in Amazon Elasticsearch Service and see the new data points appear in the graph, alongside the one you added manually before.

Troubleshooting examples

Let’s look at an example function and draw some useful insights from the Java garbage collection log. The following diagrams show the Sample Amazon S3 function code for Java from the AWS Lambda documentation running in a Java 11 function with 512 MB of memory.

  • An S3 event from a new uploaded image triggers this function.
  • The function loads the image from S3, resizes it, and puts the resized version to S3.
  • The file size of the example image is close to 2.8MB.
  • The application is called 100 times with a pause of 1 second.

Memory leak

For the demonstration of a memory leak, the function has been changed to keep all source images in memory as a class variable. Hence, the memory consumption of the function keeps growing as it processes more images:

GC activity metrics

In the diagram, the heap size drops to zero at timestamp 12:34:00. The Amazon CloudWatch Logs of the function reveal an error; the next call to your code runs in the same AWS Lambda execution context, but with a fresh JVM:

Java heap space: java.lang.OutOfMemoryError
java.lang.OutOfMemoryError: Java heap space
 at java.desktop/java.awt.image.DataBufferByte.<init>(Unknown Source)
[...]

The JVM crashed and was restarted because of the error. You primarily use the Amazon CloudWatch Logs of your function to detect such errors. The garbage collection log and its visualization provide additional information for root cause analysis:

Did the JVM run out of memory because a single image to resize was too large?

Or was the memory issue growing over time?

The latter could be an indication that you have a memory leak in your code.

The heap size is too small

For the demonstration of a heap that was chosen too small, the memory leak from the preceding example has been resolved, but the function was configured with only 128 MB of memory. Between the baseline of the heap and the maximum heap size, there are only approximately 5 MB of headroom.

GC activity metrics

This results in high garbage collection overhead in your JVM. You should experiment with a higher memory configuration to find the optimal performance, also taking cost into account. Check out the AWS Lambda Power Tuning open source tool to do this in an automated fashion.

Fine-tuning the initial heap size

If you review the development of the heap size at the start of an execution context, you can see that the heap size is increased continuously. Each heap size change is an expensive operation that consumes time of your function, and the heap keeps being resized over time as well. In this example, the garbage collector logs 502 activities, which take almost 17 seconds overall.

GC activity metrics

This on-demand scaling is useful on a local workstation where the physical memory is shared with other applications. On AWS Lambda, the configured memory is dedicated to your function, so you can use it to its full extent.

You can do so by setting the minimum and maximum heap size to a fixed value by appending the -Xms and -Xmx parameters to the environment variable we introduced before.

The heap is not the only part of the JVM that consumes memory, so you must experiment with this setting and closely monitor the performance.

Start with the heap size that you observe to be working from the garbage collection log. If you set the heap size too large, your function will not initialize at all or break unexpectedly. Remember that the ability to tweak JVM parameters might change with future service features.

Let’s set the heap to 400 MB of the 512 MB memory and examine the results:

JAVA_TOOL_OPTIONS=-Xlog:gc:stderr:time,tags -Xms400m -Xmx400m

GC activity metrics

The preceding dashboard shows that the overall garbage collection duration was reduced by about 95%. The garbage collector had 80% fewer activities.

The garbage collection log entries displayed in the dashboard reveal that exclusively minor garbage collection (Pause Young) activities were triggered instead of major garbage collections (Pause Full). This is expected, as the images are immediately discarded after the download, resize, and upload operations. The effect on the overall function duration across 100 invocations is a 5% decrease on average in this specific case.

Lambda duration

Cost estimation and clean up

Cost is incurred for the processing and transformation of your function’s Amazon CloudWatch Logs when your function is called. This cost depends on your application and how often garbage collection activities are triggered. Read an estimate of the monthly cost for the search cluster. If you do not need the garbage collection monitoring anymore, delete the subscription filter from the log group of your AWS Lambda function(s). Also, delete the stack of the solution above in the AWS CloudFormation console to clean up resources.

Conclusion

In this post, we examined further sources of data to gain insights about the health of your Java application. We also demonstrated a pipeline to ingest, transform, and visualize this information continuously in a Kibana dashboard. As a next step, launch the application from the AWS Serverless Application Repository and subscribe it to your applications’ logs. Feel free to submit enhancements to the application in the aws-samples repository or provide feedback in the comments.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Can AI and Automation Deliver a COVID-19 Antiviral While It Still Matters?

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/artificial-intelligence/medical-ai/can-ai-and-automation-deliver-a-covid19-antiviral-while-it-still-matters

Within moments of meeting each other at a conference last year, Nathan Collins and Yann Gaston-Mathé began devising a plan to work together. Gaston-Mathé runs a startup that applies automated software to the design of new drug candidates. Collins leads a team that uses an automated chemistry platform to synthesize new drug candidates.

“There was an obvious synergy between their technology and ours,” recalls Gaston-Mathé, CEO and cofounder of Paris-based Iktos.

In late 2019, the pair launched a project to create a brand-new antiviral drug that would block a specific protein exploited by influenza viruses. Then the COVID-19 pandemic erupted across the world stage, and Gaston-Mathé and Collins learned that the viral culprit, SARS-CoV-2, relied on a protein that was 97 percent similar to their influenza protein. The partners pivoted.

Their companies are just two of hundreds of biotech firms eager to overhaul the drug-discovery process, often with the aid of artificial intelligence (AI) tools. The first set of antiviral drugs to treat COVID-19 will likely come from sifting through existing drugs. Remdesivir, for example, was originally developed to treat Ebola, and it has been shown to speed the recovery of hospitalized COVID-19 patients. But a drug made for one condition often has side effects and limited potency when applied to another. If researchers can produce an antiviral that specifically targets SARS-CoV-2, the drug would likely be safer and more effective than a repurposed drug.

There’s one big problem: Traditional drug discovery is far too slow to react to a pandemic. Designing a drug from scratch typically takes three to five years—and that’s before human clinical trials. “Our goal, with the combination of AI and automation, is to reduce that down to six months or less,” says Collins, who is chief strategy officer at SRI Biosciences, a division of the Silicon Valley research nonprofit SRI International. “We want to get this to be very, very fast.”

That sentiment is shared by small biotech firms and big pharmaceutical companies alike, many of which are now ramping up automated technologies backed by supercomputing power to predict, design, and test new antivirals—for this pandemic as well as the next—with unprecedented speed and scope.

“The entire industry is embracing these tools,” says Kara Carter, president of the International Society for Antiviral Research and executive vice president of infectious disease at Evotec, a drug-discovery company in Hamburg. “Not only do we need [new antivirals] to treat the SARS-CoV-2 infection in the population, which is probably here to stay, but we’ll also need them to treat future agents that arrive.”

There are currently about 200 known viruses that infect humans. Although viruses represent less than 14 percent of all known human pathogens, they make up two-thirds of all new human pathogens discovered since 1980.

Antiviral drugs are fundamentally different from vaccines, which teach a person’s immune system to mount a defense against a viral invader, and antibody treatments, which enhance the body’s immune response. By contrast, antivirals are chemical compounds that directly block a virus after a person has become infected. They do this by binding to specific proteins and preventing them from functioning, so that the virus cannot copy itself or enter or exit a cell.

The SARS-CoV-2 virus has an estimated 25 to 29 proteins, but not all of them are suitable drug targets. Researchers are investigating, among other targets, the virus’s exterior spike protein, which binds to a receptor on a human cell; two scissorlike enzymes, called proteases, that cut up long strings of viral proteins into functional pieces inside the cell; and a polymerase complex that makes the cell churn out copies of the virus’s genetic material, in the form of single-stranded RNA.

But it’s not enough for a drug candidate to simply attach to a target protein. Chemists also consider how tightly the compound binds to its target, whether it binds to other things as well, how quickly it metabolizes in the body, and so on. A drug candidate may have 10 to 20 such objectives. “Very often those objectives can appear to be anticorrelated or contradictory with each other,” says Gaston-Mathé.

Compared with antibiotics, antiviral drug discovery has proceeded at a snail’s pace. Scientists advanced from isolating the first antibacterial molecules in 1910 to developing an arsenal of powerful antibiotics by 1944. By contrast, it took until 1951 for researchers to be able to routinely grow large amounts of virus particles in cells in a dish, a breakthrough that earned the inventors a Nobel Prize in Medicine in 1954.

And the lag between the discovery of a virus and the creation of a treatment can be heartbreaking. According to the World Health Organization, 71 million people worldwide have chronic hepatitis C, a major cause of liver cancer. The virus that causes the infection was discovered in 1989, but effective antiviral drugs didn’t hit the market until 2014.

While many antibiotics work on a range of microbes, most antivirals are highly specific to a single virus—what those in the business call “one bug, one drug.” It takes a detailed understanding of a virus to develop an antiviral against it, says Che Colpitts, a virologist at Queen’s University, in Canada, who works on antivirals against RNA viruses. “When a new virus emerges, like SARS-CoV-2, we’re at a big disadvantage.”

Making drugs to stop viruses is hard for three main reasons. First, viruses are the Spartans of the pathogen world: They’re frugal, brutal, and expert at evading the human immune system. About 20 to 250 nanometers in diameter, viruses rely on just a few parts to operate, hijacking host cells to reproduce and often destroying those cells upon departure. They employ tricks to camouflage their presence from the host’s immune system, including preventing infected cells from sending out molecular distress beacons. “Viruses are really small, so they only have a few components, so there’s not that many drug targets available to start with,” says Colpitts.

Second, viruses replicate quickly, typically doubling in number in hours or days. This constant copying of their genetic material enables viruses to evolve quickly, producing mutations able to sidestep drug effects. The virus that causes AIDS soon develops resistance when exposed to a single drug. That’s why a cocktail of antiviral drugs is used to treat HIV infection.

Finally, unlike bacteria, which can exist independently outside human cells, viruses invade human cells to propagate, so any drug designed to eliminate a virus needs to spare the host cell. A drug that fails to distinguish between a virus and a cell can cause serious side effects. “Discriminating between the two is really quite difficult,” says Evotec’s Carter, who has worked in antiviral drug discovery for over three decades.

And then there’s the money barrier. Developing antivirals is rarely profitable. Health-policy researchers at the London School of Economics recently estimated that the average cost of developing a new drug is US $1 billion, and up to $2.8 billion for cancer and other specialty drugs. Because antivirals are usually taken for only short periods of time or during short outbreaks of disease, companies rarely recoup what they spent developing the drug, much less turn a profit, says Carter.

To change the status quo, drug discovery needs fresh approaches that leverage new technologies, rather than incremental improvements, says Christian Tidona, managing director of BioMed X, an independent research institute in Heidelberg, Germany. “We need breakthroughs.”

Iktos’s AI platform was created by a medicinal chemist and an AI expert. To tackle SARS-CoV-2, the company used generative models—deep-learning algorithms that generate new data—to “imagine” molecular structures with a good chance of disabling a key coronavirus protein.

For a new drug target, the software proposes and evaluates roughly 1 million compounds, says Gaston-Mathé. It’s an iterative process: At each step, the system generates 100 virtual compounds, which are tested in silico with predictive models to see how closely they meet the objectives. The test results are then used to design the next batch of compounds. “It’s like we have a very, very fast chemist who is designing compounds, testing compounds, getting back the data, then designing another batch of compounds,” he says.

The computer isn’t as smart as a human chemist, Gaston-Mathé notes, but it’s much faster, so it can explore far more of what people in the field call “chemical space”—the set of all possible organic compounds. Unexplored chemical space is huge: Biochemists estimate that there are at least 10^63 possible druglike molecules, and that 99.9 percent of all possible small molecules or compounds have never been synthesized.

Still, designing a chemical compound isn’t the hardest part of creating a new drug. After a drug candidate is designed, it must be synthesized, and the highly manual process for synthesizing a new chemical hasn’t changed much in 200 years. It can take days to plan a synthesis process and then months to years to optimize it for manufacture.

That’s why Gaston-Mathé was eager to send Iktos’s AI-generated designs to Collins’s team at SRI Biosciences. With $13.8 million from the Defense Advanced Research Projects Agency, SRI Biosciences spent the last four years automating the synthesis process. The company’s automated suite of three technologies, called SynFini, can produce new chemical compounds in just hours or days, says Collins.

First, machine-learning software devises possible routes for making a desired molecule. Next, an inkjet printer platform tests the routes by printing out and mixing tiny quantities of chemical ingredients to see how they react with one another; if the right compound is produced, the platform runs tests on it. Finally, a tabletop chemical plant synthesizes milligrams to grams of the desired compound.

Less than four months after Iktos and SRI Biosciences announced their collaboration, they had designed and synthesized a first round of antiviral candidates for SARS-CoV-2. Now they’re testing how well the compounds work on actual samples of the virus.

Theirs isn’t the only collaboration applying new tools to drug discovery. In late March, Alex Zhavoronkov, CEO of Hong Kong–based Insilico Medicine, came across a YouTube video showing three virtual-reality avatars positioning colorful, sticklike fragments in the side of a bulbous blue protein. The three researchers were using VR to explore how compounds might bind to a SARS-CoV-2 enzyme. Zhavoronkov contacted the startup that created the simulation—Nanome, in San Diego—and invited it to examine Insilico’s AI-generated molecules in virtual reality.

Insilico runs an AI platform that uses biological data to train deep-learning algorithms, then uses those algorithms to identify molecules with druglike features that will likely bind to a protein target. A four-day training sprint in late January yielded 100 molecules that appear to bind to an important SARS-CoV-2 protease. The company recently began synthesizing some of those molecules for laboratory testing.

Nanome’s VR software, meanwhile, allows researchers to import a molecular structure, then view and manipulate it on the scale of individual atoms. Like human chess players who use computer programs to explore potential moves, chemists can use VR to predict how to make molecules more druglike, says Nanome CEO Steve McCloskey. “The tighter the interface between the human and the computer, the more information goes both ways,” he says.

Zhavoronkov sent data about several of Insilico’s compounds to Nanome, which re-created them in VR. Nanome’s chemist demonstrated chemical tweaks to potentially improve each compound. “It was a very good experience,” says Zhavoronkov.

Meanwhile, in March, Takeda Pharmaceutical Co., of Japan, invited Schrödinger, a New York–based company that develops chemical-simulation software, to join an alliance working on antivirals. Schrödinger’s AI focuses on the physics of how proteins interact with small molecules and one another.

The software sifts through billions of molecules per week to predict a compound’s properties, and it optimizes for multiple desired properties simultaneously, says Karen Akinsanya, chief biomedical scientist and head of discovery R&D at Schrödinger. “There’s a huge sense of urgency here to come up with a potent molecule, but also to come up with molecules that are going to be well tolerated” by the body, she says. Drug developers are seeking compounds that can be broadly used and easily administered, such as an oral drug rather than an intravenous drug, she adds.

Schrödinger evaluated four protein targets and performed virtual screens for two of them, a computing-intensive process. In June, Google Cloud donated the equivalent of 16 million hours of Nvidia GPU time for the company’s calculations. Next, the alliance’s drug companies will synthesize and test the most promising compounds identified by the virtual screens.

Other companies, including Amazon Web Services, IBM, and Intel, as well as several U.S. national labs are also donating time and resources to the Covid-19 High Performance Computing Consortium. The consortium is supporting 87 projects, which now have access to 6.8 million CPU cores, 50,000 GPUs, and 600 petaflops of computational resources.

While advanced technologies could transform early drug discovery, any new drug candidate still has a long road after that. It must be tested in animals, manufactured in large batches for clinical trials, then tested in a series of trials that, for antivirals, lasts an average of seven years.

In May, the BioMed X Institute in Germany launched a five-year project to build a Rapid Antiviral Response Platform, which would speed drug discovery all the way through manufacturing for clinical trials. The €40 million ($47 million) project, backed by drug companies, will identify outside-the-box proposals from young scientists, then provide space and funding to develop their ideas.

“We’ll focus on technologies that allow us to go from identification of a new virus to 10,000 doses of a novel potential therapeutic ready for trials in less than six months,” says BioMed X’s Tidona, who leads the project.

While a vaccine will likely arrive long before a bespoke antiviral does, experts expect COVID-19 to be with us for a long time, so the effort to develop a direct-acting, potent antiviral continues. Plus, having new antivirals—and tools to rapidly create more—can only help us prepare for the next pandemic, whether it comes next month or in another 102 years.

“We’ve got to start thinking differently about how to be more responsive to these kinds of threats,” says Collins. “It’s pushing us out of our comfort zones.”

This article appears in the October 2020 print issue as “Automating Antivirals.”

AI Tool to Diagnose Autism Could Give Concerned Parents a Fast Diagnosis

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/cognoa-ai-autism-diagnostic-seeks-fda-clearance

This week, a California-based company announced it will seek FDA clearance for a first-of-its-kind autism spectrum disorder (ASD) diagnostic tool. Cognoa’s technology uses artificial intelligence to make an ASD diagnosis within weeks of signs of concern—far faster than the current standard of care. If cleared by the FDA, it would be the first tool enabling primary care pediatricians to diagnose autism.

The approach is “innovative,” says Robin Goin-Kochel, a clinical autism researcher at Baylor College of Medicine and associate director for research at Texas Children’s Hospital’s Autism Center, who is not affiliated with Cognoa. The field absolutely needs a way to “minimize the time between first concerns about development or behavior and eventual ASD diagnosis,” she adds.

Cognoa’s tool is the latest application of AI to healthcare, a fast-moving field we’ve been tracking at IEEE Spectrum. In many situations, AI tools seek to replace doctors in the prediction or diagnosis of a condition. In this case, however, the application of AI could enable more doctors to make a diagnosis of autism, thereby opening a critical bottleneck in children’s healthcare.

Documented Death from a Ransomware Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/documented-death-from-a-ransomware-attack.html

A Dusseldorf woman died when a ransomware attack against a hospital forced her to be taken to a different hospital in another city.

I think this is the first documented case of a cyberattack causing a fatality. UK hospitals had to redirect patients during the 2017 WannaCry ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

Our Journey to Continuous Delivery at Grab (Part 1)

Post Syndicated from Grab Tech original https://engineering.grab.com/our-journey-to-continuous-delivery-at-grab

This blog post is a two-part presentation of the effort that went into improving the continuous delivery processes for backend services at Grab in the past two years. In the first part, we take stock of where we started two years ago and describe the software and tools we created while introducing some of the integrations we’ve done to automate our software delivery in our staging environment.


Continuous Delivery is the principle of delivering software often, every day.

As a backend engineer at Grab, nothing matters more than the ability to innovate quickly and safely. Around the end of 2018, Grab’s transportation and deliveries backend architecture consisted of roughly 270 services (the majority being microservices). The deployment process was lengthy and required careful inputs and clear communication. The care needed to push changes to production and the risk associated with manual operations led to the introduction of a Slack bot to coordinate deployments. The bot ensures that deployments occur only during off-peak times and within work hours:

Overview of the Grab Delivery Process
Overview of the Grab Delivery Process

Once the build was completed, engineers who wanted to deploy their software to the Staging environment would copy release versions from the build logs and paste them into a Jenkins job’s parameters. Tests needed to be manually triggered from another dedicated Jenkins job.

Prior to production deployments, engineers would generate their release notes via a script and update them manually in a wiki document. Deployments would be scheduled through interactions with a Slack bot that controls release notes and deployment windows. Production deployments were made once again by pasting the correct parameters into two dedicated Jenkins jobs, one for the canary (a.k.a. one-box) deployment and the other for the full deployment, spread one hour apart. During the monitoring phase, engineers would continuously observe metrics reported on our dashboards.

In spite of the fragmented process and risky manual operations impacting our velocity and stability, around 614 builds were running each business day and changes were deployed on our staging environment at an average rate of 300 new code releases per business day, while production changes averaged a rate of 28 new code releases per business day.

Our Deployment Funnel, Towards the End of 2018
Our Deployment Funnel, Towards the End of 2018

These figures meant that, on average, it took 10 business days between each service update in production (roughly 270 services divided by 28 production releases per business day), and only about 10% of the staging deployments (28 out of 300 per business day) were eventually promoted to production.

Automating Continuous Deployments at Grab

With an increased focus on Engineering efficiency, in 2018 we started an internal initiative, which became known as Conveyor, to address friction in deployments. To build Conveyor with a small team of engineers, we had to rely on an already mature platform that exhibited the properties we needed to achieve our mission.

Hands-off deployments

Deployments should be an afterthought. Engineers should be as removed from the process as possible, and whenever possible, decisions should be taken early, during the code review process. The machine will do the heavy lifting, and only when it can’t decide for itself, should the engineer be involved. Notifications can be leveraged to ensure that engineers are only informed when something goes wrong and a human decision is required.

Hands-off Deployment Principle
Hands-off Deployment Principle

Confidence in Deployments

Grab’s focus on gathering internal Engineering NPS feedback helped us collect valuable metrics. One of the metrics we cared about was our engineers’ confidence in their production deployments. A team’s entire deployment process to production could last for more than a day and may extend up to a week for teams with large infrastructures running critical services. The possibility of losing progress in deployments when individual steps may last for hours is detrimental to the improvement of Engineering efficiency in the organisation. The deployment automation platform is the bedrock of that confidence. If the platform itself fails regularly or does not provide an upgrade path that is transparent to end-users, any features built on top of it would suffer from these downtimes and ultimately erode confidence in deployments.

Tailored To Most But Extensible For The Few

Our backend engineering teams are working on diverse stacks, and so are their deployment processes. Right from the start, we wanted our product to benefit the largest population of engineers that had adopted the same process, so as to maximize returns on our investments. To ease adoption, we decided to tailor a deployment pipeline such that:

  1. It would model the exact sequence of manual processes followed by this population of engineers.
  2. Switching to use that pipeline should require as little work as possible by service teams.

However, in cases where this model would not fit a team’s specific process, our deployment platform should be open and extensible and support new customizations even when they are not originally supported by the product’s ecosystem.

Cloud-Agnosticity

While we were going to target a specific process and team, to ensure that our solution would stand the test of time, we needed to ensure that our solution would support the variety of environments currently used in production. This variety was also likely to increase, and we wanted a platform that would mature together with the rest of our ecosystem.

Overview Of Conveyor

Setting Sail With Spinnaker

Conveyor is based on Spinnaker, an open-source, multi-cloud continuous delivery platform. We’ve chosen Spinnaker over other platforms because it is a mature deployment platform with no single point of failure, supports complex workflows (referred to as pipelines in Spinnaker), and already supports a large array of cloud providers. Since Spinnaker is open-source and extensible, it allowed us to add the features we needed for the specificity of our ecosystem.

To further ease adoption within our organization, we built a tailored user interface and created our own domain-specific language (DSL) to manage its pipelines as code.

Outline of Conveyor's Architecture
Outline of Conveyor’s Architecture

Onboarding To A Simpler Interface

Spinnaker comes with its own interface, which has all the features an engineer would want from an advanced continuous delivery system. However, Spinnaker’s interface is vastly different from Jenkins and makes for a steep learning curve.

To reduce our barrier to adoption, we decided early on to create a simple interface for our users. In this interface, deployment pipelines take the center stage of our application. Pipelines are objects managed by Spinnaker; they model the different steps in the workflow of each deployment. Each pipeline is made up of stages that can be assembled like Lego bricks to form the final pipeline. An instance of a pipeline is called an execution.

Conveyor dashboard. Sensitive information like authors and service names are redacted.
Conveyor Dashboard

With this interface, each engineer can focus on what matters to them immediately: the pipeline they have started, or those started by other teammates working on the same services as they are. Conveyor also provides a search bar (on the top) and filters (on the left) that work in concert to explore all pipelines executed at Grab.

We adopted a consistent set of colours to model all information in our interface:

  • blue: stages that are currently running;
  • red: stages that have failed, or important information;
  • yellow: stages that require human interaction;
  • and finally, green: stages that were successfully completed.

Conveyor also provides a task and notifications area, where all stages requiring human intervention are listed in one location. Manual interactions are often no more than just YES or NO questions:

Conveyor tasks. Sensitive information like author/service names is redacted.
Conveyor Tasks

Finally, in addition to supporting automated deployments, we greatly simplified the start of manual deployments. Instead of being required to copy/paste information, each parameter can be selected on the interface from a set of predefined items, sorted chronologically, and presented with contextual information to help engineers in their decision.

Several parameters are required for our deployments and their values are selected from the UI to ensure correctness.

Simplified manual deployments
Simplified Manual Deployments

Ease Of Adoption With Our Pipeline-As-Code DSL

Ease of adoption for the team is not simply about the learning curve of the new tools. We needed to make it easy for teams to configure their services to deploy with Conveyor. Since we focused on automating tasks that were already performed manually, we needed only to configure the layer that would enable the integration.

We set out to create a pipeline-as-code implementation when none was being widely developed in the Spinnaker community. It’s interesting to see that two years on, this idea has grown in parallel in the community, with the birth of other pipeline-as-code implementations. Our pipeline-as-code is referred to as the Pipeline DSL, and its configuration is located inside each team’s repository. Artificer is the name of our Pipeline DSL interpreter, and it runs with every change inside our monorepository:

Artificer: Our Pipeline DSL
Artificer: Our Pipeline DSL

Pipelines are updated on every commit, if necessary.

Creating a conveyor.jsonnet file inside the service’s directory of our monorepository with the few lines below is all that’s required for Artificer to do its work and get the benefits of the automation provided by Conveyor’s pipelines:

local default = import 'default.libsonnet';
[
  {
    name: "service-name",
    group: [
      "group-name",
    ]
  }
]

Sample minimal conveyor.jsonnet configuration to onboard services.

In this file, engineers simply specify the name of their service and the group that a user should belong to, to have deployment rights for the service.

Once the build is completed, teams can log in to Conveyor and start manual deployments of their services with our pipelines. Three pipelines are provided by default: the integration pipeline used for tests and developments, the staging pipeline used for pre-production tests, and the production pipeline for production deployment.

Thanks to the simplicity of this minimal configuration file, we were able to generate these configuration files for all existing services of our monorepository. This resulted in the automatic onboarding of a large number of teams and was a major contributing factor to the adoption of Conveyor throughout our organisation.

Our Journey To Engineering Efficiency (for backend services)

The sections below relate some of the improvements in engineering efficiency we’ve delivered since Conveyor’s inception. They were not made precisely in this order but for readability, they have been mapped to each step of the software development lifecycle.

Automate Deployments at Build Time

Continuous Integration Job
Continuous Integration Job

Continuous delivery begins with a pushed code commit in our trunk-based development flow. Whenever a developer pushes changes onto their development branch or onto the trunk, a continuous integration job is triggered on Jenkins. The products of this job (binaries, docker images, etc) are all uploaded into our artefact repositories. We’ve made two additions to our continuous integration process.

The first modification happens at the step “Upload & Register artefacts”. At this step, each artefact created is now registered in Conveyor with its associated metadata. When and if an engineer needs to trigger a deployment manually, Conveyor can display the list of versions to choose from, eliminating the need for error-prone manual inputs:

 Staging
Staging

Each selectable version shows contextual information: title, author, version and link to the code change where it originated. During registration, the commit time is also recorded and used to order entries chronologically in the interface. To ensure this integration is not a single point of failure for deployments, manual input is still available optionally.

The second modification implements one of the essential features of continuous delivery: your deployments should happen often, automatically. Engineers are now given the possibility to start automatic deployments once continuous integration has successfully completed, by simply modifying their project’s continuous integration settings:

 "AfterBuild": [
  {
      "AutoDeploy": {
      "OnDiff": false,
      "OnLand": true
    }
    "TYPE": "conveyor"
  }
 ],

Sample settings needed to trigger auto-deployments. ‘Diff’ refers to code review submissions, and ‘Land’ refers to merged code changes.

Staging Pipeline

Before deploying a new artefact to a service in production, changes are validated on the staging environment. During the staging deployment, we verify canary (one-box) deployments and full deployments with automated smoke and functional test suites.

Staging Pipeline
Staging Pipeline

We start by acquiring a deployment lock for this service and this environment. This prevents another deployment of the same service on the same environment from happening concurrently; other deployments wait in a FIFO queue until the lock is released.

The stage “Compute Changeset” ensures that the deployment is not a rollback. It verifies that the new version being deployed does not correspond to a rollback by comparing the ancestry of the commits provided during the artefact registration at build time: since we automate deployments after the build process has completed, a rollback may occur when two changes are created in quick succession and the newer change’s build completes earlier than the older one’s.
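As an illustration of the underlying idea (not Grab’s actual implementation), detecting a rollback can be reduced to a commit-ancestry test: the candidate is a rollback if the commit currently deployed is not an ancestor of the commit being deployed. A minimal Python sketch using git, with hypothetical function and variable names:

import subprocess

def is_rollback(currently_deployed_sha, candidate_sha):
    # `git merge-base --is-ancestor A B` exits with 0 when A is an ancestor
    # of B, and with 1 when it is not.
    result = subprocess.run(
        ["git", "merge-base", "--is-ancestor", currently_deployed_sha, candidate_sha],
        capture_output=True,
    )
    # If the currently deployed commit is not an ancestor of the candidate,
    # deploying the candidate would move the service back in history.
    return result.returncode != 0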

After the stage “Deploy Canary” has completed, smoke tests run. There are three kinds of tests executed at different stages of the pipeline: smoke, functional, and security tests. Smoke tests directly reach the canary instance’s endpoint, bypassing load-balancers. If the smoke tests fail, the canary is immediately rolled back and this deployment is terminated.

All tests are generated from the same builds as the artefact being tested, and their versions must match during testing. To ensure that the right version of the tests runs, and to distinguish between the different kinds of tests to perform, we provide additional metadata that is passed by Conveyor to the test system, known internally as Gandalf:

local default = import 'default.libsonnet';
[
  {
    name: "service-name",
    group: [
      "group-name",
    ],
    gandalf_smoke_tests: [
      {
        path: "repo.internal/path/to/my/smoke/tests"
      }
    ],
    gandalf_functional_tests: [
      {
        path: "repo.internal/path/to/my/functional/tests"
      }
    ],
    gandalf_security_tests: [
      {
        path: "repo.internal/path/to/my/security/tests"
      }
    ]
  }
]

Sample conveyor.jsonnet configuration with integration tests added.

Additionally, in parallel to the execution of the smoke tests, the canary is also being monitored from the moment its deployment has completed and for a predetermined duration. We leverage our integration with Datadog to allow engineers to select the alerts to monitor. If an alert is triggered during the monitoring period, and while the tests are executed, the canary is again rolled back, and the pipeline is terminated. Engineers can specify the alerts by adding them to the conveyor.jsonnet configuration file together with the monitoring duration:

local default = import 'default.libsonnet';
[
  {
    name: "service-name",
    group: [
      "group-name",
    ],
    gandalf_smoke_tests: [
      {
        path: "repo.internal/path/to/my/smoke/tests"
      }
    ],
    gandalf_functional_tests: [
      {
        path: "repo.internal/path/to/my/functional/tests"
      }
    ],
    gandalf_security_tests: [
      {
        path: "repo.internal/path/to/my/security/tests"
      }
    ],
    monitor: {
      stg: {
        duration_seconds: 300,
        alarms: [
          {
            type: "datadog",
            alert_id: 12345678
          },
          {
            type: "datadog",
            alert_id: 23456789
          }
        ]
      }
    }
  }
]

Sample conveyor.jsonnet configuration with alerts in staging added.

When the smoke tests and monitor pass and the deployment of new artefacts is completed, the pipeline execution triggers functional and security tests. Unlike smoke tests, functional & security tests run only after that step, as they communicate with the cluster through load-balancers, impersonating other services.

Before releasing the lock, release notes are generated to inform engineers of the delta of changes between the version they just released and the one currently running in production. Once the lock is released, the stage “Check Policies” verifies that the parameters and variables of the deployment obey a specific set of criteria, for example: whether the service metadata is up-to-date in our service inventory, or whether the base image used during deployment is sufficiently recent.

Here’s how the policy stage, the engine, and the providers interact with each other:

Check Policy Stage
Check Policy Stage

In Spinnaker, each event of a pipeline’s execution updates the pipeline’s state in the database. The current state of the pipeline can be fetched by its API as a single JSON document, describing all information related to its execution: including its parameters, the contextual information related to each stage or even the response from the various interfacing components. The role of our “Policy Check” stage is to query this JSON representation of the pipeline, to extract and transform the variables which are forwarded to our policy engine for validation. Our policy engine gathers judgements passed by different policy providers. If the validation by the policy engine fails, the deployment is not rolled back this time; however, promotion to production is not possible and the pipeline is immediately terminated.
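To make the flow more concrete, here is a heavily simplified Python sketch of that stage; the endpoints, field names, and policy-engine API are hypothetical and only illustrate the extract-transform-validate sequence described above:

import requests

SPINNAKER_API = "https://spinnaker.example.internal"      # hypothetical
POLICY_ENGINE = "https://policy-engine.example.internal"  # hypothetical

def check_policies(execution_id):
    # Fetch the JSON representation of the whole pipeline execution.
    execution = requests.get(f"{SPINNAKER_API}/pipelines/{execution_id}").json()

    # Extract and transform the variables that the policy providers care about.
    variables = {
        "application": execution.get("application"),
        "parameters": execution.get("trigger", {}).get("parameters", {}),
        "stages": [stage.get("type") for stage in execution.get("stages", [])],
    }

    # Forward them to the policy engine, which gathers judgements from the
    # individual policy providers.
    judgements = requests.post(f"{POLICY_ENGINE}/validate", json=variables).json()

    # Promotion to production is possible only if every judgement passes.
    return all(judgement["passed"] for judgement in judgements)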

The journey through staging deployment finally ends with the stage “Register Deployment”. This stage registers that a successful deployment was made in our staging environment as an artefact. Similarly to the policy check above, certain parameters of the deployment are picked up and consolidated into this document. We use this kind of artefact as proof for upcoming production deployment.

Continuing Our Journey to Engineering Efficiency

With the advancements made in continuous integration and deployment to staging, Conveyor has reduced the efforts needed by our engineers to just three clicks in its interface, when automated deployment is used. Even when the deployment is triggered manually, Conveyor gives the assurance that the parameters selected are valid and it does away with copy/pasting and human interactions across heterogeneous tools.

In the sequel to this blog post, we’ll dive into the improvements that we’ve made to our production deployments and introduce a crucial concept that led to the creation of our proof for successful staging deployment. Finally, we’ll cover the impact that Conveyor had on the continuous delivery of our backend services, by comparing our deployment velocity when we started two years ago versus where we are today.


All these improvements in efficiency for our engineers would never have been possible without the hard work of all team members involved in the project, past and present: Evan Sebastian, Tanun Chalermsinsuwan, Aufar Gilbran, Deepak Ramakrishnaiah, Repon Kumar Roy (Kowshik), Su Han, Voislav Dimitrijevikj, Qijia Wang, Oscar Ng, Jacob Sunny, Subhodip Mandal, and many others who have contributed and collaborated with them.


Join us

Grab is more than just the leading ride-hailing and mobile payments platform in Southeast Asia. We use data and technology to improve everything from transportation to payments and financial services across a region of more than 620 million people. We aspire to unlock the true potential of Southeast Asia and look for like-minded individuals to join us on this ride.

If you share our vision of driving South East Asia forward, apply to join our team today.

Embedding computational thinking skills in our learning resources

Post Syndicated from Oliver Quinlan original https://www.raspberrypi.org/blog/computational-thinking-skills-in-our-free-learning-resources/

Learning computing is fun, creative, and exploratory. It also involves understanding some powerful ideas about how computers work and gaining key skills for solving problems using computers. These ideas and skills are collected under the umbrella term ‘computational thinking’.

When we create our online learning projects for young people, we think as much about how to get across these powerful computational thinking concepts as we do about making the projects fun and engaging. To help us do this, we have put together a computational thinking framework, which you can read right now.

What is computational thinking? A brief summary

Computational thinking is a set of ideas and skills that people can use to design systems that can be run on a computer. In our view, computational thinking comprises:

  • Decomposition
  • Algorithms
  • Patterns and generalisations
  • Abstraction
  • Evaluation
  • Data

All of these aspects are underpinned by logical thinking, the foundation of computational thinking.

What does computational thinking look like in practice?

In principle, the processes a computer performs can also be carried out by people. (To demonstrate this, computing educators have created a lot of ‘unplugged’ activities in which learners enact processes like computers do.) However, when we implement processes so that they can be run on a computer, we benefit from the huge processing power that computers can marshal to do certain types of activities.

A group of young people and educators smiling while engaging with a computer

Computers need instructions that are designed in very particular ways. Computational thinking includes the set of skills we use to design instructions computers can carry out. This skill set represents the ways we can logically approach problem solving; as computers can only solve problems using logical processes, to write programs that run on a computer, we need to use logical thinking approaches. For example, writing a computer program often requires the task the program revolves around to be broken down into smaller tasks that a computer can work through sequentially or in parallel. This approach, called decomposition, can also help people to think more clearly about computing problems: breaking down a problem into its constituent parts helps us understand the problem better.

Understanding computational thinking supports people to take advantage of the way computers work to solve problems. Computers can run processes repeatedly and at amazing speeds. They can perform repetitive tasks that take a long time, or they can monitor states until conditions are met before performing a task. While computers sometimes appear to make decisions, they can only select from a range of pre-defined options. Designing systems that involve repetition and selection is another way of using computational thinking in practice.
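
Repetition and selection can likewise be sketched in a few lines of Python; this toy example (again our own, with an assumed threshold value) loops over readings and only acts when a condition is met.

```python
# Toy example: repetition (the loop) and selection (the if/else).
readings = [18, 19, 22, 25, 21]
THRESHOLD = 24  # assumed threshold, just for illustration

for reading in readings:          # repetition: check every reading in turn
    if reading > THRESHOLD:       # selection: choose between pre-defined options
        print(f"{reading}: too warm, switch the fan on")
    else:
        print(f"{reading}: within range, do nothing")
```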

Our computational thinking framework

Our team has been thinking about our approach to computational thinking for some time, and we have just published the framework we have developed to help us with this. It sets out the key areas of computational thinking, and then breaks these down into themes and learning objectives, which we build into our online projects and learning resources.

To develop this computational thinking framework, we worked with a group of academics and educators to make sure it is robust and useful for teaching and learning. The framework was also influenced by work from organisations such as Computing At School (CAS) in the UK, and the Computer Science Teachers’ Association (CSTA) in the USA.

We’ve been using the computational thinking framework to help us make sure we are building opportunities to learn about computational thinking into our learning resources. This framework is a first iteration, which we will review and revise based on experience and feedback.

We’re always keen to hear feedback from you in the community about how we shape our learning resources, so do let us know what you think about them and the framework in the comments.

The post Embedding computational thinking skills in our learning resources appeared first on Raspberry Pi.

[$] Python 3.9 is around the corner

Post Syndicated from coogle original https://lwn.net/Articles/831783/rss

Python 3.9.0rc2 was released on September 17, with the final version scheduled for October 5, roughly a year after the release of Python 3.8. Python 3.9 will come with new operators for dictionary unions, a new parser, two string operations meant to eliminate some longstanding confusion, as well as improved time-zone handling and type hinting. Developers may need to do some porting for code coming from Python 3.8 or earlier, as the new release has removed several previously-deprecated features still lingering from Python 2.7.
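
The headline features are easy to demonstrate. The snippet below is our own illustration rather than code from the LWN article; it requires Python 3.9 or later, and the zoneinfo lookup may need the tzdata package on systems without IANA time-zone data.

```python
# Dictionary unions (PEP 584): | returns a merged copy, |= merges in place.
defaults = {"colour": "red", "size": "M"}
overrides = {"size": "L"}
merged = defaults | overrides          # {'colour': 'red', 'size': 'L'}
defaults |= overrides                  # defaults is updated in place

# New string methods (PEP 616): removeprefix()/removesuffix() avoid the
# long-standing confusion around str.lstrip()/str.rstrip(), which strip
# a set of characters rather than a literal prefix or suffix.
name = "test_parser.py"
print(name.removeprefix("test_"))      # parser.py
print(name.removesuffix(".py"))        # test_parser

# Improved time-zone handling: the zoneinfo module (PEP 615) gives access
# to IANA time-zone data (may require the tzdata package on some systems).
from zoneinfo import ZoneInfo
from datetime import datetime
print(datetime(2020, 10, 5, 12, 0, tzinfo=ZoneInfo("Europe/Berlin")))
```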

The FinCen Files: The state-owned BDB services accounts linked to gangsterism and to arms for Islamic State

Post Syndicated from Atanas Chobanov original https://bivol.bg/fincen-files-bbr.html

Wednesday, 23 September 2020


An offshore company with an account at Postbank paid 31 million dollars to another offshore company with an account at the state-owned Bulgarian Development Bank (BDB). The first offshore entity belongs to a person linked to arms trafficking to Islamic State, while the other belongs to a notorious Bulgarian arms dealer, a partner of Boyan Petrakiev, “the Baron”. This data is found in the FinCen Files, leaked top-secret reports on suspicious financial transactions collected by the US Treasury Department’s Financial Crimes Enforcement Network.

The transfer in question was made on 1 November 2016. The company with the account at Postbank is called Heptagon Global Trading and is registered in Dubai. It sent 31 million dollars to an account of the Seychelles company Osiris Global Trade held at the Bulgarian Development Bank. Since the transfer was in dollars, it passed through the correspondent bank The Bank of New York Mellon, which reported the suspicious transfer to FinCen a month and a half later, at the beginning of 2017.

The part of the SAR report describing the 31 million dollar transfer between two Bulgarian banks.

The detailed report on the transaction notes that Heptagon Global Trading is registered in Dubai and has a website that gives no information about its owner. Osiris Global Trade, registered in the Seychelles, is presumed to be a shell company; no data on its owners was found.

An investigation by Bivol, together with partners from the International Consortium of Investigative Journalists (ICIJ), was able to establish who stands behind these two companies.

From a dealer in arms that ended up with Islamic State…

Heptagon Global Trading belongs to the American arms dealer Helmut G. Mertens. It had a US branch, which was closed in September 2017.

Helmut Mertens. Photo from his public WhatsApp profile

Helmut Mertens has an interesting biography. He is the son of the German SS officer Gerhard Mertens, who in 1943 took part in the operation by Otto Skorzeny’s Nazi commandos that freed Benito Mussolini from captivity. After the war, Mertens senior was recruited by the CIA together with Otto Skorzeny, and the two founded the arms trading company Merex AG, used for covert deliveries of German weapons with the blessing of the German intelligence services. Manuel Contreras, the former head of Chilean intelligence, claims in his memoirs that Mertens supplied the Pinochet regime with weapons and helicopters.

Mertens senior died in 1993 in Florida. His son Helmut Mertens inherited the arms business and has also been involved in scandalous deliveries. His company United International Supplies, founded back in 1987, trades in weapons from Eastern Europe. A 2017 report by the research group Conflict Armament Research (funded by the EU) identified Romanian grenade launchers purchased by this company in 2003 and subsequently used by Islamic State.

Mertens’ phone number can be found in public registers, and he maintains a profile on WhatsApp. Bivol’s attempts to reach him and ask him questions about the Bulgarian deal were unsuccessful. Postbank, where the Heptagon Global Trading account was opened, answered our questions laconically: “facts and circumstances concerning account balances and operations constitute banking secrecy within the meaning of Art. 62, para. 2 of the Credit Institutions Act, and such information may be disclosed only in compliance with the specific requirements of the law.”

It is a fact that Bulgaria has no legal restriction on opening accounts for foreign companies registered in offshore destinations. But in any case, a company with such a line of business, operating with tens of millions of dollars and with an owner linked to arms trafficking, should have been seriously vetted both by the bank and by our security services.

…to an arms dealer linked to “the Baron”

Osiris Global Trade, the company that received 31 million dollars from Mertens, holds its account at the Bulgarian Development Bank and is owned by Alexander Lyubomirov Dimitrov.

Until the end of 2019, “Osiris Global Trade Limited” owned the Bulgarian company “Oro Investments”. Alexander Dimitrov is listed in that company’s declaration of beneficial owners.

The same Alexander Dimitrov also owns the company “Alguns”, which holds an arms trading licence. Alguns’ specialty is buying up stale stockpiles of Soviet-model weapons and ammunition and placing them on the international market. This business flourished thanks to the Pentagon’s program to arm the Syrian rebels.

Bulgaria is one of the major arms suppliers under the program of the US Department of Defense’s Special Operations Command (SOCOM), as shown by the investigation “Убийствен удар” (“Deadly Blow”), which received several international awards for investigative journalism.

But in June 2015, one of the grenades supplied by Alguns exploded at the Anevo firing range leased by the company. The blast killed the American instructor Francis Norwillo from Texas, a father of two.

Following the trail of this incident, the American site Buzzfeed revealed that Alguns is a partner of the US company Purple Shovel, which received US government contracts worth 26.7 million dollars for arms deliveries to Syria.

The investigation also revealed that Dimitrov has ties to Boyan Petrakiev, “the Baron”, one of the emblematic figures of organized crime in Bulgaria. Before getting into the arms business, Dimitrov was a partner of the Baron in the company “SIB Metal”, which buys up scrap metal.

The publication dwells in detail on the connection between the Baron and Dimitrov, and raises the question of why the US government spends budget funds on deals with dubious individuals and on weapons of dubious quality.

Petrakiev told Buzzfeed that he was the one who introduced Dimitrov to “influential people”, and that it was precisely his connections that helped Dimitrov become a successful arms businessman.

“Before I helped him, Dimitrov was nothing,” said Boyan Petrakiev, the Baron.

“Petrakiev is a well-known criminal with a long record dating back decades,” Commissioner Yavor Kolev, head of the Transborder Organized Crime unit at the Bulgarian Ministry of Interior, told Buzzfeed. “In law enforcement circles he is considered a notorious figure of organized crime in Bulgaria.”

Osiris Global Trade also appears in the scheme of weapons paid for by the Pentagon, as shown by a court document from a case filed in the USA. It reveals that Purple Shovel owes millions of dollars both to Alguns and to Osiris Global Trade.

The scandal over the American who was killed and the 2015 publications in the US press evidently did not disturb Dimitrov’s business. At the end of 2016 he received 31 million dollars from another American arms dealer.

Bivol tried to contact Alexander Dimitrov, but the questions sent to Alguns’ company email went unanswered.

Our reporter also visited the address where Alguns and Oro Investments are registered, at 6 Pop Bogomil Street, floor 1, apartment 3. The building is modest and there is no marked office for either company. Dimitrov himself was absent, but it became clear that the neighbours often see him there. The boss of the successful arms company has not invested in a representative office, yet he gets around in a luxury Maserati.

It is clearly a matter of enormous public interest that tens of millions of dollars belonging to companies with questionable reputations are flowing through the state bank that is supposed to support small and medium-sized businesses. The Bulgarian Development Bank, however, did not answer the questions we sent by email.

In search of answers, Bivol contacted the press office of the Ministry of Finance, the principal of the BDB, directly. But all attempts to obtain information on how an offshore company like Osiris Global Trade can hold a dollar account at the state bank were unsuccessful. The finance minister at the time was Vladislav Goranov.

A bank transfer of nearly 50 million leva is hardly likely to go unnoticed by the services responsible for national security, all the more so in the strictly regulated and monitored arms business. But our questions sent to the State Agency for National Security (DANS), specifically addressed to its Financial Intelligence Directorate, went unanswered.

Is this discretion owed to the “influential people” standing behind Dimitrov, to whom he was recommended by none other than an emblematic figure of Bulgarian organized crime? Whoever they are, their influence evidently reaches the top of the state, given that the Baron’s associate has been allowed to bank at the Bulgarian Development Bank. And of course, it is important to find out how many more such offshore companies and dubious entities are moving dollar transfers worth millions through the state bank.

The author, Atanas Chobanov, is part of the group of 400 journalists from 88 countries who worked for 16 months on the secret FinCen documents provided to the Buzzfeed website and the International Consortium of Investigative Journalists (ICIJ).

[$] Accurate timestamps for the ftrace ring buffer

Post Syndicated from corbet original https://lwn.net/Articles/831207/rss

The function tracer (ftrace) subsystem has become an essential part of the kernel’s introspection tooling. Like many kernel subsystems, ftrace uses a ring buffer to quickly communicate events to user space; those events include a timestamp to indicate when they occurred. Until recently, the design of the ring buffer has led to the creation of inaccurate timestamps when events are generated from interrupt handlers. That problem has now been solved; read on for an in-depth discussion of how this issue came about and the form of its solution.
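
The full discussion sits behind the LWN paywall, but the flavour of the problem can be sketched with a toy model (our own, not the kernel’s actual data structures): if each event stores only a time delta since the previous event, a writer in a nested, interrupt-like context cannot safely update the shared “last timestamp” and may have to fall back to a delta of zero, making its events appear to coincide with the event they interrupted.

```python
import time
from contextlib import contextmanager

class ToyEventBuffer:
    """Toy delta-timestamp buffer: each event stores the time elapsed since the
    previous event instead of a full timestamp. A real ring buffer would also
    wrap around and reclaim space; that is omitted here."""

    def __init__(self):
        self.events = []
        self.last_ts = time.monotonic_ns()
        self.nesting = 0          # >0 while writing from the "interrupt" context

    def write(self, payload):
        now = time.monotonic_ns()
        if self.nesting:
            delta = 0             # nested writer cannot trust last_ts: record 0
        else:
            delta = now - self.last_ts
            self.last_ts = now
        self.events.append((delta, payload))

    @contextmanager
    def interrupt(self):
        # Simulates an interrupt handler preempting a writer.
        self.nesting += 1
        try:
            yield
        finally:
            self.nesting -= 1

buf = ToyEventBuffer()
buf.write("normal event")
with buf.interrupt():
    buf.write("interrupt event")   # delta 0: appears to coincide with the previous event
for delta, payload in buf.events:
    print(delta, payload)
```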

Linux Journal is Back

Post Syndicated from ris original https://lwn.net/Articles/832184/rss

Linux Journal has returned under the ownership of Slashdot Media. “As Linux enthusiasts and long-time fans of Linux Journal, we were disappointed to hear about Linux Journal closing its doors last year. It took some time, but fortunately we were able to get a deal done that allows us to keep Linux Journal alive now and indefinitely. It’s important that amazing resources like Linux Journal never disappear.”

Improving security as part of accelerated data center migrations

Post Syndicated from Stephen Bowie original https://aws.amazon.com/blogs/security/improving-security-as-part-of-accelerated-data-center-migrations/

Approached correctly, cloud migrations are a great opportunity to improve the security and stability of your applications. Many organizations are looking for guidance on how to meet their security requirements while moving at the speed that the cloud enables. They often try to configure everything perfectly in the data center before they migrate their first application. At AWS Managed Services (AMS), we’ve observed that successful migrations instead establish a secure foundation in the cloud landing zone, and then refine and improve security iteratively from there as workloads grow.

Customers who take a pragmatic, risk-based approach are able to innovate and move workloads more quickly to the cloud. The organizations that migrate fastest start by understanding the shared responsibility model. In the shared responsibility model, Amazon Web Services (AWS) takes responsibility for delivering security controls that might have been the responsibility of customers operating within their legacy data center. Customers can concentrate their activities on the security controls they remain responsible for. The modern security capabilities provided by AWS make this easier.

The most efficient way to migrate is to move workloads to the cloud as early as possible. After the workloads are moved, you can experiment with security upgrades and new security capabilities available in the cloud. This lets you migrate faster and consistently evolve your security approach. The sooner you focus on applying foundational security in the cloud, the sooner you can begin refining and getting comfortable with cloud security and making improvements to your existing workloads.

For example, we recently helped a customer migrate servers that weren’t sufficiently hardened to the Center for Internet Security (CIS) benchmarks. The customer could have attempted hardening on premises before their migration, but that would have required spinning up dedicated infrastructure resources in their data center, a complex, costly, and resource-intensive proposition.

Instead, we migrated their application to the cloud as it was, took snapshots of the servers, and ran the snapshots on an easy-to-deploy, low-cost instance of Amazon Elastic Compute Cloud (Amazon EC2). Using the snapshots, we ran scripts to harden those servers and brought their security scores up to over 90 percent against the CIS benchmarks.
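
As a rough sketch of that pattern using the AWS SDK for Python (our illustration, not the exact tooling used for this customer; the instance ID, instance type, and hardening script path are placeholders), the loop is: create an image from the migrated instance, launch a low-cost test instance from it, and run the hardening script there via Systems Manager.

```python
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# 1. Snapshot the migrated server as an AMI (placeholder instance ID).
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="pre-hardening-snapshot")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch a low-cost test instance from the snapshot.
test = ec2.run_instances(ImageId=image["ImageId"], InstanceType="t3.small",
                         MinCount=1, MaxCount=1)
instance_id = test["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])

# 3. Run the hardening script on the test instance via Systems Manager
#    (requires the SSM agent and an instance profile with SSM permissions).
ssm.send_command(InstanceIds=[instance_id],
                 DocumentName="AWS-RunShellScript",
                 Parameters={"commands": ["/opt/hardening/apply-cis-baseline.sh"]})
```

If the hardened test instance passes its checks, traffic can be switched over to it; if not, the original instance keeps running while the issues are fixed, which is exactly the low-risk loop described above.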

Using this method to migrate let the customer migrate their existing system to the cloud quickly, then test hardening methods against the snapshots. If the application hadn’t run properly after hardening, the customer could have continued running on the legacy OS while fixing the issues at their own pace. Fortunately, the application ran seamlessly on the hardened snapshot of the OS. The customer switched to the hardened infrastructure without incurring downtime and with none of the risks or costs of trying to do it in their data center.

Migrations are great opportunities to uplift the security of your infrastructure and applications. It’s often more efficient to try migrating and break something rather than attempting to get everything right before starting. For example, dependence on legacy protocols, such as Server Message Block (SMB) v1, should be fixed by the customer or their migration partner as part of the initial migration. The same is true for servers missing required endpoint security agents. AWS Professional Services and AMS help customers identify these risks during migrations, and help them to isolate and mitigate them as an integral part of the migration.

The key is to set priorities appropriately. Reviewing control objectives early in the process is essential. Many on-premises data centers operate on security policies that are 20 years old or more. Legacy policies often clash with current security best practices, or lack the ability to take advantage of security capabilities that are native to the cloud. Mapping objectives to cloud capabilities can provide opportunities to meet or exceed existing security policies by using new controls and tools. It can also help identify what’s critical to fix right away.

In many cases, controls can be retired because cloud security makes them irrelevant. For example, in AMS, privileged credentials, such as Local Administrator and sudo passwords, are either randomized or made unusable via policy. This removes the need to manage and control those types of credentials. Using AWS Directory Service for Microsoft Active Directory reduces the risk exposure of domain controllers for the resource forest and automates activities, such as patching, that would otherwise require privileged access. By using AWS Systems Manager to automate common operational tasks, 96 percent of our operations are performed via automation, which significantly reduces the need for humans to access infrastructure, in line with one of the AWS Well-Architected design principles.
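
As a small, hedged example of what such automation can look like (placeholder instance ID; assumes the instance is managed by Systems Manager), a routine restart can be run through a managed Automation document rather than by someone logging in with privileged credentials:

```python
import boto3

ssm = boto3.client("ssm")

# Restart an instance through a managed Automation document instead of SSH/RDP.
execution = ssm.start_automation_execution(
    DocumentName="AWS-RestartEC2Instance",                # AWS-managed Automation document
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},   # placeholder instance ID
)
print(execution["AutomationExecutionId"])
```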

It’s also important to address the people and process aspects of security. Although the cloud can improve your security posture, you should implement current security best practices to help mitigate new risks that might emerge in the future. Migration is a great opportunity to refresh and practice your security response process, and take advantage of the increased agility and automation of security capabilities in the cloud. At AMS, we welcome every opportunity to simulate security events with our customers as part of a joint game day, allowing our teams to practice responding to security events together.

Or as John Brigden, Vice President of AMS, recently said in a blog post, “Traditional, centralized IT prioritized security and control over speed and flexibility. Outsourced IT could exacerbate this problem by adding layers of bureaucracy to the system. The predictable result was massive growth in shadow IT. Cloud-native, role-based solutions such as AWS Identity and Access Management (IAM), Amazon CloudWatch, and AWS CloudTrail work together to enable enterprise governance and security with appropriate flexibility and control for users.”

In most cases, if it’s possible to migrate even a small application to the cloud early, it will be more efficient and less costly than waiting until all security issues have been addressed before migrating. To learn how using AMS to operate in the cloud can deliver a 243 percent return on investment, download the Forrester Total Economic Impact™ study.

You can use native AWS and third-party security services to inspect and harden your infrastructure. Most importantly, you can get a feel for security operations in the cloud—how things change, how they stay the same, and what is no longer a concern. When it comes to accelerating your migration securely, let the cloud do the heavy lifting.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Migration & Transfer forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Stephen Bowie

Based in Seattle, Stephen leads the AMS Security team, a global team of engineers who live and breathe security, striving around the clock to keep our customers safe. Stephen’s 20-year career in security includes time with Deloitte, Microsoft, and Cutter & Buck. Outside of work, he is happiest sailing, travelling, or watching football with his family.
