“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”
I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.
This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.
False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes. The false positive rate is the percentage of contacts that don’t result in transmissions. This will be because of several reasons. One, the app’s location and proximity systems — based on GPS and Bluetooth — just aren’t accurate enough, so they will register contacts that never actually happened. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).
False negatives: This is the rate at which the app fails to register a contact when an infection occurs. This also will be because of several reasons. One, errors in the app’s location and proximity systems. Two, transmissions from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate for its app). And three, not every transmission is a result of that precisely defined contact — the virus sometimes travels further.
Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.
Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.
The end result is an app that doesn’t work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.
It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.
EDITED TO ADD: This Brookings essay makes much the same point.
Google and Apple have announced a joint project to create a privacy-preserving COVID-19 contact tracing app. (Details, such as we have them, are here.) It’s similar to the app being developed at MIT, and similar to others being described and developed elsewhere. It’s nice seeing the privacy protections; they’re well thought out.
I was going to write a long essay about the security and privacy concerns, but Ross Anderson beat me to it. (Note that some of his comments are UK-specific.)
First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.
I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.
Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.
Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1-3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.
Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.
Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.
To me, the real problems aren’t around privacy and security. The efficacy of any app-based contact tracing is still unproven. A “contact” from the point of view of an app isn’t the same as an epidemiological contact. And the ratio of infections to contacts is low. We would have to deal with the false positives (being close to someone else, but separated by a partition or other barrier) and the false negatives (not being close to someone else, but contracting the disease through a mutually touched object). And without cheap, fast, and accurate testing, the information from any of these apps isn’t very useful. So I agree with Ross that this is primarily an exercise in that false syllogism: Something must be done. This is something. Therefore, we must do it. It’s techies proposing tech solutions to what is primarily a social problem.
EDITED TO ADD: Susan Landau on contact tracing apps and how they’re being oversold. And Farzad Mostashari, former coordinator for health IT at the Department of Health and Human Services, on contact tracing apps.
As long as 1) every contact does not result in an infection, and 2) a large percentage of people with the disease are asymptomatic and don’t realize they have it, I can’t see how this sort of app is valuable. If we had cheap, fast, and accurate testing for everyone on demand…maybe. But I still don’t think so.
EDITED TO ADD (4/15): More details from Apple and Google.
This post is contributed by Lulu Zhao | Software Development Engineer II, AWS
AWS X-Ray helps developers and DevOps engineers quickly understand how an application and its underlying services are performing. When it’s integrated with AWS App Mesh, the combination makes for a powerful analytical tool.
X-Ray helps to identify and troubleshoot the root causes of errors and performance issues. It’s capable of analyzing and debugging distributed applications, including those based on a microservices architecture. It offers insights into the impact and reach of errors and performance problems.
In this post, I demonstrate how to integrate it with App Mesh.
Overview
App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high application availability.
With App Mesh, it’s easy to maintain consistent visibility and network traffic control for services built across multiple types of compute infrastructure. App Mesh configures each service to export monitoring data and implements consistent communications control logic across your application.
A service mesh is like a communication layer for microservices. All communication between services happens through the mesh. Customers use App Mesh to configure a service mesh that contains virtual services, virtual nodes, virtual routers, and their corresponding routes.
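As a rough sketch of what configuring these objects looks like with the AWS CLI (the mesh and virtual node names are illustrative, and the virtual node spec is assumed to live in a local JSON file):

aws appmesh create-mesh --mesh-name appmesh
aws appmesh create-virtual-node --mesh-name appmesh --virtual-node-name colorgateway-vn --spec file://colorgateway-vn.json

Virtual routers, routes, and virtual services are created with the analogous create-virtual-router, create-route, and create-virtual-service commands.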
However, it’s challenging to visualize the way that request traffic flows through the service mesh while attempting to identify latency and other types of performance issues. This is particularly true as the number of microservices increases.
It’s in exactly this area where X-Ray excels. To show a detailed workflow inside a service mesh, I implemented a tracing extension called X-Ray tracer inside Envoy. With it, I ensure that I’m tracing all inbound and outbound calls that are routed through Envoy.
Traffic routing with color app
The following example shows how X-Ray works with App Mesh. I used the Color App, a simple demo application, to showcase traffic routing.
This app has two Go applications that are instrumented with the AWS X-Ray Go SDK: color-gateway and color-teller. The color-gateway application is exposed to external clients and responds to http://service-name:port/color, which retrieves the color from color-teller. I deployed color-app using Amazon ECS. This image illustrates how color-gateway routes traffic into a virtual router and then into separate nodes using color-teller.
The following image shows client interactions with App Mesh in an X-Ray service map after requests have been made to the color-gateway and to color-teller.
Integration
There are two types of service nodes:
AWS::AppMesh::Proxy is generated by the X-Ray tracing extension inside Envoy.
AWS::ECS::Container is generated by the X-Ray SDK that instruments the color-gateway and color-teller applications.
The service graph arrows show the request workflow, which you may find helpful as you try to understand the relationships between services.
To send Envoy-generated segments to X-Ray, install the X-Ray daemon. The following code example shows how the daemon is added as a container in the ECS task definition.
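The task definition from the original post is not reproduced here; a minimal sketch of the daemon’s container entry looks like the following, with the image, CPU, and memory values as placeholders:

{
  "name": "xray-daemon",
  "image": "amazon/aws-xray-daemon",
  "essential": true,
  "cpu": 32,
  "memoryReservation": 256,
  "portMappings": [
    {
      "containerPort": 2000,
      "protocol": "udp"
    }
  ]
}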
After the Color app successfully launched, I made a request to color-gateway to fetch a color.
First, the Envoy proxy appmesh/colorgateway-vn in front of default-gateway received the request and routed it to the server default-gateway.
Then, default-gateway made a request to server default-colorteller-white to retrieve the color.
Instead of directly calling the color-teller server, the request went to the default-gateway Envoy proxy and the proxy routed the call to color-teller.
That’s the advantage of using the Envoy proxy. Envoy is a self-contained process that is designed to run in parallel with all application servers. All of the Envoy proxies form a transparent communication mesh through which each application sends and receives messages to and from localhost while remaining unaware of the broader network topology.
For App Mesh integration, the X-Ray tracer records the mesh name and virtual node name values and injects them into the segment JSON document. Here is an example:
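The segment document from the post is not reproduced here; a trimmed, illustrative sketch looks roughly like the following, where the IDs and timestamps are made up and the exact key names may differ from what the tracer actually emits:

{
  "name": "default-gateway",
  "id": "70de5b6f19ff9a0a",
  "trace_id": "1-5e1b4151-5ac6c58f4c3e1a2b7d9f0c3d",
  "start_time": 1578910000.0,
  "end_time": 1578910000.5,
  "aws": {
    "app_mesh": {
      "mesh_name": "appmesh",
      "virtual_node_name": "colorgateway-vn"
    }
  }
}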
To enable X-Ray tracing through App Mesh inside Envoy, you must set two environment variable configurations:
ENABLE_ENVOY_XRAY_TRACING
XRAY_DAEMON_PORT
The first one enables X-Ray tracing using 127.0.0.1:2000 as the default daemon endpoint to which generated segments are sent. If the daemon you installed listens on a different port, you can specify a port value to override the default X-Ray daemon port by using the second configuration.
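As a sketch, setting these on the Envoy container in an ECS task definition looks roughly like this (the values simply enable tracing and restate the default daemon port):

"environment": [
  { "name": "ENABLE_ENVOY_XRAY_TRACING", "value": "1" },
  { "name": "XRAY_DAEMON_PORT", "value": "2000" }
]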
Conclusion
Currently, AWS X-Ray supports SDKs written in multiple languages (including Java, Python, Go, .NET and .NET Core, Node.js, and Ruby) to help you implement your services. For more information, see Getting Started with AWS X-Ray.
Linus has released the 4.17 kernel, which will indeed be called “4.17”. “No, I didn’t call it 5.0, even though all the git object count numerology was in place for that. It will happen in the not _too_ distant future, and I’m told all the release scripts on kernel.org are ready for it, but I didn’t feel there was any real reason for it.”
Headline features in this release include improved load estimation in the CPU scheduler, raw BPF tracepoints, lazytime support in the XFS filesystem, full in-kernel TLS protocol support, histogram triggers for tracing, mitigations for the latest Spectre variants, and, of course, the removal of support for eight unloved processor architectures.
The CFP will close on July 30th. Notification of acceptance and non-acceptance will go out within 7 days of the closing of the CFP.
All topics relevant to foundational open-source Linux technologies are welcome. In particular, however, we are looking for proposals including, but not limited to, the following topics:
Low-level container executors and infrastructure
IoT and embedded OS infrastructure
BPF and eBPF filtering
OS, container, IoT image delivery and updating
Building Linux devices and applications
Low-level desktop technologies
Networking
System and service management
Tracing and performance measuring
IPC and RPC systems
Security and Sandboxing
While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome, as long as they have a clear and direct relevance for user-space.
Much has been written on LWN about dynamically instrumenting kernel code. These features are also available to user-space code with a special kind of probe known as a User Statically-Defined Tracing (USDT) probe. These probes provide a low-overhead way of instrumenting user-space code and provide a convenient way to debug applications running in production. In this final article of the BPF and BCC series we’ll look at where USDT probes come from and how you can use them to understand the behavior of your own applications.
Previous attempts to track tainted coins had used either the “poison” or the “haircut” method. Suppose I open a new address and pay into it three stolen bitcoin followed by seven freshly-mined ones. Then under poison, the output is ten stolen bitcoin, while under haircut it’s ten bitcoin that are marked 30% stolen. After thousands of blocks, poison tainting will blacklist millions of addresses, while with haircut the taint gets diffused, so neither is very effective at tracking stolen property. Bitcoin due-diligence services supplant haircut taint tracking with AI/ML, but the results are still not satisfactory.
We discovered that, back in 1816, the High Court had to tackle this problem in Clayton’s case, which involved the assets and liabilities of a bank that had gone bust. The court ruled that money must be tracked through accounts on the basis of first-in, first out (FIFO); the first penny into an account goes to satisfy the first withdrawal, and so on.
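As a rough illustration of the FIFO rule (this is not Shumailov’s implementation, it ignores fees, and it assumes the tainted portion of each input is treated as having arrived first), taint can be propagated through a single transaction by consuming input value in order:

function fifoTaint(inputs, outputs) {
  // Build an ordered queue of value slices from the inputs, tainted slices first
  // within each input (a simplifying assumption).
  var queue = [];
  inputs.forEach(function (input) {
    if (input.taintedValue > 0) {
      queue.push({ amount: input.taintedValue, tainted: true });
    }
    if (input.value > input.taintedValue) {
      queue.push({ amount: input.value - input.taintedValue, tainted: false });
    }
  });

  // The first value in satisfies the first value out (Clayton's case).
  return outputs.map(function (output) {
    var remaining = output.value;
    var tainted = 0;
    while (remaining > 0 && queue.length > 0) {
      var slice = queue[0];
      var take = Math.min(slice.amount, remaining);
      if (slice.tainted) {
        tainted += take;
      }
      slice.amount -= take;
      remaining -= take;
      if (slice.amount === 0) {
        queue.shift();
      }
    }
    return { value: output.value, taintedValue: tainted };
  });
}

With the example above — three stolen coins paid in before seven fresh ones — a withdrawal of four coins comes out three parts stolen and one part clean, and the remaining six are entirely clean, rather than everything being marked 30% stolen as under haircut.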
Ilia Shumailov has written software that applies FIFO tainting to the blockchain and the results are impressive, with a massive improvement in precision. What’s more, FIFO taint tracking is lossless, unlike haircut; so in addition to tracking a stolen coin forward to find where it’s gone, you can start with any UTXO and trace it backwards to see its entire ancestry. It’s not just good law; it’s good computer science too.
This post was contributed by Christoph Kassen, AWS Solutions Architect
With the emergence of microservices architectures, the number of services that are part of a web application has increased a lot. It’s not unusual anymore to build and operate hundreds of separate microservices, all as part of the same application.
Think of a typical e-commerce application that displays products, recommends related items, provides search and faceting capabilities, and maintains a shopping cart. Behind the scenes, many more services are involved, such as clickstream tracking, ad display, targeting, and logging. When handling a single user request, many of these microservices are involved in responding. Understanding, analyzing, and debugging the landscape is becoming complex.
AWS X-Ray provides application-tracing functionality, giving deep insights into all microservices deployed. With X-Ray, every request can be traced as it flows through the involved microservices. This provides your DevOps teams the insights they need to understand how your services interact with their peers and enables them to analyze and debug issues much faster.
With microservices architectures, every service should be self-contained and use the technologies best suited for the problem domain. Depending on how the service is built, it is deployed and hosted differently.
One of the most popular choices for packaging and deploying microservices at the moment is containers. The application and its dependencies are clearly defined, the container can be built on CI infrastructure, and the deployment is simplified greatly. Container schedulers, such as Kubernetes and Amazon Elastic Container Service (Amazon ECS), greatly simplify deploying and running containers at scale.
Running X-Ray on Kubernetes
Kubernetes is an open-source container management platform that automates deployment, scaling, and management of containerized applications.
This post shows you how to run X-Ray on top of Kubernetes to provide application tracing capabilities to services hosted on a Kubernetes cluster. Additionally, X-Ray also works for applications hosted on Amazon ECS, AWS Elastic Beanstalk, Amazon EC2, and even when building services with AWS Lambda functions. This flexibility helps you pick the technology you need while still being able to trace requests through all of the services running within your AWS environment.
The complete code, including a simple Node.js-based demo application, is available in the corresponding aws-xray-kubernetes GitHub repository, so you can quickly get started with X-Ray.
The sample application within the repository consists of two simple microservices, Service-A and Service-B. The following architecture diagram shows how each service is deployed with two Pods on the Kubernetes cluster:
Requests are sent to Service-A from clients.
Service-A then contacts Service-B.
The requests are serviced by Service-B.
Service-B adds a random delay to each request to show different response times in X-Ray.
To test out the sample applications on your own Kubernetes cluster, use the Dockerfiles provided in the GitHub repository, build the two containers, push them to a container registry, and apply the YAML configuration with kubectl to your Kubernetes cluster.
Prerequisites
If you currently do not have a cluster running within your AWS environment, take a look at Amazon Elastic Container Service for Kubernetes (Amazon EKS), or use the instructions from the Manage Kubernetes Clusters on AWS Using Kops blog post to spin up a self-managed Kubernetes cluster.
Security Setup for X-Ray
The nodes in the Kubernetes cluster hosting web application Pods need IAM permissions so that the Pods hosting the X-Ray daemon can send traces to the X-Ray service backend.
The easiest way is to set up a new IAM policy allowing all worker nodes within your Kubernetes cluster to write data to X-Ray. In the IAM console or AWS CLI, create a new policy like the following:
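The policy document itself is not shown here; a minimal sketch grants the two write calls the daemon needs (the wildcard Resource is the simplest form — the original apparently scoped it, hence the account ID adjustment mentioned next):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords"
      ],
      "Resource": "*"
    }
  ]
}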
Adjust the AWS account ID within the resource. Give the policy a descriptive name, such as k8s-nodes-XrayWriteAccess.
Next, attach the policy to the instance profile for the Kubernetes worker nodes. To do so, select the IAM role assigned to your worker instances (check the EC2 console if you are unsure) and attach the IAM policy created earlier to it. You can also attach the IAM policy directly from the command line with the following command:
aws iam attach-role-policy --role-name k8s-nodes --policy-arn arn:aws:iam::000000000000:policy/k8s-nodes-XrayWriteAccess
Build the X-Ray daemon Docker image
The X-Ray daemon is available as a single, statically compiled binary that can be downloaded directly from the AWS website.
The first step is to create a Docker container hosting the X-Ray daemon binary and exposing port 2000 via UDP. The daemon is configured either via command line parameters or a configuration file. The most important option is to bind the daemon to an address reachable from the application Pods so that their tracing requests can be accepted.
To build your own Docker image containing the X-Ray daemon, use the Dockerfile shown below.
# Use Amazon Linux Version 1
FROM amazonlinux:1
# Download latest 2.x release of X-Ray daemon
RUN yum install -y unzip && \
cd /tmp/ && \
curl https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-2.x.zip > aws-xray-daemon-linux-2.x.zip && \
unzip aws-xray-daemon-linux-2.x.zip && \
cp xray /usr/bin/xray && \
rm aws-xray-daemon-linux-2.x.zip && \
rm cfg.yaml
# Expose port 2000 on udp
EXPOSE 2000/udp
ENTRYPOINT ["/usr/bin/xray"]
# No command line parameters by default; use the default configuration
CMD []
This container image is based on Amazon Linux, which results in a small container image. To build and tag the container image, run docker build -t xray:latest . in the directory containing the Dockerfile.
Create an Amazon ECR repository
Create a repository in Amazon Elastic Container Registry (Amazon ECR) to hold your X-Ray Docker image. Your Kubernetes cluster pulls the image from this repository when the X-Ray Pods are deployed.
Use the following CLI command to create your repository or alternatively the AWS Management Console:
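The exact command is not reproduced here; creating the repository and pushing the image built earlier would look roughly like this, with the repository name, account ID, and region as placeholders:

aws ecr create-repository --repository-name xray-daemon
aws ecr get-login-password | docker login --username AWS --password-stdin 000000000000.dkr.ecr.us-east-1.amazonaws.com
docker tag xray:latest 000000000000.dkr.ecr.us-east-1.amazonaws.com/xray-daemon:latest
docker push 000000000000.dkr.ecr.us-east-1.amazonaws.com/xray-daemon:latest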
Make sure that kubectl is configured properly for your cluster so that you can deploy the X-Ray Pods onto it. After the X-Ray Pods are deployed to your Kubernetes cluster, applications can send tracing information to the X-Ray daemon on their host. The biggest advantage is that you do not need to run X-Ray as a sidecar container alongside your application. This simplifies the configuration and deployment of your applications and saves resources on your cluster overall.
To deploy the X-Ray daemon as Pods onto your Kubernetes cluster, run the following from the cloned GitHub repository:
kubectl apply -f xray-k8s-daemonset.yaml
This deploys and maintains an X-Ray Pod on each worker node, which accepts tracing data from your microservices and relays it to the X-Ray service backend. When the container is deployed using a DaemonSet, the X-Ray port is also exposed directly on the host. This way, clients can connect directly to the daemon on their node, which avoids unnecessary network traffic going across your cluster.
Connecting to the X-Ray daemon
To integrate application tracing with your applications, use the X-Ray SDK for one of the supported programming languages:
Java
Node.js
.NET (Framework and Core)
Go
Python
The SDKs provide classes and methods for generating and sending trace data to the X-Ray daemon. Trace data includes information about incoming HTTP requests served by the application, and calls that the application makes to downstream services using the AWS SDK or HTTP clients.
By default, the X-Ray SDK expects the daemon to be available on 127.0.0.1:2000. This needs to be changed in this setup, as the daemon is not part of each Pod but hosted within its own Pod.
The deployed X-Ray DaemonSet exposes all Pods via the Kubernetes service discovery, so applications can use this endpoint to discover the X-Ray daemon. If you deployed to the default namespace, the endpoint is:
xray-service.default
Applications now need to set the daemon address either with the AWS_XRAY_DAEMON_ADDRESS environment variable (preferred) or directly within the SDK setup code:
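The original snippet is not reproduced here; with the Node.js SDK, pointing the tracer at the service endpoint above looks roughly like this, using the daemon’s default port:

AWSXRay.setDaemonAddress('xray-service.default:2000');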
To set up the environment variable, include the following information in your Kubernetes application deployment description YAML. That exposes the X-Ray service address via the environment variable, where it is picked up automatically by the SDK.
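The YAML fragment is not reproduced in full here; the relevant part of the container spec would look roughly like the following, again assuming the default namespace and the daemon’s default port:

env:
  - name: AWS_XRAY_DAEMON_ADDRESS
    value: xray-service.default:2000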
Sending tracing information from your application is straightforward with the X-Ray SDKs. The example code below serves as a starting point to instrument your application with traces. Take a look at the two sample applications in the GitHub repository to see how to send traces from Service A to Service B. The diagram below visualizes the flow of requests between the services.
Because your application is running within containers, enable both the EC2Plugin and ECSPlugin, which gives you information about the Kubernetes node hosting the Pod as well as the container name. Despite the name ECSPlugin, this plugin gives you additional information about your container when running your application on Kubernetes.
var express = require('express');
var AWSXRay = require('aws-xray-sdk');

var app = express();
//...

// Record EC2 instance and container metadata with every trace
AWSXRay.config([AWSXRay.plugins.EC2Plugin, AWSXRay.plugins.ECSPlugin]);

app.use(AWSXRay.express.openSegment('defaultName')); //required at the start of your routes

app.get('/', function (req, res) {
  res.render('index');
});

app.use(AWSXRay.express.closeSegment()); //required at the end of your routes / first in error handling routes
For more information about all options and possibilities to instrument your application code, see the X-Ray documentation page for the corresponding SDK information.
The picture below shows the resulting service map that provides insights into the flow of requests through the microservice landscape. You can drill down here into individual traces and see which path each request has taken.
From the service map, you can drill down into individual requests and see where they originated from and how much time was spent in each service processing the request.
You can also view details about every individual segment of the trace by clicking on it. This displays more details.
On the Resources tab, you will see the Kubernetes Pod picked up by the ECSPlugin, which handled the request, as well as the instance that Pod was running on.
Summary
In this post, I shared how to deploy and run X-Ray on an existing Kubernetes cluster. Using tracing gives you deep insights into your applications to ease analysis and spot potential problems early. With X-Ray, you get these insights for all your applications running on AWS, no matter if they are hosted on Amazon ECS, AWS Lambda, or a Kubernetes cluster.