Protecting Personal Data in Grab’s Imagery

Post Syndicated from Grab Tech original https://engineering.grab.com/protecting-personal-data-in-grabs-imagery

Image Collection Using KartaView

A few years ago, we recognised a strong need to better understand the streets where our drivers and clients go, both to fulfil their needs better and to adapt quickly to the rapidly changing environment of Southeast Asian cities.

One way to fulfil that demand was to create an image collection platform named KartaView, Grab Geo's platform for geotagged imagery. It supports the collection, indexing, storage, and retrieval of imagery, as well as map data extraction.

KartaView is a public, partially open-sourced product, used both internally and externally by the OpenStreetMap community and other users. As of 2021, KartaView has public imagery in over 100 countries with varying degrees of coverage, including 60+ cities in Southeast Asia. Check it out at www.kartaview.com.

Figure 1 – KartaView platform

Why Image Blurring is Important

The collected images incidentally capture many people and licence plates, and their privacy is a serious concern. We deeply respect that, and consequently we use image obfuscation as the most effective anonymisation method for ensuring privacy protection.

Because manually annotating the regions of each picture where faces and licence plates appear is impractical at our scale, the problem has to be solved with machine learning and engineering techniques. We therefore automatically detect and blur all faces and licence plates, which could be considered personal data.

Figure 2 – Sample blurred picture

In our case, we have a wide range of picture types: regular planar pictures, very wide field-of-view pictures, and 360-degree pictures in equirectangular format collected with 360 cameras. Also, because we collect imagery globally, the vehicle types, licence plates, and human environments are quite diverse in appearance and are not handled well by off-the-shelf blurring software. So we built our own custom blurring solution, which yielded higher accuracy and better cost-efficiency overall with respect to blurring of personal data.

Figure 3 – Example of equirectangular image where personal data has to be blurred

Behind the scenes, KartaView runs a set of services that derive useful information from the pictures, such as image quality, traffic signs, and roads. Many of them use deep learning algorithms, which could in principle be negatively affected by running on blurred pictures. In fact, based on the assessment we have done so far, the impact is extremely low, similar to the one reported in a well-known study of face obfuscation in ImageNet [9].

Outline of Grab’s Blurring Process

Roughly, the processing steps are the following:

  1. Transform each picture into a set of planar images, so that every picture, whatever its original format, is processed the same way.
  2. Run an object detector that finds all faces and licence plates in a planar image with a standard field of view.
  3. Transform the coordinates of the detected regions back into the original image's coordinates and blur those regions (a minimal blurring sketch follows Figure 4).
Figure 4 – Picture’s processing steps [8]
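
Blurring itself is a simple local operation once the regions are known. Below is a minimal sketch of the blurring in step 3, assuming OpenCV and NumPy; in practice the boxes come from the detector and the coordinate transform, and the exact obfuscation kernel we use is not shown here.

import cv2
import numpy as np

def blur_regions(image: np.ndarray, boxes, ksize: int = 51) -> np.ndarray:
    """Blur each (x1, y1, x2, y2) box in-place with a Gaussian kernel."""
    for x1, y1, x2, y2 in boxes:
        roi = image[y1:y2, x1:x2]
        image[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return image

# Example with a dummy image and one hypothetical detection box
picture = np.zeros((1080, 1920, 3), dtype=np.uint8)
blurred = blur_regions(picture, [(100, 200, 220, 320)])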

In the following section, we describe in detail the interesting aspects of the second step, the challenges involved, and how we solved them. Let’s start with the first and most important part: the dataset.

Dataset

Our current dataset consists of images from a wide range of cameras, including normal perspective cameras from mobile phones, wide field of view cameras and also 360 degree cameras.

It is the result of a series of data collections contributed by Grab’s data tagging teams and covers the two label classes of interest to us: FACE and LICENSE_PLATE.

The data was collected using Grab internal tools and stored in queryable databases, which lets us revisit and correct the data when necessary, and lets data engineers select and filter the data of interest.

Dataset Evolution

Each iteration of the dataset was built to address issues discovered while running models in a production environment and observing situations where they lacked performance.

               Dataset v1    Dataset v2    Dataset v3
Nr. of images      15,226        17,636        30,538
Nr. of labels      64,119        86,676       242,534

The first version was basic, built with a rough tagging strategy, and we quickly noticed that models trained on it were not detecting a special situation that appeared due to the pandemic: people wearing masks.

This led to another round of data annotation to include those scenarios.
The third iteration addressed a broader range of issues:

  • Small regions of interest (objects far away from the camera)
  • Objects in very dark backgrounds
  • Rotated objects or even upside down
  • Variation of the licence plate design due to images from different countries and regions
  • People wearing masks
  • Faces reflected in mirrors, for example in a motorcycle’s side mirror
  • The main driver, however, was a scenario in which the recording, typically at its start or end, contained close-ups of the operator checking the camera. This produced images with very large regions of interest containing the camera operator’s face, too large to be detected by the model.

An investigation of the dataset structure, splitting the data into bins based on bounding-box sizes (in pixels), made one thing clear: the dataset was unbalanced.

We binned tag sizes with a stride of 100 pixels, going up to the maximum present in the dataset, a single sample of about 2,000 pixels. The majority of labels were small, and the larger the size, the fewer tags we had. This made it clear that we needed more targeted annotations to try to balance the dataset.
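
For illustration, here is a minimal Python sketch of this kind of binning, assuming boxes given as (x1, y1, x2, y2) in pixels and taking the longer side as the tag size (the exact size definition used internally may differ):

import collections

def bbox_size_histogram(labels, stride=100):
    """Count labels per size bin, where size is max(width, height) in pixels."""
    bins = collections.Counter()
    for x1, y1, x2, y2 in labels:
        size = max(x2 - x1, y2 - y1)
        bins[(size // stride) * stride] += 1
    return dict(sorted(bins.items()))

# Example: mostly small boxes and one very large one
labels = [(0, 0, 40, 60), (10, 10, 90, 150), (5, 5, 1900, 2005)]
print(bbox_size_histogram(labels))   # {0: 1, 100: 1, 2000: 1}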

All these scenarios required the tagging team to revisit the data multiple times and to change the tagging strategy, including borderline tags that had previously been left out. It also required them to pay more attention to small details that may have been missed in a previous iteration.

Data Splitting

To better understand the strategy chosen for splitting the data, we also need to understand its source. The images come from different devices used in different geographic locations (different countries) and belong to continuous trip recordings. The annotation team used an internal tool to visualise the trips image by image and mark the faces and licence plates present in them. We then have access to all those images and their respective metadata.

The chosen ratios for splitting are:

  • Train 70%
  • Validation 10%
  • Test 20%
                     Train    Validation      Test
Number of images    12,733         1,682     3,221
Number of labels    60,630         7,658    18,388

The split is not trivial, as several conditions have to be satisfied (a sketch of a group-aware split follows the list below):

  • An image can have multiple tags from one or both classes but must belong to just one subset.
  • The tags should be split as close as possible to the desired ratios.
  • Images belonging to the same trip are geographically close, so they must be forced into the same subset; otherwise near-identical tags would end up in both train and test sets and bias the evaluation.
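
A group-aware split of this kind can be sketched with scikit-learn's GroupShuffleSplit, using the trip identifier as the group key. This only approximates our internal tooling: GroupShuffleSplit balances the ratios over trips rather than over individual tags, so the resulting tag ratios are approximate.

from sklearn.model_selection import GroupShuffleSplit

def split_by_trip(image_ids, trip_ids, seed=42):
    """Split images roughly 70/10/20 while keeping each trip in a single subset."""
    # First carve out the ~70% train portion.
    gss = GroupShuffleSplit(n_splits=1, train_size=0.70, random_state=seed)
    train_idx, rest_idx = next(gss.split(image_ids, groups=trip_ids))
    # Split the remaining ~30% into validation (1/3 -> ~10%) and test (2/3 -> ~20%).
    rest_groups = [trip_ids[i] for i in rest_idx]
    gss2 = GroupShuffleSplit(n_splits=1, train_size=1 / 3, random_state=seed)
    val_rel, test_rel = next(gss2.split(rest_idx, groups=rest_groups))
    return (train_idx,
            [rest_idx[i] for i in val_rel],
            [rest_idx[i] for i in test_rel])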

Data Augmentation

Data augmentation plays a crucial role in training the machine learning model. It can be applied in three main ways:

  1. Offline data augmentation – enriching a dataset by duplicating some of its images on disk and applying modifications to them.
  2. Online data augmentation – on-the-fly modifications of the images at training time, with a configurable probability for each modification.
  3. A combination of both offline and online data augmentation.

In our case, we use the third option, a combination of the two.

The first method contributing to offline augmentation is image view splitting. It is necessary because of our different image types: perspective camera images, wide field-of-view images, and 360-degree images in equirectangular format. These formats and fields of view, each with its own distortions, would complicate the data, making it harder for the model to generalise and harder to support new image types that could be added in the future.

To address this, we defined the concept of an image view: a portion of an image extracted with predefined properties, for example the perspective projection of a 75-by-75-degree field-of-view patch of the original image.
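
The projection itself is standard gnomonic (rectilinear) mapping. The sketch below, written with plain NumPy and nearest-neighbour sampling, extracts one such 75-degree view from an equirectangular panorama; it illustrates the idea rather than our production code, which also remaps tag coordinates and handles other camera types.

import numpy as np

def equirect_view(pano: np.ndarray, yaw_deg: float, pitch_deg: float,
                  fov_deg: float = 75.0, out_size: int = 512) -> np.ndarray:
    """Extract a perspective (rectilinear) view from an equirectangular panorama."""
    H, W = pano.shape[:2]
    f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2,
                       np.arange(out_size) - out_size / 2)
    # Ray directions in camera space, rotated by pitch (x-axis) then yaw (y-axis).
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ (Ry @ Rx).T
    # Convert rays to spherical coordinates and sample the panorama.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])            # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))           # [-pi/2, pi/2]
    px = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    py = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[py, px]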

Here we can see a perspective camera image and the image views generated from it:

Figure 5 – Original image
Figure 6 – Two image views generated

The important thing here is that each generated view is an image in its own right, with its associated tags. Views also share an overlapping area, so the same tag can appear in two views from slightly different perspectives. This is an indirect augmentation benefit of the first offline method.

The second method for offline augmentation is oversampling some of the images (views). As mentioned above, we faced the problem of an unbalanced dataset: we were missing tags that occupied large regions of the image, and even though our tagging teams annotated as many as they could find, these were still scarce.

Because our object detection model is an anchor-based detector, we did not even have enough of them to generate the anchor boxes correctly. This could be seen clearly in the accuracy of previously trained models, which performed poorly on the large-size bins.

By randomly oversampling images that contained big tags, up to a minimum required number, we obtained better anchors and increased the recall for those scenarios. As described below, the object detector chosen for blurring is YOLOv4, which offers a large variety of online augmentations; the ones we use are saturation, exposure, hue, flip, and mosaic.
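
A rough sketch of the offline oversampling step described above; the size threshold for what counts as a "big tag" and the minimum count are illustrative values, not our production settings.

import random

def oversample_large_tags(samples, min_count=500, size_threshold=300, seed=0):
    """Duplicate samples containing at least one large tag until min_count is reached."""
    rng = random.Random(seed)
    def has_large_tag(boxes):
        return any(max(x2 - x1, y2 - y1) >= size_threshold
                   for x1, y1, x2, y2 in boxes)
    # `samples` is a list of (image_path, boxes) pairs.
    large = [s for s in samples if has_large_tag(s[1])]
    n_large = len(large)
    oversampled = list(samples)
    while large and n_large < min_count:
        oversampled.append(rng.choice(large))
        n_large += 1
    return oversampled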

Model

As of the summer of 2021, the go-to solution for object detection in images is the convolutional neural network (CNN), a mature approach able to fulfil our needs efficiently.

Architecture

Most CNN based object detectors have three main parts: Backbone, Neck and (Dense or Sparse Prediction) Heads. From the input image, the backbone extracts features which can be combined in the neck part to be used by the prediction heads to predict object bounding-boxes and their labels.

Figure 7 – Anatomy of one and two-stage object detectors [1]

The backbone is usually a CNN classification network pretrained on some dataset, such as ImageNet-1K. The neck combines features from different layers to produce rich representations for both large and small objects. Because the objects to be detected vary in size, the topmost features are too coarse to represent smaller objects, so the first CNN-based object detectors were fairly weak at detecting small objects. Since a multi-scale, pyramidal feature hierarchy is inherent to CNNs, [2] introduced the Feature Pyramid Network (FPN), which at marginal cost combines features from multiple scales and makes predictions at each of them. This technique, or improved variants of it, is used by most detectors today. The head part predicts the bounding boxes and their labels.
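
To make the neck idea concrete, here is a small PyTorch sketch using torchvision's FeaturePyramidNetwork. Note that YOLOv4 itself uses an SPP block and a PANet-style neck (discussed below) rather than this exact module; the sketch only illustrates how multi-scale backbone features are fused to a common representation. The feature map shapes are hypothetical.

from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

# Hypothetical backbone feature maps at strides 8, 16 and 32 for a 512x512 input.
features = OrderedDict(
    p3=torch.randn(1, 256, 64, 64),
    p4=torch.randn(1, 512, 32, 32),
    p5=torch.randn(1, 1024, 16, 16),
)
# The FPN projects every level to a common channel count and adds top-down fusion,
# so predictions for small objects benefit from semantically rich coarse levels.
fpn = FeaturePyramidNetwork(in_channels_list=[256, 512, 1024], out_channels=256)
outputs = fpn(features)
for name, feat in outputs.items():
    print(name, tuple(feat.shape))   # every level now has 256 channels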

YOLO belongs to the family of anchor-based one-stage object detectors and was originally developed in Darknet, an open-source neural network framework written in C and CUDA. Back in 2015, it was the first end-to-end differentiable network of this kind to offer joint learning of object bounding boxes and their labels.

One reason for the big success of the newer YOLO versions is that the authors carefully merged new ideas into one architecture, with the overall speed of the model always the north star.

YOLOv4 introduces several changes to its v3 predecessor:

  • Backbone – CSPDarknet53: YOLOv3 Darknet53 backbone was modified to use Cross Stage Partial Network (CSPNet [5]) strategy, which aims to achieve richer gradient combinations by letting the gradient flow propagate through different network paths.
  • Multiple configurable augmentation and loss function types, so called “Bag of freebies”, which by changing the training strategy can yield higher accuracy without impacting the inference time.
  • Configurable necks and different activation functions, which the authors call the “Bag of specials”.

Insights

For this task, we found that YOLOv4 offered a good compromise between speed and accuracy: it runs at roughly double the speed of a more accurate two-stage detector while maintaining very good overall precision and recall. For blurring, the main metric for model selection is overall recall, while precision and the intersection over union (IoU) of the predicted box come second, since we want to catch all personal data even at the cost of some false positives. With many ways to configure the detector architecture and train it on our own dataset, we conducted several experiments with different backbones, necks, augmentations, and loss functions to arrive at our current solution.
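
For reference, recall at a fixed IoU threshold can be computed as sketched below. This simplified version counts a ground-truth box as found if any prediction overlaps it enough, which is close to, but not exactly, the matching protocol used in standard detection benchmarks.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def recall(gt_boxes, pred_boxes, iou_thr=0.5):
    """Fraction of ground-truth boxes matched by at least one prediction."""
    if not gt_boxes:
        return 1.0
    matched = sum(1 for g in gt_boxes
                  if any(iou(g, p) >= iou_thr for p in pred_boxes))
    return matched / len(gt_boxes)

# One missed face out of two ground-truth boxes -> recall 0.5
print(recall([(0, 0, 10, 10), (50, 50, 80, 90)], [(1, 1, 11, 11)]))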

We faced challenges in training a good model, as the dataset has a large object/box-level scale imbalance, with small objects over-represented. As described in [3] and [4], this affects the scales of the estimated regions and the overall detection performance. Several solutions are proposed in [3]; of these, the SPP [6] blocks and PANet [7] neck used in YOLOv4, together with heavy offline data augmentation, improved the performance of the current model compared with earlier ones.

Our evaluation shows that the model still has some issues:

  • Occlusion of the object, whether by the camera’s field of view, head accessories, or other elements.

These cases would need extra annotation in the dataset, just like the faces or licence plates that are really close to the camera and occupy a large region of interest in the image.

  • Because we have a limited number of annotations of objects very close to the camera, the model has learnt this bias and sometimes produces false positives in such situations.

Again, one solution for this would be to include more of these scenarios in the dataset.

What’s Next?

Grab spends a lot of effort ensuring privacy protection for its users so we are always looking for ways to further improve our related models and processes.

As far as efficiency is concerned, there are multiple directions to consider for both the dataset and the model. There are two main factors that drive the costs and the quality: further development of the dataset for additional edge cases (e.g. more training data of people wearing masks) and the operational costs of the model.

As the vast majority of current models require a fully labelled dataset, a large annotation effort by the data entry team is needed before each new model. Our dataset grew 4x for its third version, and there is still room for improvement, as described in the Dataset section.

As Grab extends its operations to more cities, new data is collected and has to be processed, which puts an increased focus on running detection models more efficiently.

Directions we could pursue to increase our efficiency include the following:

  • As plenty of unlabelled data is available from imagery collection, a natural direction to explore is self-supervised visual representation learning, to derive a general vision backbone with superior transfer performance for downstream tasks such as detection and classification [10].
  • Experiment with optimisation techniques like pruning and quantisation to get a faster model without sacrificing too much accuracy (a small sketch follows this list).
  • Explore new architectures: YOLOv5, EfficientDet or Swin-Transformer for Object Detection.
  • Introduce semi-supervised learning techniques to improve our model performance on the long tail of the data.
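
As a small illustration of the pruning direction (shown in PyTorch, even though YOLOv4 was trained in Darknet), magnitude pruning can be applied per layer as sketched below. Quantisation for a convolution-heavy detector would additionally require calibration data, so it is not shown here.

import torch
import torch.nn.utils.prune as prune

# A toy stand-in for one convolutional block of a detector backbone.
conv = torch.nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(conv, name="weight", amount=0.3)
print(float((conv.weight == 0).float().mean()))   # ~0.30 sparsity

# Make the pruning permanent (removes the re-parametrisation hooks).
prune.remove(conv, "weight")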

References

  1. Alexey Bochkovskiy et al. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv:2004.10934
  2. Tsung-Yi Lin et al. Feature Pyramid Networks for Object Detection. arXiv:1612.03144
  3. Kemal Oksuz et al. Imbalance Problems in Object Detection: A Review. arXiv:1909.00169
  4. Bharat Singh, Larry S. Davis. An Analysis of Scale Invariance in Object Detection – SNIP. arXiv:1711.08189
  5. Chien-Yao Wang et al. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. arXiv:1911.11929
  6. Kaiming He et al. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. arXiv:1406.4729
  7. Shu Liu et al. Path Aggregation Network for Instance Segmentation. arXiv:1803.01534
  8. http://blog.nitishmutha.com/equirectangular/360degree/2017/06/12/How-to-project-Equirectangular-image-to-rectilinear-view.html
  9. Kaiyu Yang et al. A Study of Face Obfuscation in ImageNet. arXiv:2103.06191
  10. Zhenda Xie et al. Self-Supervised Learning with Swin Transformers. arXiv:2105.04553

Join Us

Grab is the leading superapp platform in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across 428 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Welcome to Cloudflare Impact Week

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/welcome-to-cloudflare-impact-week/

If I’m completely honest, Cloudflare didn’t start out as a mission-driven company. When Lee, Michelle, and I first started thinking about starting a company in 2009 we saw an opportunity as the world was shifting from on-premise hardware and software to services in the cloud. It seemed inevitable to us that the same shift would come to security, performance, and reliability services. And, getting ahead of that trend, we could build a great business.

Matthew Prince, Michelle Zatlyn, and Lee Holloway, Cloudflare’s cofounders, in 2009.

One problem we had was that we knew in order to have a great business we needed to win large organizations with big IT budgets as customers. And, in order to do that, we needed to have the data to build a service that would keep them safe. But we only could get data on security threats once we had customers. So we had a chicken and egg problem.

Our solution was to provide a basic version of Cloudflare’s services for free. We reasoned that individual developers and small businesses would sign up for the free service. We’d learn a lot about security threats and performance and reliability opportunities based on their traffic data. And, from that, we would build a service we could sell to large businesses.

And, generally, Cloudflare’s business model made sense. We found that, for the most part, small companies got a low volume of cyber attacks, and so we could charge them a relatively small amount. Large businesses faced more attacks, so we could charge them more.

But what surprised us, and we only discovered because we were providing a free version of our service, was that there was a certain set of small organizations with very limited resources that received very large attacks. Servicing them was what made Cloudflare the mission-driven company we are today.

The Committee to Protect Journalists

If you ever want to be depressed, sign up for the newsletter of the Committee to Protect Journalists (CPJ). They’re the organization that, when a journalist is kidnapped or killed anywhere in the world, negotiates their release or, far too often, recovers their body.

I’d met the director of the organization at an event in early 2012. Not long after, he called me and asked if I wanted to meet three Cloudflare customers who were in town. I didn’t, I have to confess, but Michelle pushed me to take the meeting.

On a rainy San Francisco afternoon the director of CPJ brought three African journalists to our office. All three of them hugged me. One was from Ethiopia, another was from Angola, and for the third, they wouldn’t tell us his name or where he was from because he was “currently being hunted by death squads.”

For the next 90 minutes, I listened to stories of how the journalists were covering corruption in their home countries, how their work put them constantly in harm’s way, how powerful forces worked to silence them, how cyberattacks had been a constant struggle, and how, today, they depended on Cloudflare’s free service to keep their work online. That last bit hit me like a ton of bricks.

After our meeting finished, and we saw the journalists out, with Cloudflare T-shirts and other swag in hand, I turned to Michelle and said, “Whoa. What have we gotten ourselves into?”

Becoming Mission Driven

I’ve thought about that meeting often since. It was the moment I realized that Cloudflare had a mission beyond just being a good business. The Internet was a critically important resource for those three journalists and many others like them. At the same time, forces that sought to limit their work would use cyberattacks to shut them down. While we hadn’t set out to ensure everyone had world-class cybersecurity, regardless of their ability to pay, now it seemed critically important.

With that realization, Cloudflare’s mission came naturally: we aim to help build a better Internet. One where you don’t need to be a giant company to be fast and reliable. And where even a journalist, working on their own against daunting odds, can be secure online.

This is why we’ve prioritized projects that give back to the Internet. We launched Project Galileo, which provides our enterprise-grade services to organizations performing politically or artistically important work. We launched the Athenian Project to help protect elections against cyber attacks. We launched Project Fair Shot to make sure the organizations distributing the COVID-19 vaccine had the technical resources they needed to do so equitably.

And, even on the technical side, we work hard to make the Internet better even when there’s no clear economic benefit to us, or even when it’s against our economic benefit. We don’t monetize user data because it seems clear to us that a better Internet is a more private Internet. We enabled encryption for everyone even though, when we did it, it was the biggest differentiator between our free and paid plans and the number one reason people upgraded. But clearly a better Internet was an encrypted Internet, and it seemed silly that someone should have to pay extra for a little bit of math.

Our First Impact Week

This week we kick off Cloudflare’s first Impact Week. We originally conceived the idea of the week as a way to highlight some of the things we were doing as a company around our environmental, social, and governance (ESG) initiatives. But, as is the nature of innovation weeks at Cloudflare, as soon as we announced it internally our team started proposing new products and features to take some of our existing initiatives even further.

So, over the course of the week, in addition to talking about how we’ve built our network to consume less power we’ll also be demonstrating how we’re increasingly using hyper power-efficient Arm-based servers to achieve even higher levels of efficiency in order to lessen the environmental impact of running the Internet. We’ll launch a new Workers option for developers who want to be more environmentally conscious. And we’ll announce an initiative in partnership with other leading Internet companies that we hope, if broadly adopted, could cut down as much as 25% of global web traffic and the corresponding energy wasted to serve it.

We’ll also focus on how we can bring the Internet to more people. While broadband has been a revolution where it’s available, rural and underserved-urban communities around the world still suffer from slow Internet speeds and limited ISP choice. We can’t completely solve that problem (yet) but we’ll be announcing an initiative that will help with some critical aspects.

Finally, as Cloudflare becomes a larger part of the Internet, we’ll be announcing programs to monitor the network’s health, affirm our commitments to human rights, and extend our protections of critical societal functions like protecting elections.

When I first was trying to convince Michelle that we should start a business together, I pitched her a bunch of ideas. Most of them involved finding a clever way to extract rents from some group or another, often for not much benefit to society at large. Sitting in an Ethiopian restaurant in Central Square, I remember so clearly her saying to me, “Matthew, those are all great business ideas. But they’re not for me. I want to do something where I can be proud of the work we’re doing and the positive impact we’ve made.”

That sentence made me go back to the drawing board. The next business idea I pitched to her turned out to be Cloudflare. Today, Cloudflare’s mission remains helping build a better Internet. And, as we kick off Impact Week, we are proud to continue to live that mission in everything we do.

Hey, Is This Guy Taking Us for Fools!?

Post Syndicated from original https://bivol.bg/%D0%B0%D0%B1%D0%B5-%D1%82%D0%BE%D1%8F-%D0%BD%D0%B0-%D0%BB%D1%83%D0%B4%D0%B8-%D0%BB%D0%B8-%D0%BD%D0%B8-%D0%BF%D1%80%D0%B0%D0%B8-%D0%B1%D0%B5.html

Sunday, 25 July 2021


In 2020, EUROACTIV counted 19 (in words: nineteen) convictions for corruption in Bulgaria. Nineteen. Every last one of them small-time crooks. Purées, tripe-soup joints, rakia stills, baby purées, compotes, all sorts of photos…

Cloudflare’s Handling of an RCE Vulnerability in cdnjs

Post Syndicated from Jonathan Ganz original https://blog.cloudflare.com/cloudflares-handling-of-an-rce-vulnerability-in-cdnjs/

cdnjs provides JavaScript, CSS, image, and font assets for websites to reference, with more than 4,000 libraries available. By using cdnjs, websites can load faster, with less strain on their own origin servers, as files are served directly from Cloudflare’s edge. Recently, a blog post detailed a vulnerability in the way cdnjs’ backend automatically keeps the libraries up to date.

This vulnerability allowed the researcher to execute arbitrary code, granting the ability to modify assets. This blog post details how Cloudflare responded to this report, including the steps we took to block exploitation, investigate potential abuse, and remediate the vulnerability.

This vulnerability is not related to Cloudflare CDN. The cdnjs project is a platform that leverages Cloudflare’s services, but the vulnerability described below relates to cdnjs’ platform only. To be clear, no existing libraries were modified using this exploit. The researcher published a new package which demonstrated the vulnerability and our investigation concluded that the integrity of all assets hosted on cdnjs remained intact.

Disclosure Timeline

As outlined in RyotaK’s blog post, the incident began on 2021-04-06. At around 1100 GMT, RyotaK published a package to npm exploiting the vulnerability. At 1129 GMT, cdnjs processed this package, resulting in a leak of credentials. This triggered GitHub alerting which notified Cloudflare of the exposed secrets.

Cloudflare disabled the auto-update service and revoked all credentials within an hour. In the meantime, our security team received RyotaK’s remote code execution report through HackerOne. A new version of the auto-update tool which prevents exploitation of the vulnerability RyotaK reported was released within 24 hours.

Having taken action immediately to prevent exploitation, we then proceeded to redesign the auto-update pipeline. Work to completely redesign it was completed on 2021-06-03.

Blocking Exploitation

Before RyotaK reported the vulnerability via HackerOne, Cloudflare had already taken action. When GitHub notified us that credentials were leaked, one of our engineers took immediate action and revoked them all. Additionally, the GitHub token associated with this service was automatically revoked by GitHub.

The second step was to bring the vulnerable service offline to prevent further abuse while we investigated the incident. This prevented exploitation but also made it impossible for legitimate developers to publish updates to their libraries. We wanted to release a fixed version of the pipeline used for retrieving and hosting new library versions so that developers could continue to benefit from caching. However, we understood that a stopgap was not a long term fix, and we decided to review the entire current solution to identify a better design that would improve the overall security of cdnjs.

Investigation

Any investigation requires access to logs, and all components of our pipeline generate extensive logs that prove valuable for forensics efforts. Logs produced by the auto-update process are collected in a GitHub repository and sent to our logging pipeline. We also collect and retain logs from cdnjs’ Cloudflare account. Our security team began reviewing this information as soon as we received RyotaK’s initial report. Based on access logs, API token usage, and file modification metadata, we are confident that only RyotaK exploited this vulnerability during his research and only on test files. To rule out abuse, we reviewed the list of source IP addresses that accessed the Workers KV token prior to revoking it and only found one, which belongs to the cdnjs auto-update bot.

The cdnjs team also reviewed files that were pushed to the cdnjs/cdnjs GitHub repository around that time and found no evidence of any other abuse across cdnjs.

Remediating the Vulnerability

Around half of the libraries on cdnjs use npm to auto-update. The primary vector in this attack was the ability to craft a .tar.gz archive containing a symbolic link and publish it to the npm registry. When our pipeline extracted the content, it would follow symlinks and overwrite local files with the pipeline user’s privileges. There are two fundamental issues at play here: an attacker can perform path traversal on the host processing untrusted files, and the process handling the compressed file is overly privileged.

We addressed the path traversal issue by checking that the destination of each file in the tarball is contained within the target directory that the update process has designated for that package. If the file’s full canonical path doesn’t begin with the destination directory’s full path, we log a warning and skip extracting that file. This works fairly well, but, as noted in a comment above this check in the code, if the compressed file uses UTF-8 encoding for filenames, the check may not properly canonicalise the path. If this canonicalisation does not occur, the path may still contain a traversal even though it starts with the correct destination path.
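
The cdnjs pipeline is not written in Python, but the same idea can be sketched with Python's tarfile module: resolve each member's destination, make sure it stays inside the target directory, and treat links with extra suspicion. This is an illustration of the check described above, not the actual cdnjs implementation.

import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tarball, skipping any member that would escape dest_dir."""
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            # Reject traversal via "../" components or absolute paths.
            if not target.startswith(dest_dir + os.sep):
                print(f"warning: skipping suspicious member {member.name!r}")
                continue
            # Reject symlinks/hardlinks whose target points outside dest_dir.
            if member.issym() or member.islnk():
                link_target = os.path.realpath(
                    os.path.join(os.path.dirname(target), member.linkname))
                if not link_target.startswith(dest_dir + os.sep):
                    print(f"warning: skipping suspicious link {member.name!r}")
                    continue
            tar.extract(member, dest_dir)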

To ensure that other vulnerabilities in cdnjs’ publication pipeline cannot be exploited, we configured an AppArmor profile for it. This limits the capabilities of the service, so even if an attacker successfully instructs the process to perform an action, the operating system kernel will deny anything outside of what the profile allows.

For illustration, here’s an example:

/path/to/bin {
  network,
  signal,
  /path/to/child ix,
  /tmp/ r,
  /tmp/cache** rw,
  ...
}

In this example, we only allow the binary (/path/to/bin) to:

  • access all networking
  • use all signals
  • execute /path/to/child (which will inherit the AppArmor profile)
  • read from /tmp
  • read+write under /tmp/cache.

Any attempt to access anything else will be denied. You can find the complete list of capabilities and more information on AppArmor’s manual page.

In the case of cdnjs’ autoupdate tool, we limit execution of applications to a very specific set, and we limit where files can be written.

Fixing the path traversal and implementing the AppArmor profile prevent similar issues from being exploited. However, a single layer of defence wasn’t enough. We decided to completely redesign the auto-update process to isolate each step, as well as each library it processes, thus preventing this entire class of attacks.

Redesigning the system

The main idea behind the redesign of the pipeline was to move away from the monolithic auto-update process. Instead, various operations are done using microservices or daemons which have well-defined scopes. Here’s an overview of the steps:

First, to detect new library versions, two daemons (for both npm and git based updates) are regularly running. Once a new version has been detected, the files will be downloaded as an archive and placed into the incoming storage bucket.

Writing a new version in the incoming bucket triggers a function that adds all the information we need to update the library. The function also generates a signed URL allowing for writing in the outgoing bucket, but only in a specific folder for a given library, reducing the blast radius. Finally, a message is placed into a queue to indicate that the new version of the given library is ready to be published.

A daemon listens for incoming messages and spawns an unprivileged Docker container to handle dangerous operations (archive extraction, minifications, and compression). After the sandbox exits, the daemon will use the signed URL to store the processed files in the outgoing storage bucket.

Finally, multiple daemons are triggered when the finalized package is written to the outgoing bucket. These daemons publish the assets to cdnjs.cloudflare.com and to the main cdnjs repository. The daemons also publish the version specific URL, cryptographic hash, and other information to Workers KV, cdnjs.com, and the API.

In this revised design, exploiting a similar vulnerability would happen in the sandbox (Docker container) context. The attacker would have access to container files, but nothing else. The container is minimal, ephemeral, has no secrets included, and is dedicated to a single library update, so it cannot affect other libraries’ files.
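
As an illustration only, such a sandbox could be launched with the Python Docker SDK roughly as below; the image name, command, and paths are hypothetical and do not reflect cdnjs’ actual configuration.

import docker  # docker-py SDK, assumed available

client = docker.from_env()
logs = client.containers.run(
    image="cdnjs-sandbox:latest",          # minimal image with the processing tools
    command=["process-package", "/in/pkg.tgz", "/out"],
    network_disabled=True,                 # no network access from the sandbox
    read_only=True,                        # immutable root filesystem
    cap_drop=["ALL"],                      # drop all Linux capabilities
    user="nobody",                         # unprivileged user
    volumes={
        "/srv/incoming/pkg.tgz": {"bind": "/in/pkg.tgz", "mode": "ro"},
        "/srv/work/out":         {"bind": "/out",        "mode": "rw"},
    },
    remove=True,                           # ephemeral: deleted after it exits
)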

Our Commitment to Security

Beyond maintaining a vulnerability disclosure program, we regularly perform internal security reviews and hire third-party firms to audit the software we develop. But it is through our vulnerability disclosure program that we receive some of the most interesting and creative reports. Each report has helped us improve the security of our services. In this case, we worked with RyotaK to not only address the vulnerability, but to also ensure that their blog post was detailed and accurate. We invite those that find a security issue in any of Cloudflare’s services to report it to us through HackerOne.

Choosing Your VPC Endpoint Strategy for Amazon S3

Post Syndicated from Jeff Harman original https://aws.amazon.com/blogs/architecture/choosing-your-vpc-endpoint-strategy-for-amazon-s3/

This post was co-written with Anusha Dharmalingam, former AWS Solutions Architect.

Must your Amazon Web Services (AWS) application connect to Amazon Simple Storage Service (S3) buckets, but not traverse the internet to reach public endpoints? Must the connection scale to accommodate bandwidth demands? AWS offers a mechanism called VPC endpoint to meet these requirements. This blog post provides guidance for selecting the right VPC endpoint type to access Amazon S3. A VPC endpoint enables workloads in an Amazon VPC to connect to supported public AWS services or third-party applications over the AWS network. This approach is used for workloads that should not communicate over public networks.

When a workload architecture uses VPC endpoints, the application benefits from the scalability, resilience, security, and access controls native to AWS services. Amazon S3 can be accessed using an interface VPC endpoint powered by AWS PrivateLink or a gateway VPC endpoint. To determine the right endpoint for your workloads, we’ll discuss selection criteria to consider based on your requirements.

VPC endpoint overview

A VPC endpoint is a virtual scalable networking component you create in a VPC and use as a private entry point to supported AWS services and third-party applications. Currently, two types of VPC endpoints can be used to connect to Amazon S3: interface VPC endpoint and gateway VPC endpoint.

When you configure an interface VPC endpoint, an elastic network interface (ENI) with a private IP address is deployed in your subnet. An Amazon EC2 instance in the VPC can communicate with an Amazon S3 bucket through the ENI and the AWS network. Using the interface endpoint, applications in your on-premises data center can easily query S3 buckets over AWS Direct Connect or Site-to-Site VPN. Interface endpoints support a growing list of AWS services. Consult our documentation to find AWS services compatible with interface endpoints powered by AWS PrivateLink.

Gateway VPC endpoints use prefix lists as the IP route target in a VPC route table. This routes traffic privately to Amazon S3 or Amazon DynamoDB. An EC2 instance in a VPC without internet access can still directly read from and/or write to an Amazon S3 bucket. Amazon DynamoDB and Amazon S3 are the services currently accessible via gateway endpoints.

Your internal security policies may have strict rules against communication between your VPC and the internet. To maintain compliance with these policies, you can use VPC endpoint to connect to AWS public services like Amazon S3. To control user or application access to the VPC endpoint and the resources it supports, you can use an AWS Identity and Access Management (AWS IAM) resource policy. This will separately secure the VPC endpoint and accessible resources.

Selecting gateway or interface VPC endpoints

With both interface endpoint and gateway endpoint available for Amazon S3, here are some factors to consider as you choose one strategy over the other.

  • Cost: Gateway endpoints for S3 are offered at no cost, and the routes are managed through route tables. Interface endpoints are priced at $0.01 per AZ per hour; cost depends on the Region, so check current pricing. Data transferred through the interface endpoint is charged at $0.01 per GB (depending on Region).
  • Access pattern: S3 access through gateway endpoints is supported only for resources in the specific VPC to which the endpoint is associated. S3 gateway endpoints do not currently support access from resources in a different Region, a different VPC, or an on-premises (non-AWS) environment. However, if you’re willing to manage a complex custom architecture, you can use proxies. In all scenarios where access comes from resources external to the VPC, S3 interface endpoints provide access to S3 in a secure way.
  • VPC endpoint architecture: Some customers use centralized VPC endpoint architecture patterns, where the interface endpoints are all managed in a central hub VPC for accessing the service from multiple spoke VPCs. This architecture helps reduce the complexity and maintenance of multiple interface VPC endpoints across different VPCs. When using an S3 interface endpoint, you must consider the amount of network traffic that would flow through your network from spoke VPCs to the hub VPC. If the network connectivity between spoke and hub VPCs is set up using a transit gateway or VPC peering, consider the data processing charges (currently $0.02/GB). If VPC peering is used, there is no charge for data transferred between VPCs in the same Availability Zone. However, data transferred between Availability Zones or between Regions will incur charges as defined in our documentation.

In scenarios where you must access S3 buckets securely from on-premises or across Regions, we recommend using an interface endpoint. If you choose a gateway endpoint instead, you must install a fleet of proxies in the VPC to handle transitive routing.

Figure 1. VPC endpoint architecture

  • Bandwidth considerations: When setting up an interface endpoint, choose multiple subnets across multiple Availability Zones to implement high availability. The number of ENIs equals the number of subnets chosen. Interface endpoints offer a throughput of 10 Gbps per ENI with a burst capability of 40 Gbps. If your use case requires higher throughput, contact AWS Support.

Gateway endpoints are route table entries that route your traffic directly from the subnet where the traffic originates to the S3 service. Traffic does not flow through an intermediate device or instance, so there is no throughput limit for the gateway endpoint itself. The initial setup for gateway endpoints consists of specifying the VPC route tables you would like to use to access the service. Route table entries for the destination (prefix list) and target (endpoint ID) are added to the route tables automatically.
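
Both endpoint types can be created with the AWS SDK. The boto3 sketch below shows the difference in required parameters: route tables for a gateway endpoint, subnets and security groups for an interface endpoint. All IDs and the Region are placeholders, not values from this post.

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Gateway endpoint: free, routed via the VPC route tables you specify.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.ap-southeast-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoint: PrivateLink ENIs in the subnets you choose,
# reachable from on-premises over Direct Connect or Site-to-Site VPN.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.ap-southeast-1.s3",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)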

The two architectural options for creating and managing endpoints are:

Single VPC architecture

Using a single VPC, we can configure:

  • Gateway endpoints for VPC resources to access S3
  • VPC interface endpoint for on-premises resources to access S3

The following architecture shows the configuration on how both can be set up in a single VPC for access. This is useful when access from within AWS is limited to a single VPC while still enabling external (non-AWS) access.

Figure 2. Single VPC architecture

DNS configured on-premises will point to the VPC interface endpoint IP addresses. It will forward all traffic from on-premises to S3 through the VPC interface endpoint. The route table configured in the subnet will ensure that any S3 traffic originating from the VPC will flow to S3 using gateway endpoints.

Multi-VPC centralized architecture

In a hub and spoke architecture that centralizes S3 access for multi-Region, cross-VPC, and on-premises workloads, we recommend using an interface endpoint in the hub VPC. The same pattern would also work in multi-account/multi-region design where multiple VPCs require access to centralized buckets.

Note: Firewall appliances that monitor east-west traffic will experience increased load with the Multi-VPC centralized architecture. It may be necessary to use the single VPC endpoint design to reduce impact to firewall appliances.

Figure 3. Multi-VPC centralized architecture

Conclusion

Based on preceding considerations, you can choose to use a combination of gateway and interface endpoints to meet your specific needs. Depending on the account structure and VPC setup, you can support both types of VPC endpoints in a single VPC by using a shared VPC architecture.

With AWS, you can choose between two VPC endpoint types (gateway endpoint or interface endpoint) to securely access your S3 buckets using a private network. In this blog, we showed you how to select the right VPC endpoint using criteria like VPC architecture, access pattern, and cost. To learn more about VPC endpoints and improve the security of your architecture, read Securely Access Services Over AWS PrivateLink.

The Big Conversation Is Not About Geshev

Post Syndicated from Емилия Милчева original https://toest.bg/golemiyat-razgovor-ne-e-za-geshev/

What to do about Ivan Geshev’s flat cap and the man underneath it is not a question for the justice system. It is a political question, with all the agreements that follow from that, including a historic compromise that may prove necessary. In the new parliament there is a real chance that genuine judicial reform will finally happen, not a half-hearted one like that of 2015. It looked like a chimera, but its time has actually come.

Source

An Ilinden–Preobrazhenie Government

Post Syndicated from Емилия Милчева original https://toest.bg/ilindensko-preobrazhensko-pravitelstvo/

The only certain thing about the new government is that there will be one. We do not yet know when, or who the prime minister will be, but we will learn in the first days of August. It will not be a party leader. No one predicts a “long life” for the future cabinet, but no one is divining how short it will be either. Regardless of whether it governs for a few months, a year, or longer, the efficiency of the 98th Bulgarian…

Source

How (Not) to Fall Ill and Die on a Clinical Pathway

Post Syndicated from Светла Енчева original https://toest.bg/kak-ne-se-boleduva-i-umira-po-klinichna-pateka/

The scandal between caretaker health minister Stoycho Katsarov and doctors from the Pirogov emergency hospital has split public opinion in two. To briefly recall what this is about: the health minister dismissed Pirogov’s director Asen Baltov a little over a month after the Ministry of Health released information about hundreds of “fake hospitalisations”. These are patients…

Source

The Future of Plovdiv: Between Concrete Boxes and a City for People

Post Syndicated from Георги Велев original https://toest.bg/budeshteto-na-plovdiv/

Most people have no idea what the term “general development plan” (ОУП) means or how this strategic document affects the development of a given town. Likewise, the question of whether it is sensible for the municipality to take out a loan to finance infrastructure, social, or cultural initiatives, and by what criteria the projects should be chosen, remains outside the attention of…

Source

By the Letters: Bellow, Grbić, Auster

Post Syndicated from Марин Бодаков original https://toest.bg/po-bukvite-bellow-grbic-auster/

From person to person, with a new book in hand, the walk through the letters continues. Twice a month, Marin Bodakov presents three new literary titles and asks exactly how these books change us. “Herzog” by Saul Bellow, translated from English by Boryana Darakchieva, Sofia: List Publishing, 2021. “Herzog” is a novel about the male midlife crisis, about the impossible balance between life in the world of ideas and life in…

Source
