Developer Sabotages Open-Source Software Package

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/developer-sabotages-open-source-software-package.html

This is a big deal:

A developer has been caught adding malicious code to a popular open-source package that wiped files on computers located in Russia and Belarus as part of a protest that has enraged many users and raised concerns about the safety of free and open source software.

The application, node-ipc, adds remote interprocess communication and neural networking capabilities to other open source code libraries. As a dependency, node-ipc is automatically downloaded and incorporated into other libraries, including ones like Vue.js CLI, which has more than 1 million weekly downloads.

[…]

The node-ipc update is just one example of what some researchers are calling protestware. Experts have begun tracking other open source projects that are also releasing updates calling out the brutality of Russia’s war. This spreadsheet lists 21 separate packages that are affected.

One such package is es5-ext, which provides code for the ECMAScript 6 scripting language specification. A new dependency named postinstall.js, which the developer added on March 7, checks to see if the user’s computer has a Russian IP address, in which case the code broadcasts a “call for peace.”

It constantly surprises non-computer people how much critical software depends on the whims of random programmers who inconsistently maintain software libraries. Between Log4j and this new protestware, it’s becoming a serious vulnerability. The White House tried to start addressing this problem last year, requiring a “software bill of materials” for government software:

…the term “Software Bill of Materials” or “SBOM” means a formal record containing the details and supply chain relationships of various components used in building software. Software developers and vendors often create products by assembling existing open source and commercial software components. The SBOM enumerates these components in a product. It is analogous to a list of ingredients on food packaging. An SBOM is useful to those who develop or manufacture software, those who select or purchase software, and those who operate software. Developers often use available open source and third-party software components to create a product; an SBOM allows the builder to make sure those components are up to date and to respond quickly to new vulnerabilities. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product. Those who operate software can use SBOMs to quickly and easily determine whether they are at potential risk of a newly discovered vulnerability. A widely used, machine-readable SBOM format allows for greater benefits through automation and tool integration. The SBOMs gain greater value when collectively stored in a repository that can be easily queried by other applications and systems. Understanding the supply chain of software, obtaining an SBOM, and using it to analyze known vulnerabilities are crucial in managing risk.
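
For a sense of what such a machine-readable record looks like, here is a heavily abbreviated sketch in the CycloneDX JSON format, one of the common SBOM formats (the component shown is illustrative):

    {
      "bomFormat": "CycloneDX",
      "specVersion": "1.4",
      "components": [
        {
          "type": "library",
          "name": "node-ipc",
          "version": "10.1.1",
          "purl": "pkg:npm/node-ipc@10.1.1"
        }
      ]
    }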

It’s not a solution, but it’s a start.

EDITED TO ADD (3/22): Brian Krebs on protestware.

Cloud Pentesting, Pt. 1: Breaking Down the Basics

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/03/21/cloud-pentesting-pt-1-breaking-down-the-basics/

The concept of cloud computing has been around for a while, but lately — at least in the penetration testing field — more and more customers have been looking to get a pentest done in their cloud deployments. What does that mean? How does that look? What can be tested, and what’s out of scope? Why would I want a pentest in the cloud? Let’s start with the basics here, to hopefully shed some light on what this is all about, and then we’ll get into the thick of it.

Cloud computing is the idea of using software and services that run on the internet as a way for an organization to deploy their once on-premise systems. This isn’t a new concept — in fact, the major vendors, such as Amazon’s AWS, Microsoft’s Azure, and Google’s Cloud Platform, have all been around for about 15 years. Still, cloud sometimes seems like it’s being talked about as if it were invented just yesterday, but we’ll get into that a bit more later.

So, cloud computing means using someone else’s computer, in a figurative or quite literal sense. Simple enough, right?  

Wrong! There are various ways that companies have started to utilize cloud providers, and these all impact how pentests are carried out in cloud environments. Let’s take a closer look at the three primary cloud configurations.

Traditional cloud usage

Some companies have simply lifted infrastructure and services straight from their own on-premise data centers and moved them into the cloud. This looks a whole lot like setting up one virtual private cloud (VPC), with numerous virtual machines, a flat network, and that’s it! While this might not seem like a company is using their cloud vendor to its fullest potential, they’re still reaping the benefits of never having to manage uptime of physical hardware, calling their ISP late at night because of an outage, or worrying about power outages or cooling.

But one inherent problem remains: the company still requires significant staff to maintain the virtual machines and handle operating system updates, software versioning, cipher suite usage, code base fixes, and more. This starts to look a lot like the typical vulnerability management (VM) program, where IT and security continue to own and maintain infrastructure. They work to patch and harden endpoints in the cloud and remain the gate through which changes to the cloud infrastructure must pass.

Cloud-native usage

The other side of cloud adoption is a more mature approach, where a company has devoted time and effort toward transitioning their once on-premise infrastructure to a fully utilized cloud deployment. While this could very well include the use of the typical VPC, network stack, virtual machines, and more, the more mature organization will utilize cloud-native deployments. These could include storage services such as S3, function services, or even cloud-native Kubernetes.

Cloud-native users shift the priorities and responsibilities of IT and security teams so that they no longer act as gatekeepers to prevent the scaling up or out of infrastructure utilized by product teams. In most of these environments, the product teams own the ability to make changes in the cloud without IT and security input. Meanwhile, IT and security focus on proper controls and configurations to prevent security incidents. Patching is exchanged for rebuilds, and network alerting and physical server isolation are handled through automated responses, such as an alert from AWS Config that automatically changes the security group for a resource in the cloud and isolates it for further investigation.
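
As a sketch of what such an automated response could look like, the following hypothetical handler uses the AWS SDK for Rust (aws-sdk-ec2) to swap a flagged network interface into a quarantine security group; the IDs are placeholders, and a real deployment would be triggered by an AWS Config rule rather than called directly:

    use aws_sdk_ec2::Client;

    // Hypothetical remediation: replace a flagged elastic network interface's
    // security groups with a lone quarantine group, isolating the resource
    // for further investigation.
    async fn quarantine(client: &Client, eni_id: &str, quarantine_sg: &str)
        -> Result<(), aws_sdk_ec2::Error>
    {
        client
            .modify_network_interface_attribute()
            .network_interface_id(eni_id) // e.g. "eni-0123..." (placeholder)
            .groups(quarantine_sg)        // replaces the interface's group list
            .send()
            .await?;
        Ok(())
    }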

These types of deployments start to more fully utilize the capabilities of the cloud, such as automated deployment through infrastructure-as-code solutions like AWS CloudFormation. Gone are the days when an organization would deploy Kubernetes on top of a virtual machine just to run containers. Now, cloud-native vendors provide this as a service with AWS’s Elastic Kubernetes Service, Microsoft’s Azure Kubernetes Service, and, for obvious reasons, Google’s Kubernetes Engine. These and other types of cloud-native deployments really help to ease the burden on the organization.

Hybrid cloud

Then there’s hybrid cloud. This is where a customer can set up their on-premise environment to also tie into their cloud environment, or vice versa. One common theme we see is with Microsoft Azure, where Azure AD Connect sync is used to synchronize on-premise Active Directory to Azure AD. This can be very beneficial when the company is using other Software-as-a-Service (SaaS) components, such as Microsoft Office 365.

There are various benefits to utilizing hybrid cloud deployments. Maybe there are specific components that a customer wants to keep in house and support on their own infrastructure. Or perhaps the customer doesn’t yet have experience with how to maintain Kubernetes but is utilizing Google Cloud Platform. The ability to deploy your own services is the key to flexibility, and the cloud helps provide that.

In part two, we’ll take a closer look at how these different cloud deployments impact pentesting in the cloud.

Beingessner: Rust’s Unsafe Pointer Types Need An Overhaul

Post Syndicated from original https://lwn.net/Articles/888693/

Aria Beingessner points out a set of problems with Rust’s conception of unsafe pointers and proposes some fixes in this highly detailed post.

Rust currently says this code is totally cool and fine:

    // Masking off a tag someone packed into a pointer:
    let mut addr = my_ptr as usize;
    addr = addr & !0x1; 
    let new_ptr = addr as *mut T;
    *new_ptr += 10;

This is some pretty bog-standard code for messing with tagged pointers, what’s wrong with that?
[…]

For this to possibly work with Pointer Provenance and Alias Analysis, that
stuff must pervasively infect all integers on the assumption that they
might be pointers. This is a huge pain in the neck for people who are
trying to actually formally define Rust’s memory model, and for people who
are trying to build sanitizers for Rust that catch UB. And I assure you
it’s just as much a headache for all the LLVM and C(++) people too.
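
As a sketch of the direction the post proposes, the tag-masking example would instead stay in pointer space, using a provenance-preserving method like the map_addr the post describes (not part of stable Rust at the time):

    // Mask off the tag bit without round-tripping through usize, so the
    // compiler can keep tracking the pointer's provenance. Note that the
    // write needs an unsafe block, elided in the original snippet.
    let new_ptr = my_ptr.map_addr(|addr| addr & !0x1);
    unsafe { *new_ptr += 10; }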

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/888686/

Security updates have been issued by Debian (bind9, chromium, libgit2, libpano13, paramiko, usbredir, and wordpress), Fedora (expat, kernel, openexr, thunderbird, and wordpress), openSUSE (chromium, frr, and weechat), Red Hat (java-1.7.1-ibm and java-1.8.0-ibm), SUSE (frr), and Ubuntu (imagemagick).

Application security: Cloudflare’s view

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/application-security/

Developers, bloggers, business owners, and large corporations all rely on Cloudflare to keep their applications secure, available, and performant.

To meet these goals, over the last twelve years we have built a smart network capable of protecting many millions of Internet properties. As of March 2022, W3Techs reports that:

“Cloudflare is used by 80.6% of all the websites whose reverse proxy service we know. This is 19.7% of all websites”

Netcraft, another provider who crawls the web and monitors adoption, puts this figure at more than 20M active sites in its latest Web Server Survey (February 2022):

“Cloudflare continues to make strong gains amongst the million busiest websites, where it saw the only notable increases, with an additional 3,200 sites helping to bring its market share up to 19.4%”

The breadth and diversity of the sites we protect, and the billions of browsers and devices that interact with them, give us unique insight into the ever-changing application security trends on the Internet. In this post, we share some of the insights we’ve gathered from the 32 million HTTP requests/second that pass through our network.

Definitions

Before we examine the data, it is useful to define the terminology we use. Throughout this post, we will refer to the following terms:

  • Mitigated Traffic: any eyeball HTTP* request that had a “terminating” action applied to it by the Cloudflare platform. These include actions such as BLOCK and CHALLENGE (for example, CAPTCHAs or JavaScript-based challenges). This does not include requests that had the following actions applied: LOG, SKIP, ALLOW.
  • Bot Traffic/Automated Traffic: any HTTP request identified by Cloudflare’s Bot Management system as being generated by a bot. This includes requests scored between 1 and 29.
  • API Traffic: any HTTP request with a response content type of XML, JSON, gRPC, or similar. Where the response content type is not available, such as for mitigated requests, the equivalent Accept content type (specified by the user agent) is used instead. In this latter case API traffic won’t be fully accounted for, but for insight purposes it still provides a good representation.

Unless otherwise stated, the time frame evaluated in this post is the three-month period from December 1, 2021, to March 1, 2022.

Finally, please note that the data is calculated based only on traffic observed across the Cloudflare network and does not necessarily represent overall HTTP traffic patterns across the Internet.

*When referring to HTTP traffic we mean both HTTP and HTTPS.

Global Traffic Insights

The first thing we can look at is traffic mitigated across all HTTP requests proxied by the Cloudflare network. This will give us a good baseline view before drilling into specific traffic types, such as bot and API traffic.

8% of all Cloudflare HTTP traffic is mitigated

Cloudflare proxies ~32 million HTTP requests per second on average, with more than ~44 million HTTP requests per second at peak. Overall, ~2.5 million requests per second are mitigated by our global network and never reach our caches or the origin servers, ensuring our customers’ bandwidth and compute power are only used for clean traffic.

Site owners using Cloudflare gain access to tools to mitigate unwanted or malicious traffic and allow access to their applications only when a request is deemed clean. This can be done both using fully managed features, such as our DDoS mitigation, WAF managed ruleset or schema validation, as well as custom rules that allow users to define their own filters for blocking traffic.

If we look at the top five Cloudflare features (sources) that mitigated traffic, we get a clear picture of how much each Cloudflare feature is contributing towards helping keep customer sites and applications online and secure:

[Chart: top five sources of mitigated HTTP traffic]

Tabular format for reference:

Source Percentage %
Layer 7 DDoS mitigation 66.0%
Custom WAF Rules 19.0%
Rate Limiting 10.5%
IP Threat Reputation 2.5%
Managed WAF Rules 1.5%

Looking at each mitigation source individually:

  • Layer 7 DDoS mitigation, perhaps unsurprisingly, is the largest contributor to mitigated HTTP requests by total count (66% overall). Cloudflare’s layer 7 DDoS rules are fully managed and don’t require user configuration: they automatically detect a vast array of HTTP DDoS attacks including those generated by the Meris botnet, Mirai botnet, known attack tools, and others. Volumetric DDoS attacks, by definition, create a lot of malicious traffic!
  • Custom WAF Rules contribute to more than 19% of mitigated HTTP traffic. These are user-configured rules defined using Cloudflare’s wirefilter syntax. We explore common rule patterns further down in this post.
  • Our Rate Limiting feature allows customers to define custom thresholds based on application preferences. It is often used as an additional layer of protection for applications against traffic patterns that are too low to be detected as a DDoS attack. Over the time frame analyzed, rate limiting contributed to 10.5% of mitigated HTTP requests.
  • IP Threat Reputation is exposed in the Cloudflare dashboard as Security Level. Based on behavior we observe across the network, Cloudflare automatically assigns a threat score to each IP address. When the threat score is above the specified threshold, we challenge the traffic. This accounts for 2.5% of all mitigated HTTP requests.
  • Our Managed WAF Rules are rules that are handcrafted by our internal security analyst team aimed at matching only against valid malicious payloads. They contribute to about 1.5% of all mitigated requests.

HTTP anomalies are the most common attack vector

If we drill into Managed WAF Rules, we get a clear picture of what type of attack vectors malicious users are attempting against the Internet properties we protect.

The vast majority (over 54%) of HTTP requests blocked by our Managed WAF Rules contain HTTP anomalies, such as malformed method names, null byte characters in headers, non-standard ports, or a content length of zero with a POST request.
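
For illustration, a request exhibiting two of these anomalies at once (no User-Agent header and a content length of zero on a POST) might look like this:

POST /login HTTP/1.1
Host: www.example.com
Content-Length: 0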

Common attack types in this category are shown below. These have been grouped when relevant:

Rule Type: Description
Missing User Agent: These rules will block any request without a User-Agent header. All browsers and legitimate crawlers present this header when connecting to a site; a missing user agent is a common signal of a malicious request.
Not GET, POST or HEAD Method: Most applications only allow standard GET or POST requests (normally used for viewing pages or submitting forms). HEAD requests are also often sent by browsers for security purposes. Customers using our Managed Rules can easily block any other method, which normally results in blocking a large number of vulnerability scanners.
Missing Referer: When users navigate applications, browsers use the Referer header to indicate where they are coming from. Some applications expect this header to always be present.
Non-standard port: Customers can configure Cloudflare Managed Rules to block HTTP requests trying to access ports other than the standard 80 and 443. This is activity normally seen from vulnerability scanners.
Invalid UTF-8 encoding: It is common for attackers to attempt to break an application server by sending “special” characters that are not valid in UTF-8 encoding.

More commonly known and referenced attack vectors such as XSS and SQLi contribute to only about 13% of total mitigated requests. More interestingly, attacks aimed at information disclosure are the third most popular (10%), and software-specific CVE-based attacks account for about 12% of mitigated requests (more than SQLi alone). This highlights both the importance of patching software quickly and the likelihood of CVE proof-of-concepts (PoCs) being used to compromise applications, as with the recent Log4j vulnerability. The top 10 attack vectors by percentage of mitigated requests are shown below:

[Chart: top 10 attack vectors by share of mitigated requests]

Tabular format for reference:

Source Percentage %
HTTP Anomaly 54.5%
Vendor Specific CVE 11.8%
Information Disclosure 10.4%
SQLi 7.0%
XSS 6.1%
File Inclusion 3.3%
Fake Bots 3.0%
Command Injection 2.7%
Open Redirects 0.1%
Other 1.5%

Businesses still rely on IP address-based access lists to protect their assets

In the prior section, we noted that 19% of mitigated requests come from Custom WAF Rules. These are rules that Cloudflare customers have implemented using the wirefilter syntax. At the time of writing, Cloudflare customers had a total of ~6.5 million Custom WAF rules deployed.

It is interesting to look at what rule fields customers are using to identify malicious traffic, as this helps us focus our efforts on what other fully automated mitigations could be implemented to improve the Cloudflare platform.

The most common field, found in approximately 64% of all custom rules, remains the source IP address or fields easily derived from the IP address, such as the client country location. Note that IP addresses are becoming less useful signals for security policies, but they are often the quickest and simplest type of filter to implement during an attack. Customers are also starting to adopt better approaches such as those offered in our Zero Trust portfolio to further reduce reliance on IP address-based fields.

The top 10 fields are shown below:

[Chart: top 10 fields used in Custom WAF Rules]

Tabular format for reference:

Field name Used in % of rules
ip 64.9%
ip_geoip_country 27.3%
http_request_uri 24.1%
http_user_agent 21.8%
http_request_uri_path 17.8%
http_referer 8.6%
cf_client_bot 8.3%
http_host 7.8%
ip_geoip_asnum 5.8%
cf_threat_score 4.4%

Beyond IP addresses, standard HTTP request fields (URI, User-Agent, Path, Referer) tend to be the most popular. Note, also, that across the entire rule corpus, the average rule combines at least three independent fields.
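
As an illustration, a custom rule combining three such fields might look something like this in the dashboard’s rule-expression syntax (all values are placeholders):

    (ip.geoip.country eq "XX" and http.request.uri.path contains "/admin") or ip.src in {203.0.113.0/24}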

Bot Traffic Insights

Cloudflare has long offered a Bot Management solution to allow customers to gain insights into the automated traffic that might be accessing their application. Using Bot Management classification data, we can perform a deep dive into the world of bots.

38% of HTTP traffic is automated

Over the time period analyzed, bot traffic accounted for about 38% of all HTTP requests. This traffic includes bot traffic from hundreds of Verified Bots tracked by Cloudflare, as well as any request that received a bot score below 30, indicating a high likelihood that it is automated.

Overall, when bot traffic matches a security configuration, customers allow 41% of bot traffic to pass to their origins, blocking only 6.4% of automated requests. Remember that this includes traffic coming from Verified Bots like GoogleBot, which ultimately benefits site owners and end users. It’s a reminder that automation in and of itself is not necessarily detrimental to a site.  This is why we segment Verified Bot traffic, and why we give customers a granular bot score, rather than a binary “bot or not bot” indicator. Website operators want the flexibility to be precise with their response to different types of bot traffic, and we can see that they do in fact use this flexibility. Note that our self-serve customers can also decide how to handle bot traffic using our Super Bot Fight Mode feature.
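
For example, a customer with access to Bot Management fields who wants to challenge likely-automated traffic while exempting verified crawlers might deploy a rule along these lines (illustrative syntax):

    cf.bot_management.score lt 30 and not cf.bot_management.verified_bot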

[Chart: actions applied to all bot traffic]

Tabular data for reference:

Action on all bot traffic Percentage %
allow 40.9%
log 31.9%
bypass 19.0%
block 6.4%
jschallenge 0.5%

More than a third of non-verified bot HTTP traffic is mitigated

31% of all bot traffic observed by Cloudflare is not verified, and comes from thousands of custom-built automated tools like scanners, crawlers, and bots built by hackers. As noted above, automation does not necessarily mean these bots are performing malicious actions. If we look at customer responses to identified bot traffic, we find that 38.5% of HTTP requests from non-verified bots are mitigated. This is obviously a much more defensive configuration compared to overall bot traffic actions shown above:

[Chart: actions applied to non-verified bot traffic]

Tabular data for reference:

Action on non-verified bot traffic Percentage %
block 34.0%
log 28.6%
allow 14.5%
bypass 13.2%
managed_challenge 3.7%

You’ll notice that almost 30% of non-verified bot traffic is logged rather than mitigated immediately. We find that many enterprise customers choose not to block bot traffic outright, so that they don’t give a feedback signal to attackers. Rather, they prefer to tag and monitor this traffic and either drop it at a later time or redirect it to alternate content. As targeted attack vectors have evolved, responses to those attacks have had to evolve and become more sophisticated as well. Additionally, nearly 3% of non-verified bot traffic is automatically mitigated by our DDoS protection (connection_close). These requests tend to be part of botnets used to attack customer applications.

API Traffic Insights

Many applications built on the Internet today are not meant to be consumed by humans. Rather, they are intended for computer-to-computer communication. The common way to expose an application for this purpose is to build an Application Programming Interface (API) that can be accessed using HTTP.

Due to the underlying format of the data in transit, API traffic tends to be a lot more structured than standard web application traffic, causing all sorts of problems from a security standpoint. First, the structured data often causes Web Application Firewalls (WAFs) to generate a large number of false positives. Second, due to the nature of APIs, they often go unnoticed, and many companies end up unknowingly exposing old and unmaintained APIs, often referred to as “shadow APIs”.

Below, we look at some differences in API trends compared to the global traffic insights shown above.

10% of API traffic is mitigated at the edge

A good portion of bot traffic is accessing API endpoints, and as discussed previously, API traffic is the fastest growing traffic type on the Cloudflare network, currently accounting for 55% of total requests.

API endpoints globally receive more malicious requests compared to standard web applications (10% vs. 8%), potentially indicating that attackers are focusing more on APIs as an attack surface, as opposed to standard web apps.

Our DDoS mitigation is still the top source of mitigated events for API endpoints, accounting for just over 63% of the total mitigated requests. More interestingly, Custom WAF rules account for 35% compared to 19% when looking at global traffic. Customers have, to date, been heavily using WAF Custom Rules to lock down and validate traffic to API endpoints, although we expect our API Gateway schema validation feature to soon surpass Custom WAF Rules in terms of mitigated traffic.

SQLi is the most common attack vector on API endpoints

If we look at our WAF Managed Rules mitigations on API traffic only, we see notable differences compared to global trends. These differences include a much more equal distribution across different types of attacks and, more noticeably, SQL injection attacks in the top spot.

Command Injection attacks are also much more prominent (14.3%), and vectors such as Deserialization make an appearance, contributing to more than 1% of the total mitigated requests.

[Chart: top attack vectors on API traffic]

Tabular data for reference:

Source Percentage %
SQLi 34.5%
HTTP Anomaly 18.2%
Vendor Specific CVE 14.5%
Command Injection 14.3%
XSS 7.3%
Fake Bots 5.8%
File Inclusion 2.3%
Deserialization 1.2%
Information Disclosure 0.6%
Other 1.3%

Looking ahead

In this post we shared some initial insights around Internet application security trends based on traffic to Cloudflare’s network. Of course, we have only just scratched the surface. Moving forward, we plan to publish quarterly reports with dynamic filters directly on Cloudflare Radar and provide much deeper insights and investigations.

New cities on the Cloudflare global network: March 2022 edition

Post Syndicated from Mike Conlow original https://blog.cloudflare.com/mid-2022-new-cities/

If you follow the Cloudflare blog, you know that we love to add cities to our global map. With each new city we add, we help make the Internet faster, more reliable, and more secure. Today, we are announcing the addition of 18 new cities in Africa, South America, Asia, and the Middle East, bringing our network to over 270 cities globally. We’ll also look closely at how adding new cities improves Internet performance, such as our new locations in Israel, which reduced median response time (latency) from 86ms to 29ms (a 66% improvement) in a matter of weeks for subscribers of one Israeli Internet service provider (ISP).

The Cities

Without further ado, here are the 18 new cities in 10 countries we welcomed to our global network: Accra, Ghana; Almaty, Kazakhstan; Bhubaneshwar, India; Chiang Mai, Thailand; Joinville, Brazil; Erbil, Iraq; Fukuoka, Japan; Goiânia, Brazil; Haifa, Israel; Harare, Zimbabwe; Juazeiro do Norte, Brazil; Kanpur, India; Manaus, Brazil; Naha, Japan; Patna, India; São José do Rio Preto, Brazil; Tashkent, Uzbekistan; Uberlândia, Brazil.

Cloudflare’s ISP Edge Partnership Program

But let’s take a step back and understand why and how adding new cities to our list helps make the Internet better. First, we should reintroduce the Cloudflare Edge Partnership Program. Cloudflare is used as a reverse proxy by nearly 20% of all Internet properties, which means the volume of ISP traffic trying to reach us can be significant. In some cases, as we’ll see in Israel, the distance data needs to travel can also be significant, adding to latency and reducing Internet performance for the user. Our solution is partnering with ISPs to embed our servers inside their network. Not only does the ISP avoid lots of backhaul traffic, but their subscribers also get much better performance because the website is served on-net and close to them geographically. It is a win-win-win.

Consider a large Israeli ISP with which we previously had no local peering in Tel Aviv. Last year, if a subscriber wanted to reach a website on the Cloudflare network, their request had to travel on the Internet backbone – the large carriers that connect networks together on behalf of smaller ISPs – from Israel to Europe before reaching Cloudflare and going back. The map below shows where they were able to find Cloudflare content before our deployment went live: 48% in Frankfurt, 33% in London, and 18% in Amsterdam. That’s a long way!

[Map: where the ISP’s subscriber requests were served before the local deployment]

In January and March 2022, we turned up deployments with the ISP in Tel Aviv and Haifa. Now live, these two locations serve practically all requests from their subscribers locally within Israel. Instead of traveling 3,000 km to reach one of the millions of websites on our network, most requests from Israel now travel 65 km or less. The improvement has been dramatic: we now serve 66% of requests in under 50ms; before the deployment, we couldn’t serve any in under 50ms because the distance was too great. Now, 85% are served in under 100ms; before, we served 66% of requests in under 100ms.

[Chart: request latency distribution before and after the Tel Aviv and Haifa deployments]

As we continue to put dots on the map, we’ll keep putting updates here on how Internet performance is improving. As we like to say, we’re just getting started.

If you’re an ISP that is interested in hosting a Cloudflare cache to improve performance and reduce backhaul, get in touch on our Edge Partnership Program page. And if you’re a software, data, or network engineer – or just the type of person who is curious and wants to help make the Internet better – consider joining our team.

With the pAtok Toward Bright Communism!

Post Syndicated from original https://bivol.bg/%D1%81-%D0%BF%D0%B0%D1%82%D0%BE%D0%BA-%D0%BA%D1%8A%D0%BC-%D1%81%D0%B2%D0%B5%D1%82%D1%8A%D0%BB-%D0%BA%D0%BE%D0%BC%D1%83%D0%BD%D0%B8%D0%B7%D1%8A%D0%BC.html

Monday, March 21, 2022


Which companies built the pAtok (a mocking Bulgarian pun on “potok”, the “stream” of the gas pipeline)? Russian and Belarusian ones. How many leva did we shell out for the pAtok? Around, and indeed over, 3 billion. Is the pAtok of any use to us? Perhaps around the year 2187 it might be…

The 5.17 kernel has been released

Post Syndicated from original https://lwn.net/Articles/887679/

Linus has released the 5.17 kernel.

So we had an extra week at the end of this release cycle, and I’m happy to report that it was very calm indeed. We could probably have skipped it with not a lot of downside, but we did get a few last-minute reverts and fixes in and avoid some brown-paper-bag bugs that would otherwise have been stable fodder, so it’s all good.

Some of the significant features in this release include KCSAN support for the arm64 architecture, the bpf_loop() helper, ID-mapped filesystem mounts, the reference-count tracking infrastructure, a switch to BLAKE2s for the random-number generator, a rewritten network filesystem caching layer, straight-line speculation mitigation, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 5.17 page for more details.

Unlocking QUIC’s proxying potential with MASQUE

Post Syndicated from Lucas Pardue original https://blog.cloudflare.com/unlocking-quic-proxying-potential/

In the last post, we discussed how HTTP CONNECT can be used to proxy TCP-based applications, including DNS-over-HTTPS and generic HTTPS traffic, between a client and target server. This provides significant benefits for those applications, but it doesn’t lend itself to non-TCP applications. And if you’re wondering whether or not we care about these, the answer is an affirmative yes!

For instance, HTTP/3 is based on QUIC, which runs on top of UDP. What if we wanted to speak HTTP/3 to a target server? That requires two things: (1) the means to encapsulate a UDP payload between client and proxy (which the proxy decapsulates and forwards to the target in an actual UDP datagram), and (2) a way to instruct the proxy to open a UDP association to a target so that it knows where to forward the decapsulated payload. In this post, we’ll discuss answers to these two questions, starting with encapsulation.

Encapsulating datagrams

While TCP provides a reliable and ordered byte stream for applications to use, UDP instead provides unreliable messages called datagrams. Datagrams sent or received on a connection are only loosely associated; each one is independent from a transport perspective. Applications that are built on top of UDP can leverage this unreliability for good. For example, low-latency media streaming often does so to avoid lost packets getting retransmitted. This makes sense: on a live teleconference, it is better to receive the most recent audio or video than to start lagging behind while waiting for stale data.

QUIC is designed to run on top of an unreliable protocol such as UDP. QUIC provides its own layer of security, packet loss detection, methods of data recovery, and congestion control. If the layer underneath QUIC duplicates those features, it can cause wasted work or, worse, destructive interference. For instance, QUIC congestion control defines a number of signals that provide input to sender-side algorithms. If layers underneath QUIC affect its packet flows (loss, timing, pacing, etc.), they also affect the algorithm output. Input and output run in a feedback loop, so perturbation of the signals can get amplified. All of this can cause congestion control algorithms to be more conservative in the data rates they use.

If we could speak HTTP/3 to a proxy and leverage a reliable QUIC stream to carry encapsulated datagram payloads, then everything would work. However, the reliable stream interferes with expectations, the most likely outcome being slower end-to-end UDP throughput than we could achieve without tunneling. Stream reliability runs counter to our goals.

Fortunately, QUIC’s unreliable datagram extension adds a new DATAGRAM frame that, as its name plainly says, is unreliable. It has several uses; the one we care about is that it provides a building block for performant UDP tunneling. In particular, this extension has the following properties:

  • DATAGRAM frames are individual messages, unlike a long QUIC stream.
  • DATAGRAM frames do not contain a multiplexing identifier, unlike QUIC’s stream IDs.
  • Like all QUIC frames, DATAGRAM frames must fit completely inside a QUIC packet.
  • DATAGRAM frames are subject to congestion control, helping senders to avoid overloading the network.
  • DATAGRAM frames are acknowledged by the receiver but, importantly, if the sender detects a loss, QUIC does not retransmit the lost data.

The QUIC DATAGRAM specification (“An Unreliable Datagram Extension to QUIC”) will be published as an RFC soon. Cloudflare’s quiche library has supported it since October 2020.
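
To make this concrete, here is a minimal sketch of the datagram API in quiche (error handling elided; conn is assumed to be an established quiche::Connection):

    // Queue an unreliable message; quiche sends it in a DATAGRAM frame,
    // subject to congestion control but never retransmitted on loss.
    conn.dgram_send(b"hello via DATAGRAM")?;

    // Drain any datagrams the peer has sent; dgram_recv returns
    // Err(quiche::Error::Done) once the receive queue is empty.
    let mut buf = [0u8; 1350];
    while let Ok(len) = conn.dgram_recv(&mut buf) {
        println!("received a {len}-byte datagram");
    }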

Now that QUIC has primitives that support sending unreliable messages, we have a standard way to effectively tunnel UDP inside it. QUIC provides the STREAM and DATAGRAM transport primitives that support our proxying goals. Now it is the application layer’s responsibility to describe how to use them for proxying. Enter MASQUE.

MASQUE: Unlocking QUIC’s potential for proxying

Now that we’ve described how encapsulation works, let’s now turn our attention to the second question listed at the start of this post: How does an application initialize an end-to-end tunnel, informing a proxy server where to send UDP datagrams to, and where to receive them from? This is the focus of the MASQUE Working Group, which was formed in June 2020 and has been designing answers since. Many people across the Internet ecosystem have been contributing to the standardization activity. At Cloudflare, that includes Chris (as co-chair), Lucas (as co-editor of one WG document) and several other colleagues.

MASQUE started solving the UDP tunneling problem with a pair of specifications: a definition for how QUIC datagrams are used with HTTP/3, and a new kind of HTTP request that initiates a UDP socket to a target server. These build on the concept of extended CONNECT, which was first introduced for HTTP/2 in RFC 8441 and has now been ported to HTTP/3. Extended CONNECT defines the :protocol pseudo-header that can be used by clients to indicate the intention of the request. The initial use case was WebSockets, but we can repurpose it for UDP, and it looks like this:

:method = CONNECT
:protocol = connect-udp
:scheme = https
:path = /target.example.com/443/
:authority = proxy.example.com

A client sends an extended CONNECT request to a proxy server, which identifies a target server in the :path. If the proxy succeeds in opening a UDP socket, it responds with a 2xx (Successful) status code. After this, an end-to-end flow of unreliable messages between the client and target is possible; the client and proxy exchange QUIC DATAGRAM frames with an encapsulated payload, and the proxy and target exchange UDP datagrams bearing that payload.

[Diagram: a client establishing an end-to-end UDP tunnel to a target via a proxy using extended CONNECT]

Anatomy of Encapsulation

UDP tunneling has a constraint that TCP tunneling does not: the size of messages and how that relates to the path MTU (Maximum Transmission Unit; for more background, see our Learning Center article). The path MTU is the maximum size that is allowed on the path between client and server. The actual maximum is the smallest maximum across all elements at every hop and at every layer, from the network up to the application; all it takes is one component with a small MTU to reduce the path MTU for the entire path. On the Internet, 1,500 bytes is a common practical MTU. When considering tunneling using QUIC, we need to appreciate the anatomy of QUIC packets and frames in order to understand how they add bytes of overhead, which subtracts from our theoretical maximum.

We’ve been talking in terms of HTTP/3, which normally has its own frames (HEADERS, DATA, etc.) that have a common type and length overhead. However, there is no HTTP/3 framing when it comes to DATAGRAM; instead, the bytes are placed directly into the QUIC frame. This frame is composed of two fields. The first field is a variable number of bytes, called the Quarter Stream ID field, which is an encoded identifier that supports independent multiplexed DATAGRAM flows. It does so by binding each DATAGRAM to an HTTP request stream ID. In QUIC, stream IDs use two bits to encode four types of stream. Since request streams are always of one type (client-initiated bidirectional, to be exact), we can divide their ID by four to save space on the wire. Hence the name Quarter Stream ID. The second field is the payload, which contains the end-to-end message content. Here’s how it might look on the wire.

[Diagram: the DATAGRAM frame on the wire, with Quarter Stream ID and payload fields]
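
A tiny, hypothetical helper makes the Quarter Stream ID mapping concrete: client-initiated bidirectional streams have IDs 0, 4, 8, and so on, so dividing by four yields a compact identifier.

    // Hypothetical helper: map an HTTP request stream ID to its Quarter
    // Stream ID. Client-initiated bidirectional streams always have IDs
    // divisible by four (0, 4, 8, ...).
    fn quarter_stream_id(stream_id: u64) -> u64 {
        debug_assert_eq!(stream_id % 4, 0);
        stream_id / 4
    }
    // quarter_stream_id(0) == 0, quarter_stream_id(4) == 1, quarter_stream_id(12) == 3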

If you recall our lesson from the last post, DATAGRAM frames (like all frames) must fit completely inside a QUIC packet. Moreover, since QUIC requires that fragmentation is disabled, QUIC packets must fit completely inside a UDP datagram. This all combines to limit the maximum size of things that we can actually send: the path MTU determines the size of the UDP datagram, then we need to subtract the overheads of the UDP datagram header, QUIC packet header, and QUIC DATAGRAM frame header. For a better understanding of QUIC’s wire image and overheads, see Section 5 of RFC 8999 and Section 12.4 of RFC 9000.

If a sender has a message that is too big to fit inside the tunnel, there are only two options: discard the message or fragment it. Neither is a good option. Clients create the UDP tunnel and are more likely to accurately calculate the real size of the encapsulated UDP datagram payload, thus avoiding the problem. However, a target server is most likely unaware that a client is behind a proxy, so it cannot accommodate the tunneling overhead. It might send a UDP datagram payload that is too big for the proxy to encapsulate. This conundrum is common to all proxy protocols! There’s an art in picking the right MTU size for UDP-based traffic in the face of tunneling overheads. While approaches like path MTU discovery can help, they are not a silver bullet. Choosing conservative maximum sizes can reduce the chances of tunnel-related problems, but this needs to be weighed against being too restrictive. Given a theoretical path MTU of 1,500, once we consider QUIC encapsulation overheads, tunneled messages limited to between 1,200 and 1,300 bytes can be effective. This is especially important when we think about tunneling QUIC itself: RFC 9000 Section 8.1 details how clients that initiate new QUIC connections must send UDP datagrams of at least 1,200 bytes. If a proxy can’t support that, then QUIC will not work in a tunnel.
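
A back-of-the-envelope budget shows where those numbers come from; the header sizes below are illustrative assumptions (an 8-byte connection ID, a 2-byte packet number), not fixed values:

    // Rough tunnel budget for a 1,500-byte path MTU (all sizes assumed).
    let path_mtu: usize = 1500;
    let ipv4_header = 20;               // 40 for IPv6
    let udp_header = 8;
    let quic_packet = 1 + 8 + 2 + 16;   // short-header byte + CID + pkt num + AEAD tag
    let datagram_frame = 1 + 1;         // frame type + small Quarter Stream ID varint
    let budget = path_mtu - ipv4_header - udp_header - quic_packet - datagram_frame;
    println!("encapsulated payload budget ≈ {budget} bytes"); // ≈ 1443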

Nested tunneling for Improved Privacy Proxying

MASQUE gives us the application layer building blocks to support efficient tunneling of TCP or UDP traffic. What’s cool about this is that we can combine these blocks into different deployment architectures for different scenarios or different needs.

One example of this case is nested tunneling via multiple proxies, which can minimize the connection metadata available to each individual proxy or server (one example of this type of deployment is described in our recent post on iCloud Private Relay). In this kind of setup, a client might manage at least three logical connections. First, a QUIC connection between Client and Proxy 1. Second, a QUIC connection between Client and Proxy 2, which runs via a CONNECT tunnel in the first connection. Third, an end-to-end byte stream between Client and Server, which runs via a CONNECT tunnel in the second connection. A real TCP connection only exists between Proxy 2 and Server. If additional Client to Server logical connections are needed, they can be created inside the existing pair of QUIC connections.

[Diagram: nested tunneling through two proxies, with an end-to-end stream from client to server]

Towards a full tunnel with IP tunneling

Proxy support for UDP and TCP already unblocks a huge assortment of use cases, including TLS, QUIC, HTTP, DNS, and so on. But it doesn’t help protocols that use different IP protocols, like ICMP or IPsec Encapsulating Security Payload (ESP). Fortunately, the MASQUE Working Group has also been working on IP tunneling. This is a lot more complex than UDP tunneling, so they first spent some time defining a common set of requirements. The group has recently adopted a new specification to support IP proxying over HTTP. This behaves similarly to the other CONNECT designs we’ve discussed but with a few differences. Indeed, IP proxying support using HTTP as a substrate would unlock many applications that existing protocols like IPsec and WireGuard enable.

At this point, it would be reasonable to ask: “A complete HTTP/3 stack is a bit excessive when all I need is a simple end-to-end tunnel, right?” Our answer is: it depends! CONNECT-based IP proxies use TLS and rely on well-established PKIs for creating secure channels between endpoints, whereas protocols like WireGuard use a simpler cryptographic protocol for key establishment and defer authentication to the application. WireGuard does not support proxying over TCP but can be adapted to work over TCP transports, if necessary. In contrast, CONNECT-based proxies do support TCP and UDP transports, depending on what version of HTTP is used. Despite these differences, these protocols do share similarities. In particular, the actual framing used by the two – be it the TLS record layer or QUIC packet protection for CONNECT-based proxies, or WireGuard encapsulation – is not interoperable, but differs only slightly in wire format. Thus, from a performance perspective, there’s not really much difference.

In general, comparing these protocols is like comparing apples and oranges – they’re fit for different purposes, have different implementation requirements, and assume different ecosystem participants and threat models. At the end of the day, CONNECT-based proxies are better suited to an ecosystem and environment that is already heavily invested in TLS and the existing WebPKI, so we expect CONNECT-based solutions for IP tunnels to become the norm in the future. Nevertheless, it’s early days, so be sure to watch this space if you’re interested in learning more!

Looking ahead

The IETF has chartered the MASQUE Working Group to help design an HTTP-based solution for UDP and IP that complements the existing CONNECT method for TCP tunneling. Using HTTP semantics allows us to use features like request methods, response statuses, and header fields to enhance tunnel initialization, for example by allowing reuse of existing authentication mechanisms or the Proxy-Status field. By using HTTP/3, UDP and IP tunneling can benefit from QUIC’s secure transport, native unreliable datagram support, and other features. Through a flexible design, older versions of HTTP can also be supported, which helps widen the potential deployment scenarios. Collectively, this work brings proxy protocols to the masses.

While the design details of MASQUE specifications continue to be iterated upon, so far several implementations have been developed, some of which have been interoperability tested during IETF hackathons. This running code helps inform the continued development of the specifications. Details are likely to continue changing before the end of the process, but we should expect the overarching approach to remain similar. Join us during the MASQUE WG meeting in IETF 113 to learn more!

The first Asahi Linux alpha release

Post Syndicated from original https://lwn.net/Articles/888524/

The first alpha release of Asahi Linux, a distribution for Apple M1 silicon, has been released.

Keep in mind that this is still a very early, alpha
release. It is intended for developers and power users; if you decide to
install it, we hope you will be able to help us out by filing detailed bug
reports and helping debug issues. That said, we welcome everyone to give it
a try – just expect things to be a bit rough.

A Primer on Proxies

Post Syndicated from Lucas Pardue original https://blog.cloudflare.com/a-primer-on-proxies/

Traffic proxying, the act of encapsulating one flow of data inside another, is a valuable privacy tool for establishing boundaries on the Internet. Encapsulation has an overhead, and Cloudflare and our Internet peers strive to avoid turning it into a performance cost. MASQUE is the latest collaborative effort to design efficient proxy protocols based on IETF standards. We’re already running these at scale in production; see our recent blog post about Cloudflare’s role in iCloud Private Relay for an example.

In this blog post series, we’ll dive into proxy protocols.

To begin, let’s start with a simple question: what is proxying? In this case, we are focused on forward proxying — a client establishes an end-to-end tunnel to a target server via a proxy server. This contrasts with the Cloudflare CDN, which operates as a reverse proxy that terminates client connections and then takes responsibility for actions such as caching, security including WAF, load balancing, etc. With forward proxying, the details about the tunnel, such as how it is established and used, whether or not it provides confidentiality via authenticated encryption, and so on, vary by proxy protocol. Before going into specifics, let’s start with one of the most common tunnels used on the Internet: TCP.

Transport basics: TCP provides a reliable byte stream

The TCP transport protocol is a rich topic. For the purposes of this post, we will focus on one aspect: TCP provides a readable and writable, reliable, and ordered byte stream. Some protocols like HTTP and TLS require a reliable transport underneath them and TCP’s single byte stream is an ideal fit. The application layer reads or writes to this byte stream, but the details about how TCP sends this data “on the wire” are typically abstracted away.

Large application objects are written into a stream, split into many small packets, and sent in order to the network. At the receiver, packets are read from the network and combined back into an identical stream. Networks are not perfect: packets can be lost or reordered. TCP is clever at dealing with this without bothering the application with the details. It just works. A way to visualize this is to imagine a magic paper shredder that can both shred documents and convert shredded papers back into whole documents. Then imagine you and your friend bought a pair of these and decided that it would be fun to send each other shreds.

The one problem with TCP is that when a lost packet is detected at a receiver, the sender needs to retransmit it. This takes time to happen and can mean that the byte stream reconstruction gets delayed. This is known as TCP head-of-line blocking. Applications regularly use TCP via a socket API that abstracts away protocol details; they often can’t tell if there are delays because the other end is slow at sending or if the network is slowing things down via packet loss.

[Diagram: TCP packetizing a byte stream and reassembling it at the receiver]

Proxy Protocols

Proxying TCP is immensely useful for many applications, including, though certainly not limited to, HTTPS, SSH, and RDP. In fact, Oblivious DoH, which is a proxy protocol for DNS messages, could very well be implemented using a TCP proxy, though there are reasons why this may not be desirable. Today, there are a number of different options for proxying TCP end-to-end, including:

  • SOCKS, which runs in cleartext and requires an expensive connection establishment step.
  • Transparent TCP proxies, commonly referred to as performance enhancing proxies (PEPs), which must be on path and offer no additional transport security, and, definitionally, are limited to TCP protocols.
  • Layer 4 proxies such as Cloudflare Spectrum, which might rely on side carriage metadata via something like the PROXY protocol.
  • HTTP CONNECT, which transforms HTTPS connections into opaque byte streams.

While SOCKS and PEPs are viable options for some use cases, when choosing which proxy protocol to build future systems upon, it made most sense to choose a reusable and general-purpose protocol that provides well-defined and standard abstractions. As such, the IETF chose to focus on using HTTP as a substrate via the CONNECT method.

The concept of using HTTP as a substrate for proxying is not new. Indeed, HTTP/1.1 and HTTP/2 have supported proxying TCP-based protocols for a long time. In the following sections of this post, we’ll explain in detail how CONNECT works across different versions of HTTP, including HTTP/1.1, HTTP/2, and the recently standardized HTTP/3.

HTTP/1.1 and CONNECT

In HTTP/1.1, the CONNECT method can be used to establish an end-to-end TCP tunnel to a target server via a proxy server. This is commonly applied to use cases where there is a benefit of protecting the traffic between the client and the proxy, or where the proxy can provide access control at network boundaries. For example, a Web browser can be configured to issue all of its HTTP requests via an HTTP proxy.

A client sends a CONNECT request to the proxy server, asking it to open a TCP connection to the target server and desired port. It looks something like this:

CONNECT target.example.com:80 HTTP/1.1
Host: target.example.com

If the proxy succeeds in opening a TCP connection to the target, it responds with a 2xx range status code. If there is some kind of problem, an error status in the 5xx range can be returned. Once a tunnel is established there are two independent TCP connections; one on either side of the proxy. If a flow needs to stop, you can simply terminate them.
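
A successful response carries no body and simply signals that the tunnel is ready; a typical example (the reason phrase varies by implementation) looks like:

HTTP/1.1 200 Connection Established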

HTTP CONNECT proxies forward data between the client and the target server. The TCP packets themselves are not tunneled, only the data on the logical byte stream. Although the proxy is supposed to forward data and not process it, if the data is plaintext there would be nothing to stop it. In practice, CONNECT is often used to create an end-to-end TLS connection where only the client and target server have access to the protected content; the proxy sees only TLS records and can’t read their content because it doesn’t have access to the keys.

[Diagram: an end-to-end TLS connection carried through an HTTP/1.1 CONNECT tunnel]

Finally, it’s worth noting that after a successful CONNECT request, the HTTP connection (and the TCP connection underpinning it) has been converted into a tunnel. There is no further possibility of issuing other HTTP messages to the proxy itself on that connection.

HTTP/2 and CONNECT

HTTP/2 adds logical streams above the TCP layer in order to support concurrent requests and responses on a single connection. Streams are also reliable and ordered byte streams, operating on top of TCP. Returning to our magic shredder analogy: imagine you wanted to send a book. Shredding each page one after another and rebuilding the book one page at a time is slow, but handling multiple pages at the same time might be faster. HTTP/2 streams allow us to do that. But, as we all know, trying to put too much into a shredder can sometimes cause it to jam.

[Diagram: HTTP/2 streams multiplexed over a single TCP connection]

In HTTP/2, each request and response is sent on a different stream. To support this, HTTP/2 defines frames that contain the stream identifier that they are associated with. Requests and responses are composed of HEADERS and DATA frames which contain HTTP header fields and HTTP content, respectively. Frames can be large. When they are sent on the wire they might span multiple TLS records or TCP segments. Side note: the HTTP WG has been working on a new revision of the document that defines HTTP semantics that are common to all HTTP versions. The terms message, header fields, and content all come from this description.

HTTP/2 concurrency allows applications to read and write multiple objects at different rates, which can improve HTTP application performance, such as web browsing. HTTP/1.1 traditionally dealt with this concurrency by opening multiple TCP connections in parallel and striping requests across these connections. In contrast, HTTP/2 multiplexes frames belonging to different streams onto the single byte stream provided by one TCP connection. Reusing a single connection has benefits, but it still leaves HTTP/2 at risk of TCP head-of-line blocking. For more details, refer to the Perf Planet blog.

HTTP/2 also supports the CONNECT method. In contrast to HTTP/1.1, CONNECT requests do not take over an entire HTTP/2 connection. Instead, they convert a single stream into an end-to-end tunnel. It looks something like this:

:method = CONNECT
:authority = target.example.com:443

If the proxy succeeds in opening a TCP connection, it responds with a 2xx (Successful) status code. After this, the client sends DATA frames to the proxy, and the content of these frames are put into TCP packets sent to the target. In the return direction, the proxy reads from the TCP byte stream and populates DATA frames. If a tunnel needs to stop, you can simply terminate the stream; there is no need to terminate the HTTP/2 connection.

By using HTTP/2, a client can create multiple CONNECT tunnels in a single connection. This can help reduce resource usage (saving the global count of TCP connections) and allows related tunnels to be logically grouped together, ensuring that they “share fate” when either client or proxy need to gracefully close. On the proxy-to-server side there are still multiple independent TCP connections.

[Diagram: multiple CONNECT tunnels multiplexed on one HTTP/2 connection, with separate proxy-to-server TCP connections]

One challenge of multiplexing tunnels on concurrent streams is how to effectively prioritize them. We’ve talked in the past about prioritization for web pages, but the story is a bit different for CONNECT. We’ve been thinking about this and captured considerations in the new Extensible Priorities draft.

QUIC, HTTP/3 and CONNECT

QUIC is a new secure and multiplexed transport protocol from the IETF. QUIC version 1 was published as RFC 9000 in May 2021 and, the next day, we enabled it for all Cloudflare customers.

QUIC is composed of several foundational features. You can think of these like individual puzzle pieces that interlink to form a transport service. This service needs one more piece, an application mapping, to bring it all together.

[Diagram: QUIC’s foundational features assembled into a transport service]

Similar to HTTP/2, QUIC version 1 provides reliable and ordered streams. But QUIC streams live at the transport layer and they are the only type of QUIC primitive that can carry application data. QUIC has no opinion on how streams get used. Applications that wish to use QUIC must define that themselves.

QUIC streams can be long (up to 2^62 – 1 bytes). Stream data is sent on the wire in the form of STREAM frames. All QUIC frames must fit completely inside a QUIC packet. QUIC packets must fit entirely in a UDP datagram; fragmentation is prohibited. These requirements mean that a long stream is serialized to a series of QUIC packets, each sized roughly to the path MTU (Maximum Transmission Unit). STREAM frames provide reliability via QUIC loss detection and recovery: frames are acknowledged by the receiver, and if the sender detects a loss (via missing acknowledgments), QUIC retransmits the lost data. In contrast, TCP retransmits its original segments unchanged. This difference is an important feature of QUIC, letting implementations decide how to repacketize and reschedule lost data.
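The repacketization point is easiest to see in a toy model. The following is purely illustrative Python (assumed sizes, not a real QUIC stack) in which each STREAM frame records its stream ID and byte offset, so lost bytes can later be resent in a packet of any size:

MTU = 1200       # a common conservative packet size before path MTU discovery
OVERHEAD = 32    # rough allowance for packet and frame headers (assumed)

def packetize(stream_id: int, data: bytes, mtu: int = MTU):
    """Cut one long stream into packet-sized STREAM frames."""
    payload = mtu - OVERHEAD
    for offset in range(0, len(data), payload):
        yield {
            "stream_id": stream_id,
            "offset": offset,  # lets the receiver reassemble in order
            "data": data[offset:offset + payload],
        }

packets = list(packetize(0, b"x" * 5000))
# If the packet carrying offset 2336 is lost, only those bytes are resent,
# and they may be repacked into a brand-new packet of a different size.
print(len(packets), "packets for one 5000-byte stream")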

When multiplexing streams, different packets can contain STREAM frames belonging to different stream identifiers. This creates independence between streams and helps avoid the head-of-line blocking caused by packet loss that we see in TCP. If a UDP packet containing data for one stream is lost, other streams can continue to make progress without being blocked by retransmission of the lost stream.

To use our magic shredder analogy one more time: we’re sending a book again, but this time we parallelise our task by using independent shredders. We need to logically associate them together so that the receiver knows the pages and shreds are all for the same book, but otherwise they can progress with less chance of jamming.

HTTP/3 is an example of an application mapping; it describes how streams are used to exchange HTTP settings, QPACK state, and request and response messages. HTTP/3 still defines its own frames, such as HEADERS and DATA, but it is overall simpler than HTTP/2 because QUIC deals with the hard parts. Since HTTP/3 just sees a logical byte stream, its frames can be arbitrarily sized; the QUIC layer handles segmenting HTTP/3 frames over STREAM frames for transmission in packets. HTTP/3 also supports the CONNECT method, which functions identically to CONNECT in HTTP/2: each request stream converts into an end-to-end tunnel.
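To see why arbitrarily sized frames are cheap in HTTP/3, here is a small sketch of the frame encoding described in RFC 9114: a frame is just two QUIC variable-length integers (type and length, per RFC 9000, Section 16) followed by the payload, which the QUIC layer may then slice across STREAM frames however it likes.

def quic_varint(n: int) -> bytes:
    # QUIC variable-length integer: the two high bits of the first
    # byte encode the total field length (1, 2, 4, or 8 bytes).
    if n < 1 << 6:
        return n.to_bytes(1, "big")
    if n < 1 << 14:
        return (n | (0b01 << 14)).to_bytes(2, "big")
    if n < 1 << 30:
        return (n | (0b10 << 30)).to_bytes(4, "big")
    if n < 1 << 62:
        return (n | (0b11 << 62)).to_bytes(8, "big")
    raise ValueError("value out of varint range")

def h3_frame(frame_type: int, payload: bytes) -> bytes:
    # HTTP/3 frame layout: Type (varint), Length (varint), then payload.
    return quic_varint(frame_type) + quic_varint(len(payload)) + payload

frame = h3_frame(0x0, b"hello")   # 0x0 is the DATA frame type
assert frame == b"\x00\x05hello"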

HTTP packetization comparison

We've talked about HTTP/1.1, HTTP/2, and HTTP/3. The diagram below is a convenient way to summarize how HTTP requests and responses get serialized for transmission over a secure transport. The main difference: with TLS, protected records can be split across several TCP segments, while QUIC has no record layer and each packet carries its own protection.

[Diagram: HTTP packetization compared across HTTP/1.1, HTTP/2, and HTTP/3]

Limitations and looking ahead

HTTP CONNECT is a simple and elegant protocol with a tremendous number of application use cases, especially for privacy-enhancing technology. In particular, applications can use it to proxy DNS-over-HTTPS, similar to what's been done for Oblivious DoH, or more generic HTTPS traffic (based on HTTP/1.1 or HTTP/2), among many other things.

However, what about non-TCP traffic? Recall that HTTP/3 is an application mapping for QUIC, and therefore runs over UDP as well. What if we wanted to proxy QUIC? What if we wanted to proxy entire IP datagrams, similar to VPN technologies like IPsec or WireGuard? This is where MASQUE comes in. In the next post, we’ll discuss how the MASQUE Working Group is standardizing technologies to enable proxying for datagram-based protocols like UDP and IP.

The Week (14–19 March)

Post Syndicated from Зорница Христова original https://toest.bg/editorial-14-19-march-2022/

I wake up, switch on the computer, and check whether a miracle has happened. That has been my ritual for the past few days.

I laugh at myself, of course. But consider: the communist regimes of the Eastern Bloc toppled like dominoes, there was a war in Yugoslavia, then two planes flew into the towers of the World Trade Center, then came reports of other terrorist attacks, then the pandemic, then the war in Ukraine... All of these things happened outside my country (or began elsewhere). And they hit it cruelly. Some of them could have been foreseen. Did we foresee them? Do we have any idea where the current crisis is heading?

Емилия Милчева

It is quite likely, for example, that Putin will invest serious resources in changing the regimes of Eastern Europe, say through a combination of information warfare and funding for parties convenient to him. Емилия Милчева's analysis "Сезонът на путлеристите" ("The Season of the Putlerists") registers the growth in support for „Възраждане" ("the shrillest, most bombastic, and most pro-Kremlin-leaning group in the Bulgarian parliament"), but not only that. The future project of former defense minister Стефан Янев will ride the same sentiments, she says. And its chances are not small. Meanwhile, the president does not want NATO aircraft, even though, by his own admission, our army lacks the resources to defend us. So he, too, is waiting for a "miracle." Is he?

Николета Атанасова

One thing can be predicted: the politics of "lying low" invariably turns into the politics of "firefighting," that is, a clumsy, crisis-driven handling of things we should have prepared for. Nowhere is this more obvious than in the reception of Ukrainian refugees. This week the topic unfolds across three consecutive pieces, which make it mercilessly clear what is at stake.

First, Николета Атанасова's new podcast series „Украйна не е загубена" ("Ukraine Is Not Lost") lets you hear with your own ears what the refugee experience sounds like; it gives voice to the real people behind "the number of asylum seekers in our country." I recommend starting there.

Once you have heard the people themselves, read Светла Енчева's article describing the process of integra... wait, what integration process? At the time the article was published, the refugee registration points were not working at full capacity, so refugees could not receive temporary protection status, with everything that follows from it.

Светла goes further and tries to imagine what integration into our society might look like, given the clumsiness of the administrative apparatus, the language barrier, and so on. How is a person supposed to navigate the healthcare system, for example? Or the education system? It cannot integrate even Bulgarian children with a different mother tongue, the author reminds us. I briefly daydream about well-developed Bulgarian-language courses for bilingual teachers becoming standard practice, but nothing of the sort will happen, of course; the matter will be settled "by a miracle."

Венелина Попова

If the word "miracle" has an antonym, it is probably something that obstructs even the normal, rational solutions to problems. Something that does not speed events up but slows them to the point of futility. According to Венелина Попова's article, the State Agency for Refugees (ДАБ) is exactly such a structure.

To the questions put to the chair of ДАБ at her parliamentary hearing, Венелина adds quite a few of her own, concerning both the Agency's effectiveness and the suspicions of internal corruption. It emerges that 100 extra staff positions were allocated for handling the crisis, yet ДАБ not only does nothing special (such as forming mobile teams for the busiest locations) but even refuses to register refugees. Those who entered the country legally and were entitled to a 90-day stay were, it seems, not its problem. And so on.

So much for the current (supposedly unexpected) crisis. And what might lie ahead? Architect Анета Василева's article on China's advance into the architectural geopolitics of Africa points to one possible direction for a radical remaking of the world. China may turn out to be closer than we think, as the largest investor in the underrated but resource-rich African continent. Just as it is currently weighing a buyout of shares in Russian energy companies. And just as Beijing's stance could tip the scales in the war between Russia and Ukraine.

In other words, it is very likely that China's economic weight will keep growing and be felt ever closer to home. And yet, only a week ago, I heard a young China specialist with an economics background say there was no point in working in her field.

Стефан Иванов

No less important than predicting the future is making sense of what has already happened to us, especially when it is a blank spot, something conveniently passed over. That is exactly the case with the war fought in the 1990s in our immediate neighborhood, and exactly why, for the „На второ четене" ("On Second Reading") column, Стефан Иванов reaches for a book on precisely that subject. „Югославия, моя страна" ("Yugoslavia, My Country") by Горан Войнович is a novel that takes history apart into pieces and puts them back together differently. And although it faces backward, it also looks ahead to what is coming: "how the war can end and the human being not only survive, but go on with life."

Емилия Милчева

Meanwhile, one small local miracle did happen. The arrest of former prime minister Бойко Борисов stirred more enthusiasm than the arrival of the New Year. Емилия Милчева, however, calls the operation a test instead, both for the European Public Prosecutor's Office and for those in power here. Whether it leaves the feeling that the fight against corruption is possible after all, or, on the contrary, drags the government's authority down even further, depends on how it unfolds. We shall see.

And after the morning check, if the miracle has once again forgotten to arrive, we had best get to work. There is always something within our power.

On Second Reading: „Югославия, моя страна" ("Yugoslavia, My Country") by Горан Войнович

Post Syndicated from Стефан Иванов original https://toest.bg/na-vtoro-chetene-yugoslaviya-moya-strana-goran-vojnovic/

None of us reads only the newest books. So why are they the only ones written about? „На второ четене" ("On Second Reading") is a column in which we open the lists of books published at least a year ago, read them, and recommend our favorites. The column is part of the partner program Читателски клуб „Тоест" (the Toest Readers' Club). The choice of titles, however, belongs entirely to the authors, Стефан Иванов and Севда Семер, who would recommend these books to you even if they could stroll through the bookstore with you once every two weeks.

"Yugoslavia, My Country" by Горан Войнович

Translated from the Slovenian by Лилия Мързликар, published by ICU, 2019

When we woke up on February 24 to find that Russia had started a war against Ukraine before sunrise, some of the spontaneous, shocked comments I heard and read called it the first event of its kind since the Second World War. My first thought was that the massacre in Srebrenica is even more recent, and the town is less than 500 kilometers from Sofia.

That is why I went looking for a book about the war in Bosnia and Herzegovina. Because what cannot be spoken of must sometimes be spoken of all the more. Novels must be written about it, and read as well. It must be remembered so that, if possible, the catastrophic, vicious, endless repetition can be avoided.

Today there is still a real risk that Bosnia and Herzegovina will fall apart. Literature (not to be confused with propaganda) wants to prevent such collapses. It has been dreaming of the same things for as long as it has existed: moving literal, brutal conflict into the field of the symbolic and the verbal. Part of literature even believed it had succeeded, that real war had come to an end, along with nationalism and fanaticism, and that a definite eternal peace had arrived in which, as a planetary society, we had only the climate crisis left to deal with.

Sometimes literature, too, is shortsighted and naive.

When I learned that the Slovenian writer Горан Войнович had published "Yugoslavia, My Country" at the age of 33, I wondered how he could have written such a complete, detailed, masterly, wise, deep, and panoramic novel. With a sense of humor, at that. In a conversation with Антония Апостолова he says the book "seems to open my most personal wound." And it truly does. I found it hard to believe this was not an autobiography and a confession reaching the bottom of everything that can be told. As a reader, I am still waiting for at least a comparable Bulgarian novel. May I live to see it.

Войнович restores my faith that there is great value and meaning in this kind of risky writing, which tries by every means to leave nothing unsaid: neither the beauty of childhood, nor the national prejudices, nor the family traumas, nor the hopelessness that arrives and will not leave no matter what you do, nor the seemingly inexhaustible capacity of the people of the Balkans to take offense at everything and then take revenge. Войнович does not turn these into stylized, harmless, sentimental levers for wringing out emotion and tears. They have their own weight and force. They are not in quotation marks.

The writer does not speculate or toy with detached or ironic suppositions and phantasmagoria. No. In the same interview quoted above, he says:

… one of literature's abilities is to take official history apart into separate pieces and then assemble from them a rather different history. Literature in fact plays freely with memories, sometimes summoning even the forgotten ones, and in doing so it naturally reveals the true nature of memories, and of histories as well.

He asks, and works hard to answer, where these sticks that keep jamming the wheels of private and shared life come from. Why do they damage the fates of generation after generation in the Balkans, and not only here? And are forgiveness and reconciliation possible? Войнович reconfirms the old and supposedly obvious truth that the imagination and freedom the novel grants as a genre, with its room for a multitude of narratives, values, and understandings, can hardly be matched by any other kind of writing. It is a genuine literary feat to take in the full complexity of what happened in the former Yugoslavia in the 1990s and convey it so directly and openly. The book is read literally in one breath.

But what is the book actually about?

About a son who believes his father is dead, then discovers he is not, and sets out to find him. About a father who stopped being a father in order to be only a soldier, and later a war criminal, a fugitive from the law and from his own conscience. About a mother who stopped being a mother to her son, just as her own mother and father gave up being parents to her. About Yugoslavia, Serbia, Croatia, Macedonia, Montenegro, Bosnia and Herzegovina. About the lost innocence that, past a certain point, you cannot put on even as a mask.

It is also about a dozen people living in a small apartment in wartime, and about how "peaceful sleep was the only privilege they could grant their children":

Probably that is the only reason they never stopped walking on tiptoe, sipped in silence one cezve of Turkish coffee after another, read the newspapers and nodded or shook their heads [...] all of it wordlessly, without letting slip a single sound. They had built a quiet world of their own, in which for a few hours all the telephones, transistor radios, kettles, pressure cookers, intercoms, electric razors, hair dryers, televisions, coffee grinders, food processors, and washing machines fell silent. Every morning these people renounced them all over again, just so as not to wake Йована, Мишо, or Владан by accident; they were so sweet while they slept, as Сава liked to whisper while peeking into the bedroom.

Ultimately, at the dark and painful heart of the novel is the story of how one becomes "a man who sets villages on fire and throws the dead onto heaps, one on top of another":

It was a vicious circle, and for a moment the thought crossed my mind that perhaps I myself was part of this cursed story, and that one day someone might throw my corpse, too, onto a pile of other corpses, into which whoever remains after me will then stare. In this unspoken story the victims became executioners and the executioners became victims, and everyone in this story killed or was killed.

How do you slip out of the vicious circle? How do you take your fate into your own hands? How do you choose not to repeat the same mistake? How can the war end and the human being not only survive, but go on with life?

Войнович's book poses these questions as well, but it is readers, through their everyday actions, who will give their own answers.

Title illustration: © Елена и Лина Кривошиеви
Active donors to „Тоест" receive a permanent 20% discount off the cover price of all titles in the ICU catalog, as well as those of several other Bulgarian publishers, within the partner program Читателски клуб „Тоест". For more information, see toest.bg/club.
