Tag Archives: fm

[$] A filesystem “change journal” and other topics

Post Syndicated from jake original https://lwn.net/Articles/755277/rss

At the 2017 Linux Storage, Filesystem, and Memory-Management Summit
(LSFMM), Amir Goldstein presented his work
on adding a superblock watch mechanism to provide a scalable way to notify
of changes in a filesystem. At the 2018 edition of LSFMM, he was back to
discuss adding an NTFS-like change journal
to the kernel in support of backup solutions of various
sorts. As a second topic for the session, he also wanted to discuss doing
more performance-regression testing
for filesystems.

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. Additionally, the native integration between SNS and Amazon CloudWatch provides visibility into the number of messages delivered, as well as the number of messages filtered out.

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.
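
As an illustration (this code is not from the original post; the ARNs and the "event_type" attribute are made-up placeholders), a filter policy can be set with the AWS SDK for Python (boto3) roughly like this:

import json
import boto3

sns = boto3.client("sns")

# Hypothetical ARNs, for illustration only.
topic_arn = "arn:aws:sns:us-east-1:123456789012:orders"
subscription_arn = topic_arn + ":11111111-2222-3333-4444-555555555555"

# Attach a filter policy: this subscription now receives only matching messages.
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"event_type": ["order_placed"]}),
)

# Publish a message whose attributes match the policy, so it is delivered
# rather than filtered out.
sns.publish(
    TopicArn=topic_arn,
    Message="Order 1234 placed",
    MessageAttributes={
        "event_type": {"DataType": "String", "StringValue": "order_placed"}
    },
)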

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.
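
Since NumberOfNotificationsFailed is the metric most worth alerting on, here is a small boto3 sketch (not from the original post; the topic name and thresholds are hypothetical) that creates a CloudWatch alarm on it:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm as soon as a single delivery to the (hypothetical) "orders" topic
# fails within a five-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="sns-orders-delivery-failures",
    Namespace="AWS/SNS",
    MetricName="NumberOfNotificationsFailed",
    Dimensions=[{"Name": "TopicName", "Value": "orders"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)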

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.
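
The same numbers that the graph shows can also be pulled programmatically. A small boto3 sketch (hypothetical "orders" topic, last 24 hours, using the Sum statistic as in step 5 above):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
end = datetime.utcnow()
start = end - timedelta(days=1)

for name in ("NumberOfMessagesPublished",
             "NumberOfNotificationsDelivered",
             "NumberOfNotificationsFilteredOut"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/SNS",
        MetricName=name,
        Dimensions=[{"Name": "TopicName", "Value": "orders"}],  # hypothetical topic
        StartTime=start,
        EndTime=end,
        Period=3600,          # one datapoint per hour
        Statistics=["Sum"],   # match the Sum statistic used in the console graph
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    print(f"{name}: {total:.0f} in the last 24 hours")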


SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering is available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.


[$] An update on bcachefs

Post Syndicated from jake original https://lwn.net/Articles/755276/rss

The bcachefs filesystem has been under
development for a number of years now; according to lead developer Kent
Overstreet, it is time to start talking about getting the code upstream.
He came to the 2018 Linux Storage, Filesystem, and Memory-Management Summit
(LSFMM) to discuss that in a combined filesystem and storage
session. Bcachefs grew out of bcache, which is a block layer
cache that was merged into Linux 3.10 in mid-2013.

[$] Case-insensitive filesystem lookups

Post Syndicated from jake original https://lwn.net/Articles/754508/rss

Case-insensitive file name lookups are a feature that is fairly frequently
raised at the Linux
Storage, Filesystem, and Memory-Management Summit (LSFMM). At the 2018
summit, Gabriel Bertazi proposed a new way to support
the feature, though it met with a rather skeptical reception—with one
notable exception. Ted Ts’o seemed favorably disposed to the idea, in part because
it would potentially be a way to get rid of some longstanding Android ugliness:

[$] SMB/CIFS compounding support

Post Syndicated from jake original https://lwn.net/Articles/754507/rss

In a filesystem-track session at the 2018 Linux Storage, Filesystem, and
Memory-Management Summit (LSFMM), Ronnie Sahlberg talked about some changes
he has made to add support for compounding to the SMB/CIFS
implementation in Linux. Compounding is a way to combine multiple
operations into a single request that can help reduce network round-trips.

[$] Network filesystem topics

Post Syndicated from jake original https://lwn.net/Articles/754506/rss

At the 2018 Linux Storage, Filesystem, and
Memory-Management Summit (LSFMM), Steve French led a discussion of various
problem areas for network filesystems. Unlike previous sessions (in 2016 and 2017), there was some good news to report
because the long-awaited statx()
system call
was released in Linux 4.11. But there
is still plenty of work to be done to better support network filesystems in Linux.

[$] XFS online filesystem scrubbing and repair

Post Syndicated from jake original https://lwn.net/Articles/754504/rss

In a filesystem track session at the 2018 Linux Storage, Filesystem, and
Memory-Management Summit (LSFMM), Darrick Wong talked about the online
scrubbing and repair features he has been working on. His target has mostly been
XFS, but he has concurrently been working on scrubbing for ext4.
Part of what he wanted to discuss was the possibility of standardizing some
of these interfaces across different filesystem types.

[$] Supporting multi-actuator drives

Post Syndicated from jake original https://lwn.net/Articles/753652/rss

In a combined filesystem and storage session at the 2018 Linux Storage,
Filesystem, and Memory-Management Summit (LSFMM), Tim Walker asked for help
in designing the interface to some new storage hardware. He wanted some
feedback on how a multi-actuator drive
should present itself to the system.
These drives have two (or, eventually, more) sets of read/write heads and
other hardware that can all operate in parallel.

[$] A mapping layer for filesystems

Post Syndicated from jake original https://lwn.net/Articles/753650/rss

In a plenary session on the second day of the Linux Storage, Filesystem,
and Memory-Management Summit (LSFMM), Dave Chinner described his ideas for
a virtual block address-space layer. It would allow “space accounting to be
shared and managed at various layers in the storage stack”. One of the
targets for this work is for filesystems on thin-provisioned devices, where
the filesystem
is larger than the storage devices holding it (and administrators are
expected to add storage as needed); in current systems, running out of
space causes huge problems for filesystems and users because the filesystem
cannot communicate that error in a usable fashion.

[$] Shared memory mappings for devices

Post Syndicated from jake original https://lwn.net/Articles/753481/rss

In a short filesystem-only discussion at the 2018 Linux Storage,
Filesystem, and Memory-Management Summit (LSFMM), Jérôme Glisse wanted to
talk about some (more) changes to support GPUs, FPGAs, and RDMA devices.
In other talks at LSFMM, he discussed
changes to struct page
in support of these kinds of devices, but here he was looking to discuss
other changes
to support mapping a device’s memory into multiple processes. It should be
noted that I had a hard time following the discussion in this session, so
there may be significant gaps in the article.

[$] XFS parent pointers

Post Syndicated from jake original https://lwn.net/Articles/753480/rss

At the 2018 Linux Storage, Filesystem, and Memory-Management Summit
(LSFMM), Allison Henderson led a session to discuss an XFS feature she has been
working on: parent pointers. These would
be pointers stored in extended attributes (xattrs) that would allow various tools to
reconstruct the path for a file from its inode.
In XFS repair scenarios, that path will help with reconstruction as well as
provide users with better information about where the problems lie.

[$] A new API for mounting filesystems

Post Syndicated from jake original https://lwn.net/Articles/753473/rss

The mount()
system call suffers from a number of different shortcomings that have led
some to consider a different API. At last year’s Linux Storage,
Filesystem, and Memory-Management Summit (LSFMM), that someone was
Miklos Szeredi, who led a session to discuss his
ideas for a new filesystem mounting API. Since then, David Howells has been
working with Szeredi and VFS maintainer Al Viro on this API; at the 2018
LSFMM, he presented that work.

[$] get_user_pages() continued

Post Syndicated from corbet original https://lwn.net/Articles/753272/rss

At a plenary session held
relatively early during the 2018 Linux Storage, Filesystem, and
Memory-Management Summit, the developers discussed a number of problems
with the kernel’s get_user_pages() interface. During the waning
hours of LSFMM, a tired (but dedicated) set of developers convened again in
the memory-management track to
continue the discussion and try to push it toward a real solution.

[$] PostgreSQL visits LSFMM

Post Syndicated from jake original https://lwn.net/Articles/752952/rss

The recent fsync() woes
experienced by
PostgreSQL led to a session on the first
day (April 23) of the 2018 Linux
Storage, Filesystem, and Memory-Management Summit (LSFMM). Those problems
also led
to a second-day session with PostgreSQL developer Andres Freund who gave an
overview of how PostgreSQL does I/O and where that ran aground on some
assumptions that had been made. The session led to a fair amount of
discussion with the filesystem-track developers; real solutions seem to be
in the offing.

[$] Fixing error reporting—again

Post Syndicated from jake original https://lwn.net/Articles/752613/rss

After a session at last year’s Linux
Storage, Filesystem, and Memory Management Summit (LSFMM), Jeff Layton was able to
make some improvements to block-layer error
handling. Those changes, which added a new
errseq_t type to hold an error number and sequence number, seemed
to help and were well
received—except by the PostgreSQL developers. So Layton led a session at the 2018 LSFMM to discuss ways
to improve things further; it would be followed later in the week with a
session by one of the PostgreSQL developers to look at the specifics of the
problem from their perspective.

Congratulations to Oracle on MySQL 8.0

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2018/04/congratulations-to-oracle-on-mysql-80.html

Last week, Oracle announced the general availability of MySQL 8.0. This is good news for database users, as it means Oracle is still developing MySQL.

I decided to celebrate the event by doing a quick test of MySQL 8.0. Here follows a step-by-step description of my first experience with MySQL 8.0.
Note that I did the following without reading the release notes, as I have done with every MySQL / MariaDB release to date; in this case it was not the right thing to do.

I pulled MySQL 8.0 from git@github.com:mysql/mysql-server.git
I was pleasantly surprised that ‘cmake . ; make‘ worked without any compiler warnings! I even checked the used compiler options and noticed that MySQL was compiled with -Wall + several other warning flags. Good job MySQL team!

I did have a little trouble finding the mysqld binary as Oracle had moved it to ‘runtime_output_directory’; Unexpected, but no big thing.

Now it was time to install MySQL 8.0.

I did know that MySQL 8.0 has removed mysql_install_db, so I had to use the mysqld binary directly to install the default databases:
(I have specified datadir=/my/data3 in the /tmp/my.cnf file)

> cd runtime_output_directory
> mkdir /my/data3
> ./mysqld --defaults-file=/tmp/my.cnf --install

2018-04-22T12:38:18.332967Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2018-04-22T12:38:18.333109Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2018-04-22T12:38:18.333135Z 0 [ERROR] [MY-010119] [Server] Aborting

A quick look at the mysqld --help --verbose output showed that the right option is --initialize. My bad, let’s try again:

> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:39:31.910509Z 0 [ERROR] [MY-010457] [Server] --initialize specified but the data directory has files in it. Aborting.
2018-04-22T12:39:31.910578Z 0 [ERROR] [MY-010119] [Server] Aborting

Now I used the right options, but it still didn’t work.
I took a quick look around:

> ls /my/data3/
binlog.index

So even though mysqld noticed that the data3 directory was wrong, it still wrote things into it. This happened even though I didn’t have --log-binlog enabled in the my.cnf file. Strange, but easy to fix:

> rm /my/data3/binlog.index
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:40:45.633637Z 0 [ERROR] [MY-011071] [Server] unknown variable ‘max-tmp-tables=100’
2018-04-22T12:40:45.633657Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you’re executing mysql_upgrade to correct the issue.
2018-04-22T12:40:45.633663Z 0 [ERROR] [MY-010119] [Server] Aborting

The warning about the privilege system confused me a bit, but I ignored it for the time being and removed from my configuration files the variables that MySQL 8.0 no longer supports. I couldn’t find a list of the removed variables anywhere, so this was done by trial and error.

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:42:56.626583Z 0 [ERROR] [MY-010735] [Server] Can’t open the mysql.plugin table. Please run mysql_upgrade to create it.
2018-04-22T12:42:56.827685Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table ‘mysql.gtid_executed’ cannot be opened.
2018-04-22T12:42:56.838501Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2018-04-22T12:42:56.848375Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2018-04-22T12:42:56.848863Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we’re sending the information to the error-log instead: MY-001146 – Table ‘mysql.component’ doesn’t exist
2018-04-22T12:42:56.848916Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we’re sending the information to the error-log instead: MY-003543 – The mysql.component table is missing or has an incorrect definition.
2018-04-22T12:42:56.854141Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: ‘8.0.11’ socket: ‘/tmp/mysql.sock’ port: 3306 Source distribution.

I figured out that if there is a single wrong variable in the configuration file, running mysqld --initialize will leave the database in an inconsistent state. NOT GOOD! I am happy I didn’t try this in a production system!

Time to start over from the beginning:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:44:45.548960Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: px)NaaSp?6um
2018-04-22T12:44:51.221751Z 0 [System] [MY-013170] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld (mysqld 8.0.11) initializing of server has completed


I wonder why the temporary password is so complex; it could easily have been something that one could remember without decreasing security, as it’s temporary after all. No big deal, one can always paste it from the logs. (Side note: MariaDB uses socket authentication on many systems and thus doesn’t need temporary installation passwords.)

Now let’s start the MySQL server for real to do some testing:

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:45:43.683484Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: ‘8.0.11’ socket: ‘/tmp/mysql.sock’ port: 3306 Source distribution.

And then let’s start the client:

> ./client/mysql --socket=/tmp/mysql.sock --user=root --password="px)NaaSp?6um"
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Apparently MySQL 8.0 doesn’t work with old MySQL / MariaDB clients by default 🙁

I was testing this on a system with MariaDB installed, like most modern Linux systems today, and didn’t want to use the MySQL clients or libraries.

I decided to try to fix this by changing the authentication to the native (original) MySQL authentication method.

> mysqld --skip-grant-tables

> ./client/mysql --socket=/tmp/mysql.sock --user=root
ERROR 1045 (28000): Access denied for user ‘root’@’localhost’ (using password: NO)

Apparently --skip-grant-tables is not good enough anymore. Let's try again with:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --socket=/tmp/mysql.sock --user=root mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 8.0.11 Source distribution

Great, we are getting somewhere. Now let’s fix “root” to work with the old authentication:

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string=password("test") where user="root";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘(“test”) where user=”root”‘ at line 1

A quick look at the MySQL 8.0 release notes told me that the PASSWORD() function is removed in 8.0. Why???? I don’t know how one is supposed to generate passwords in MySQL 8.0 that are compatible with old installations of MySQL. One could of course start an old MySQL or MariaDB version, execute the password() function, and copy the result.

I decided to fix this the easy way and use an empty password:

(Update: I later discovered that the right way would have been to use: FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'test'; However, I dislike this syntax as it has the password in clear text, which is easy to grab, and the command can’t be used to easily update the mysql.user table. One must also disable --skip-grant-tables mode to use this.)

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string="" where user="root";
Query OK, 1 row affected (0.077 sec)
Rows matched: 1 Changed: 1 Warnings: 0
I restarted mysqld:
> mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.

Ouch, forgot that. Let’s try again:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
MySQL [mysql]> update mysql.user set password_expired="N" where user="root";

Now restart and test worked:

> ./mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql

Finally I had a working account that I could use to create other users!

When looking at mysqld --help --verbose again, I noticed this option:

--initialize-insecure
Create the default database and exit. Create a super user
with empty password.

I decided to check if this would have made things easier:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure

2018-04-22T13:18:06.629548Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Hm. I don’t understand the warning, as --initialize-insecure is not an option that one would use more than once and thus not something one would ‘switch off’.

> ./mysqld --defaults-file=/tmp/my.cnf

> ./client/mysql --user=root --password="" mysql
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Back to the beginning 🙁

To get things to work with old clients, one has to initialize the database with:
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure --default_authentication_plugin=mysql_native_password
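
To avoid passing the authentication option on every start, the plugin choice can also go into the configuration file. A minimal sketch of what the /tmp/my.cnf used above might look like (the datadir is the one from earlier; the authentication line is my addition, not something shown in the original run):

[mysqld]
datadir=/my/data3
default_authentication_plugin=mysql_native_password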

Now I finally had MySQL 8.0 up and running and thought I would take it for a spin by running the “standard” MySQL/MariaDB sql-bench test suite. This was removed in MySQL 5.7, but as I happened to have MariaDB 10.3 installed, I decided to run it from there.

sql-bench is a single-threaded benchmark that measures the “raw” speed of some common operations. It gives you the ‘maximum’ performance for a single query. It’s different from other benchmarks that measure the maximum throughput when you have a lot of users, but sql-bench still tells you a lot about what kind of performance to expect from the database.

I first tried to be clever and create the “test” database, which I needed for sql-bench, with
> mkdir /my/data3/test

but when I tried to run the benchmark, MySQL 8.0 complained that the test database didn’t exist.

MySQL 8.0 has gone away from the original concept of MySQL, where the user can easily
create directories and copy databases into the database directory. This may have serious
implications for anyone doing backups of databases and/or trying to restore a backup with normal OS commands.

I created the ‘test’ database with mysqladmin and then tried to run sql-bench:

> ./run-all-tests --user=root

The first run failed in test-ATIS:

Can’t execute command ‘create table class_of_service (class_code char(2) NOT NULL,rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_code))’
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_’ at line 1

This happened because ‘rank‘ is now a reserved word in MySQL 8.0. This is also reserved in ANSI SQL, but I don’t know of any other database that has failed to run test-ATIS before. I have in the past run it against Oracle, PostgreSQL, Mimer, MSSQL etc without any problems.

MariaDB also has ‘rank’ as a keyword in 10.2 and 10.3 but one can still use it as an identifier.

I fixed test-ATIS and then managed to run all tests on MySQL 8.0.

I ran the test with both MySQL 8.0 and MariaDB 10.3 using the InnoDB storage engine, with identical values for all InnoDB variables, table-definition-cache, and table-open-cache. I turned off the performance schema for both databases. All tests were run with a user with an empty password (to keep things comparable and because it was too complex to generate a password in MySQL 8.0).

The results are as follows.
Results per test, in seconds:

Operation         |MariaDB|MySQL-8|

ATIS              | 153.00| 228.00|
alter-table       |  92.00| 792.00|
big-tables        | 990.00|2079.00|
connect           | 186.00| 227.00|
create            | 575.00|4465.00|
insert            |4552.00|8458.00|
select            | 333.00| 412.00|
table-elimination |1900.00|3916.00|
wisconsin         | 272.00| 590.00|

This is of course just a first view of the performance of MySQL 8.0 in a single user environment. Some reflections about the results:

  • The alter-table test is slower (as expected) in 8.0, as some of the alter tests benefit from the instant add column feature in MariaDB 10.3.
  • The connect test is also better for MariaDB, as we put a lot of effort into speeding this up in MariaDB 10.2.
  • The table-elimination test shows an optimization in MariaDB for the Anchor table model, which MySQL doesn’t have.
  • CREATE and DROP TABLE are almost 8 times slower in MySQL 8.0 than in MariaDB 10.3. I assume this is the cost of ‘atomic DDL’. This may also cause performance problems for any thread using the data dictionary while another thread is creating/dropping tables.
  • When looking at the individual test results, MySQL 8.0 was slower in almost every test, in many cases significantly slower.
  • The only test where MySQL was faster was “update_with_key_prefix”. I checked this and noticed that there was a bug in the test: the column was updated to its original value (which should be instant with any storage engine). This is an old bug that MySQL has found and fixed and that we had not been aware of in the test or in MariaDB.
  • While writing this, I noticed that MySQL 8.0 is now using utf8mb4 as the default character set instead of latin1. This may affect some of the benchmarks slightly (not by much, as most tests work with numbers and Oracle claims that utf8mb4 is only 20% slower than latin1), but it needs to be verified.
  • Oracle claims that MySQL 8.0 is much faster on multi user benchmarks. The above test indicates that they may have done this by sacrificing single user performance.
  • We need to run more, and many different, benchmarks to better understand exactly what is going on. Stay tuned!

Short summary of my first run with MySQL 8.0:

  • Using the new caching_sha2_password authentication as the default for new installations is likely to cause a lot of problems for users. No old application will be able to use MySQL 8.0, installed with default options, without moving to MySQL’s client libraries. While working on this blog I saw MySQL users complain on IRC that not even MySQL Workbench can authenticate with MySQL 8.0. This is the first time in MySQL’s history that such an incompatible change has been made!
  • Atomic DDL is a good thing (we plan to have this in MariaDB 10.4), but it should not have such a drastic impact on performance. I am also a bit skeptical of MySQL 8.0 having just one copy of the data dictionary, as if this gets corrupted you will lose all your data (a single point of failure).
  • MySQL 8.0 has several new reserved words and has removed a lot of variables, which makes upgrades hard. Before upgrading to MySQL 8.0 one has to check all one’s databases and applications to ensure that there are no conflicts.
  • As my test above shows, if you have a single deprecated variable in your configuration files, the installation of MySQL will abort and can leave the database in an inconsistent state. I did my tests, of course, by installing into an empty data directory, but one can assume that some of the problems may also happen when upgrading an old installation.

In many ways, MySQL 8.0 has caught up with some earlier versions of MariaDB. For instance, in MariaDB 10.0, we introduced roles (four years ago). In MariaDB 10.1, we introduced encrypted redo/undo logs (three years ago). In MariaDB 10.2, we introduced window functions and CTEs (a year ago). However, some catch-up of MariaDB Server 10.2 features still remains for MySQL (such as check constraints, binlog compression, and log-based rollback).

MySQL 8.0 has a few new interesting features (mostly Atomic DDL and JSON TABLE functions), but at the same time MySQL has strayed away from some of the fundamental corner stone principles of MySQL:

From the start of the first version of MySQL in 1995, all development has been focused around 3 core principles:

  • Ease of use
  • Performance
  • Stability

With MySQL 8.0, Oracle has sacrificed two of these three.

In addition (as part of ease of use), while I was working on MySQL, we did our best to ensure that the following should hold:

  • Upgrades should be trivial
  • Things should be kept compatible, if possible (don’t remove features/options/functions that are used)
  • Minimize reserved words, don’t remove server variables
  • One should be able to use normal OS commands to create and drop databases, and to copy and move tables around within the same system or between different systems. With 8.0 and its data dictionary, taking backups of specific tables will be hard, even if the server is not running.
  • mysqldump should always be usable for backups and for moving to new releases
  • Old clients and applications should be able to use ‘any’ MySQL server version unchanged. (Some Oracle client libraries, like the C++ one, by default only support the new X protocol and thus cannot be used with older MySQL or any MariaDB version.)

We plan to add a data dictionary to MariaDB 10.4 or MariaDB 10.5, but in a way to not sacrifice any of the above principles!

The competition between MySQL and MariaDB is not just about a tactical arms race on features. It’s about design philosophy, or strategic vision, if you will.

This shows in two main ways: our respective view of the Storage Engine structure, and of the top-level direction of the roadmap.

On the Storage Engine side, MySQL is converging on InnoDB, even for clustering and partitioning. In doing so, they are abandoning the advantages of multiple ways of storing data. By contrast, MariaDB sees lots of value in the Storage Engine architecture: MariaDB Server 10.3 will see the general availability of MyRocks (for write-intensive workloads) and Spider (for scalable workloads). On top of that, we have ColumnStore for analytical workloads. One can use the CONNECT engine to join with other databases. The use of different storage engines for different workloads and different hardware is a competitive differentiator, now more than ever.

On the roadmap side, MySQL is carefully steering clear of features that close the gap between MySQL and Oracle. MariaDB has no such constraints. With MariaDB 10.3, we are introducing PL/SQL compatibility (Oracle’s stored procedures) and AS OF (built-in system-versioned tables with point-in-time querying). For both of those features, MariaDB is the first open-source database to do so. I don’t expect Oracle to provide any of the above features in MySQL!

Also on the roadmap side, MySQL is not working with the ecosystem to extend its functionality. In 2017, MariaDB accepted more code contributions in one year than MySQL has accepted during its entire lifetime, and the rate is increasing!

I am sure that the experience I had testing MySQL 8.0 would have been significantly better if MySQL had an open development model where the community could easily participate in developing and testing MySQL continuously. Most of the confusing error messages and strange behavior would have been found and fixed long before the GA release.

Before upgrading to MySQL 8.0, please read https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html to see what problems you can run into! Don’t expect that old installations or applications will work out of the box without testing, as a lot of features and options have been removed (query cache, partitioning of MyISAM tables, etc.)! You probably also have to revise your backup methods, especially if you ever want to restore just a few tables. (With 8.0, I don’t know how this can easily be done.)

According to the MySQL 8.0 release notes, one can’t use mysqldump to copy a database to MySQL 8.0. One first has to move to a MySQL 5.7 GA version (with mysqldump, as recommended by Oracle) and then to MySQL 8.0 with an in-place update. I assume this means that all old mysqldump backups are useless for MySQL 8.0?

MySQL 8.0 seems to be a one-way street to an unknown future. Up to MySQL 5.7 it has been trivial to move to MariaDB, and one could always move back to MySQL with mysqldump. All MySQL client libraries have worked with MariaDB and all MariaDB client libraries have worked with MySQL. With MySQL 8.0 this has changed in the wrong direction.

As long as you are using MySQL 5.7 and below, you have choices for your future; after MySQL 8.0 you have very little choice. But don’t despair, as MariaDB will always be able to load a mysqldump file and it’s very easy to upgrade your old MySQL installation to MariaDB 🙂

I wish you good luck to try MySQL 8.0 (and also the upcoming MariaDB 10.3)!

Fox Networks Obtains Piracy Blocking Injunction Against Rojadirecta

Post Syndicated from Andy original https://torrentfreak.com/fox-networks-obtains-piracy-blocking-injunction-against-rojadirecta-180405/

Twelve years ago this October, a court in Denmark ordered a local ISP to begin blocking unlicensed Russian music site AllofMP3. It was a landmark moment that opened the floodgates.

Although most countries took a few years to follow, blocking is now commonplace across Europe and if industry lobbyists have their way, it will soon head to North America. Meanwhile, other regions are getting their efforts underway, with Uruguay the latest country to reserve a place on the list.

The news comes via Fox Sports Latin America, which expressed satisfaction this week that a court in the country had handed down an interim injunction against local ISPs which compels them to block access to streaming portal Rojadirecta.

Despite its focus on Spanish-speaking regions, Rojadirecta is one of the best-known and longest-standing unauthorized sports streaming sites in the world. Offering links to live streams of most spectator sports, Rojadirecta has gained a loyal and international following.

This has resulted in a number of lawsuits and legal challenges in multiple regions, the latest being a criminal copyright infringement complaint by Fox Sports Latin America. As usual, the company is annoyed that its content is being made available online without the proper authorization.

“This exemplary ruling marks the beginning of judicial awareness on online piracy issues,” said Daniel Steinmetz, Chief Anti-Piracy Officer of Fox Networks Group Latin America.

“FNG Latin America works constantly to combat the illegal use of content on different fronts and with great satisfaction we have found in Uruguay an important ally in the fight against this scourge. We are on our way to ending the impunity of these illegal content relay sites.”

Fox Sports says that with this pioneering action, Uruguay is now at the forefront of the campaign to tackle piracy currently running rampant across South America.

According to a NetNames report, there are 222 million Internet users in the region, of which 110 million access pirated content. This translates to 1,377 million TV hours per year but it’s hoped that additional action in other countries will help to stem the rising tide.

“We have already presented actions in other countries in the region where we will seek to replicate what we have obtained in Uruguay,” Fox said in a statement.

Local reports indicate that Internet providers have not yet taken action to block RojaDirecta but it’s expected they will do so in the near future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Engineering deep dive: Encoding of SCTs in certificates

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2018/04/04/sct-encoding.html

<p>Let&rsquo;s Encrypt recently <a href="https://community.letsencrypt.org/t/signed-certificate-timestamps-embedded-in-certificates/57187">launched SCT embedding in
certificates</a>. This feature allows browsers to check that a certificate was submitted to a
<a href="https://en.wikipedia.org/wiki/Certificate_Transparency">Certificate Transparency</a>
log. As part of the launch, we did a thorough review to verify
that the encoding of Signed Certificate Timestamps (SCTs) in our certificates
matches the relevant specifications. In this post, I&rsquo;ll dive into the details.
You&rsquo;ll learn more about X.509, ASN.1, DER, and TLS encoding, with references to
the relevant RFCs.</p>

<p>Certificate Transparency offers three ways to deliver SCTs to a browser: In a
TLS extension, in stapled OCSP, or embedded in a certificate. We chose to
implement the embedding method because it would just work for Let&rsquo;s Encrypt
subscribers without additional work. In the SCT embedding method, we submit
a &ldquo;precertificate&rdquo; with a <a href="#poison">poison extension</a> to a set of
CT logs, and get back SCTs. We then issue a real certificate based on the
precertificate, with two changes: The poison extension is removed, and the SCTs
obtained earlier are added in another extension.</p>

<p>Given a certificate, let&rsquo;s first look for the SCT list extension. According to CT (<a href="https://tools.ietf.org/html/rfc6962#section-3.3">RFC 6962
section 3.3</a>),
the extension OID for a list of SCTs is <code>1.3.6.1.4.1.11129.2.4.2</code>. An <a href="http://www.hl7.org/Oid/information.cfm">OID (object
ID)</a> is a series of integers, hierarchically
assigned and globally unique. They are used extensively in X.509, for instance
to uniquely identify extensions.</p>

<p>We can <a href="https://acme-v01.api.letsencrypt.org/acme/cert/031f2484307c9bc511b3123cb236a480d451">download an example certificate</a>,
and view it using OpenSSL (if your OpenSSL is old, it may not display the
detailed information):</p>

<pre><code>$ openssl x509 -noout -text -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451

CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1(0)
Log ID : DB:74:AF:EE:CB:29:EC:B1:FE:CA:3E:71:6D:2C:E5:B9:
AA:BB:36:F7:84:71:83:C7:5D:9D:4F:37:B6:1F:BF:64
Timestamp : Mar 29 18:45:07.993 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
Signed Certificate Timestamp:
Version : v1(0)
Log ID : 29:3C:51:96:54:C8:39:65:BA:AA:50:FC:58:07:D4:B7:
6F:BF:58:7A:29:72:DC:A4:C3:0C:F4:E5:45:47:F4:78
Timestamp : Mar 29 18:45:08.010 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256

<p>Now let&rsquo;s go a little deeper. How is that extension represented in
the certificate? Certificates are expressed in
<a href="https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One">ASN.1</a>,
which generally refers to both a language for expressing data structures
and a set of formats for encoding them. The most common format,
<a href="https://en.wikipedia.org/wiki/X.690#DER_encoding">DER</a>,
is a tag-length-value format. That is, to encode an object, first you write
down a tag representing its type (usually one byte), then you write
down a number expressing how long the object is, then you write down
the object contents. This is recursive: An object can contain multiple
objects within it, each of which has its own tag, length, and value.</p>

<p>One of the cool things about DER and other tag-length-value formats is that you
can decode them to some degree without knowing what they mean. For instance, I
can tell you that 0x30 means the data type &ldquo;SEQUENCE&rdquo; (a struct, in ASN.1
terms), and 0x02 means &ldquo;INTEGER&rdquo;, then give you this hex byte sequence to decode:</p>

<pre><code>30 06 02 01 03 02 01 0A

<p>You could tell me right away that it decodes to a SEQUENCE containing two INTEGERs, 3 and 10.</p>


<p>Try it yourself with this great <a href="https://lapo.it/asn1js/#300602010302010A">JavaScript ASN.1
decoder</a>. However, you wouldn&rsquo;t know
what those integers represent without the corresponding ASN.1 schema (or
&ldquo;module&rdquo;). For instance, if you knew that this was a piece of DogData, and the
schema was:</p>

<pre><code>DogData ::= SEQUENCE {
  legs INTEGER,
  cutenessLevel INTEGER
}

<p>You&rsquo;d know this referred to a three-legged dog with a cuteness level of 10.</p>
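
<p>To make the tag-length-value idea concrete, here&rsquo;s a tiny Python sketch (mine, not from the original post) that decodes the example above; it only handles short-form lengths, which is all this example needs:</p>

<pre><code>def der_decode(data, offset=0):
    """Decode one DER tag-length-value element (short-form lengths only)."""
    tag = data[offset]
    length = data[offset + 1]                 # values &lt; 0x80; long form is covered in footnote 2
    value = data[offset + 2:offset + 2 + length]
    return tag, value, offset + 2 + length

dog = bytes.fromhex("300602010302010A")
tag, body, _ = der_decode(dog)                # tag 0x30: SEQUENCE
pos = 0
while pos &lt; len(body):
    t, v, pos = der_decode(body, pos)
    print(hex(t), int.from_bytes(v, "big"))   # prints 0x2 3, then 0x2 10
</code></pre>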

<p>We can take some of this knowledge and apply it to our certificates. As a first
step, convert the above certificate to hex with
<code>xxd -ps &lt; Downloads/031f2484307c9bc511b3123cb236a480d451</code>. You can then copy
and paste the result into
<a href="https://lapo.it/asn1js">lapo.it/asn1js</a> (or use <a href="https://lapo.it/asn1js/#3082062F30820517A0030201020212031F2484307C9BC511B3123CB236A480D451300D06092A864886F70D01010B0500304A310B300906035504061302555331163014060355040A130D4C6574277320456E6372797074312330210603550403131A4C6574277320456E637279707420417574686F72697479205833301E170D3138303332393137343530375A170D3138303632373137343530375A302D312B3029060355040313223563396137662E6C652D746573742E686F66666D616E2D616E64726577732E636F6D30820122300D06092A864886F70D01010105000382010F003082010A0282010100BCEAE8F504D9D91FCFC69DB943254A7FED7C6A3C04E2D5C7DDD010CBBC555887274489CA4F432DCE6D7AB83D0D7BDB49C466FBCA93102DC63E0EB1FB2A0C50654FD90B81A6CB357F58E26E50F752BF7BFE9B56190126A47409814F59583BDD337DFB89283BE22E81E6DCE13B4E21FA6009FC8A7F903A17AB05C8BED85A715356837E849E571960A8999701EAE9CE0544EAAB936B790C3C35C375DB18E9AA627D5FA3579A0FB5F8079E4A5C9BE31C2B91A7F3A63AFDFEDB9BD4EA6668902417D286BE4BBE5E43CD9FE1B8954C06F21F5C5594FD3AB7D7A9CBD6ABF19774D652FD35C5718C25A3BA1967846CED70CDBA95831CF1E09FF7B8014E63030CE7A776750203010001A382032A30820326300E0603551D0F0101FF0404030205A0301D0603551D250416301406082B0601050507030106082B06010505070302300C0603551D130101FF04023000301D0603551D0E041604148B3A21ABADF50C4B30DCCD822724D2C4B9BA29E3301F0603551D23041830168014A84A6A63047DDDBAE6D139B7A64565EFF3A8ECA1306F06082B0601050507010104633061302E06082B060105050730018622687474703A2F2F6F6373702E696E742D78332E6C657473656E63727970742E6F7267302F06082B060105050730028623687474703A2F2F636572742E696E742D78332E6C657473656E63727970742E6F72672F302D0603551D110426302482223563396137662E6C652D746573742E686F66666D616E2D616E64726577732E636F6D3081FE0603551D200481F63081F33008060667810C0102013081E6060B2B0601040182DF130101013081D6302606082B06010505070201161A687474703A2F2F6370732E6C657473656E63727970742E6F72673081AB06082B0601050507020230819E0C819B54686973204365727469666963617465206D6179206F6E6C792062652072656C6965642075706F6E2062792052656C79696E67205061727469657320616E64206F6E6C7920696E206163636F7264616E636520776974682074686520436572746966696361746520506F6C69637920666F756E642061742068747470733A2F2F6C657473656E63727970742E6F72672F7265706F7369746F72792F30820104060A2B06010401D6790204020481F50481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2300D06092A864886F70D01010B0500038201010095F87B663176776502F792DDD232C216943C7803876FCBEB46393A36354958134482E0AFEED39011618327C2F0203351758FEB420B73CE6C797B98F88076F409F3903F343D1F5D9540F41EF47EB39BD61B62873A44F00B7C8B593C6A416458CF4B5318F35235BC88EABBAA34F3E3F81BD3B047E982EE1363885E84F76F2F079F2B6EEB4ECB58EFE74C8DE7D54DE5C89C4FB5BB0694B837BD6F02BAFD5A6C007D1B93D25007BDA9B2BDBF82201FE1B76B628CE34E2D974E8E623EC57A5CB53B435DD4B9993ADF6BA3972F2B29D259594A94E17BBE06F34AAE5CF0F50297548C4DFFC5566136F78A3D3B324EAE931A14EB6BE6DA1D538E48CF077583C67B52E7E8">this handy link</a>). You can also run <code>openssl asn1parse -i -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451</code> to use OpenSSL&rsquo;s parser, which is less easy to use in some ways, but easier to copy and paste.</p>

<p>In the decoded data, we can find the OID <code>1.3.6.1.4.1.11129.2.4.2</code>, indicating
the SCT list extension. Per <a href="https://tools.ietf.org/html/rfc5280#page-17">RFC 5280, section
4.1</a>, an extension is defined:</p>

<pre><code>Extension ::= SEQUENCE {
  extnID      OBJECT IDENTIFIER,
  critical    BOOLEAN DEFAULT FALSE,
  extnValue   OCTET STRING
              -- contains the DER encoding of an ASN.1 value
              -- corresponding to the extension type identified
              -- by extnID
  }

<p>We&rsquo;ve found the <code>extnID</code>. The &ldquo;critical&rdquo; field is omitted because it has the
default value (false). Next up is the <code>extnValue</code>. This has the type
<code>OCTET STRING</code>, which has the tag &ldquo;0x04&rdquo;. <code>OCTET STRING</code> means &ldquo;here&rsquo;s
a bunch of bytes!&rdquo; In this case, as described by the spec, those bytes
happen to contain more DER. This is a fairly common pattern in X.509
to deal with parameterized data. For instance, this allows defining a
structure for extensions without knowing ahead of time all the structures
that a future extension might want to carry in its value. If you&rsquo;re a C
programmer, think of it as a <code>void*</code> for data structures. If you prefer Go,
think of it as an <code>interface{}</code>.</p>

<p>Here&rsquo;s that <code>extnValue</code>:</p>

<pre><code>04 81 F5 0481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2

<p>That&rsquo;s tag &ldquo;0x04&rdquo;, meaning <code>OCTET STRING</code>, followed by &ldquo;0x81 0xF5&rdquo;, meaning
&ldquo;this string is 245 bytes long&rdquo; (the 0x81 prefix is part of <a href="#variable-length">variable length
number encoding</a>).</p>

<p>According to <a href="https://tools.ietf.org/html/rfc6962#section-3.3">RFC 6962, section
3.3</a>, &ldquo;obtained SCTs
can be directly embedded in the final certificate, by encoding the
SignedCertificateTimestampList structure as an ASN.1 <code>OCTET STRING</code>
and inserting the resulting data in the TBSCertificate as an X.509v3
certificate extension&rdquo;</p>

<p>So, we have an <code>OCTET STRING</code>, all&rsquo;s good, right? Except if you remove the
tag and length from extnValue to get its value, you&rsquo;re left with:</p>

<pre><code>04 81 F2 00F0007500DB74AFEEC…

<p>There&rsquo;s that &ldquo;0x04&rdquo; tag again, but with a shorter length. Why
do we nest one <code>OCTET STRING</code> inside another? It&rsquo;s because the
contents of extnValue are required by RFC 5280 to be valid DER, but a
SignedCertificateTimestampList is not encoded using DER (more on that
in a minute). So, by RFC 6962, a SignedCertificateTimestampList is wrapped in an
<code>OCTET STRING</code>, which is wrapped in another <code>OCTET STRING</code> (the extnValue).</p>

<p>Once we decode that second <code>OCTET STRING</code>, we&rsquo;re left with the contents:</p>


<p>&ldquo;0x00&rdquo; isn&rsquo;t a valid tag in DER. What is this? It&rsquo;s TLS encoding. This is
defined in <a href="https://tools.ietf.org/html/rfc5246#section-4">RFC 5246, section 4</a>
(the TLS 1.2 RFC). TLS encoding, like ASN.1, has both a way to define data
structures and a way to encode those structures. TLS encoding differs
from DER in that there are no tags, and lengths are only encoded when necessary for
variable-length arrays. Within an encoded structure, the type of a field is determined by
its position, rather than by a tag. This means that TLS-encoded structures are
more compact than DER structures, but also that they can&rsquo;t be processed without
knowing the corresponding schema. For instance, here&rsquo;s the top-level schema from
<a href="https://tools.ietf.org/html/rfc6962#section-3.3">RFC 6962, section 3.3</a>:</p>

<pre><code> The contents of the ASN.1 OCTET STRING embedded in an OCSP extension
or X509v3 certificate extension are as follows:

opaque SerializedSCT&lt;1..2^16-1&gt;;

struct {
SerializedSCT sct_list &lt;1..2^16-1&gt;;
} SignedCertificateTimestampList;

Here, &quot;SerializedSCT&quot; is an opaque byte string that contains the
serialized TLS structure.

<p>Right away, we&rsquo;ve found one of those variable-length arrays. The length of such
an array (in bytes) is always represented by a length field just big enough to
hold the max array size. The max size of an <code>sct_list</code> is 65535 bytes, so the
length field is two bytes wide. Sure enough, those first two bytes are &ldquo;0x00
0xF0&rdquo;, or 240 in decimal. In other words, this <code>sct_list</code> will have 240 bytes. We
don&rsquo;t yet know how many SCTs will be in it. That will become clear only by
continuing to parse the encoded data and seeing where each struct ends (spoiler
alert: there are two SCTs!).</p>
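
<p>A tiny Python sketch (my own illustration, not part of the original post) of exactly this outer parsing step: it reads the two-byte list length and then splits out each length-prefixed SerializedSCT. For the example certificate it should return two byte strings, 117 and 119 bytes long:</p>

<pre><code>import struct

def parse_sct_list(data):
    """Split a TLS-encoded SignedCertificateTimestampList into SerializedSCTs.

    `data` is what remains after stripping both ASN.1 OCTET STRING wrappers.
    """
    (total_len,) = struct.unpack("&gt;H", data[:2])       # e.g. 0x00F0 = 240 bytes
    body, scts, pos = data[2:2 + total_len], [], 0
    while pos &lt; len(body):
        (sct_len,) = struct.unpack("&gt;H", body[pos:pos + 2])
        scts.append(body[pos + 2:pos + 2 + sct_len])
        pos += 2 + sct_len
    return scts
</code></pre>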

<p>Now we know the first SerializedSCT starts with <code>0075…</code>. SerializedSCT
is itself a variable-length field, this time containing <code>opaque</code> bytes (much like <code>OCTET STRING</code>
back in the ASN.1 world). Like SignedCertificateTimestampList, it has a max size
of 65535 bytes, so we pull off the first two bytes and discover that the first
SerializedSCT is 0x0075 (117 decimal) bytes long. Here&rsquo;s the whole thing, in hex:</p>


<p>This can be decoded using the TLS encoding struct defined in <a href="https://tools.ietf.org/html/rfc6962#page-13">RFC 6962, section 3.2</a>:</p>

<pre><code>enum { v1(0), (255) } Version;

struct {
opaque key_id[32];
} LogID;

opaque CtExtensions&lt;0..2^16-1&gt;;

struct {
Version sct_version;
LogID id;
uint64 timestamp;
CtExtensions extensions;
digitally-signed struct {
Version sct_version;
SignatureType signature_type = certificate_timestamp;
uint64 timestamp;
LogEntryType entry_type;
select(entry_type) {
case x509_entry: ASN.1Cert;
case precert_entry: PreCert;
} signed_entry;
CtExtensions extensions;
} SignedCertificateTimestamp;

<p>Breaking that down:</p>

<pre><code># Version sct_version v1(0)
# LogID id (aka opaque key_id[32])
# uint64 timestamp (milliseconds since the epoch)
# CtExtensions extensions (zero-length array)
# digitally-signed struct
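
<p>Continuing the sketch from above (again my own illustration, not from the spec), the fixed-position fields of one SerializedSCT can be pulled out like this; everything after the extensions is the digitally-signed struct discussed next:</p>

<pre><code>import struct

def parse_sct(sct):
    """Decode the fixed-position fields of one SerializedSCT (RFC 6962, section 3.2)."""
    version = sct[0]                                   # v1(0)
    log_id = sct[1:33]                                 # opaque key_id[32]
    (timestamp,) = struct.unpack("&gt;Q", sct[33:41])     # uint64, milliseconds since the epoch
    (ext_len,) = struct.unpack("&gt;H", sct[41:43])       # CtExtensions length (zero here)
    digitally_signed = sct[43 + ext_len:]              # SignatureAndHashAlgorithm + signature
    return version, log_id, timestamp, digitally_signed
</code></pre>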

<p>To understand the &ldquo;digitally-signed struct,&rdquo; we need to turn back to <a href="https://tools.ietf.org/html/rfc5246#section-4.7">RFC 5246,
section 4.7</a>. It says:</p>

<pre><code>A digitally-signed element is encoded as a struct DigitallySigned:

struct {
SignatureAndHashAlgorithm algorithm;
opaque signature&lt;0..2^16-1&gt;;
} DigitallySigned;

<p>And in <a href="https://tools.ietf.org/html/rfc5246#section-7.4.1.4.1">section 7.4.1.4.1</a>:</p>

<pre><code>enum {
none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5),
sha512(6), (255)
} HashAlgorithm;

enum { anonymous(0), rsa(1), dsa(2), ecdsa(3), (255) } SignatureAlgorithm;

struct {
HashAlgorithm hash;
SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;

<p>We have &ldquo;0x0403&rdquo;, which corresponds to sha256(4) and ecdsa(3). The next two
bytes, &ldquo;0x0046&rdquo;, tell us the length of the &ldquo;opaque signature&rdquo; field, 70 bytes in
decimal. To decode the signature, we reference <a href="https://tools.ietf.org/html/rfc4492#page-20">RFC 4492 section
5.4</a>, which says:</p>

<pre><code>The digitally-signed element is encoded as an opaque vector &lt;0..2^16-1&gt;, the
contents of which are the DER encoding corresponding to the
following ASN.1 notation.

Ecdsa-Sig-Value ::= SEQUENCE {
    r INTEGER,
    s INTEGER
}

<p>Having dived through two layers of TLS encoding, we are now back in ASN.1 land! We can
<a href="https://lapo.it/asn1js/#304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419">decode</a>
the remaining bytes into a SEQUENCE containing two INTEGERs. And we&rsquo;re done! Here&rsquo;s the whole
extension decoded:</p>

<pre><code># Extension SEQUENCE – RFC 5280
# length 0x0104 bytes (260 decimal)
# length 0x0A bytes (10 decimal)
# value (
# length 0xF5 bytes (245 decimal)
# OCTET STRING (embedded) – RFC 6962
# length 0xF2 bytes (242 decimal)
# Beginning of TLS encoded SignedCertificateTimestampList – RFC 5246 / 6962
# length 0xF0 bytes
# opaque SerializedSCT&lt;1..2^16-1&gt;
# length 0x75 bytes
# Version sct_version v1(0)
# LogID id (aka opaque key_id[32])
# uint64 timestamp (milliseconds since the epoch)
# CtExtensions extensions (zero-length array)
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
# opaque signature&lt;0..2^16-1&gt;;
# length 0x0046
# DER-encoded Ecdsa-Sig-Value – RFC 4492
44 # length 0x44 bytes
02 # r INTEGER
20 # length 0x20 bytes
# value
02 # s INTEGER
20 # length 0x20 bytes
# value
# opaque SerializedSCT&lt;1..2^16-1&gt;
# length 0x77 bytes
# Version sct_version v1(0)
# LogID id (aka opaque key_id[32])
# uint64 timestamp (milliseconds since the epoch)
# CtExtensions extensions (zero-length array)
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
# opaque signature&lt;0..2^16-1&gt;;
# length 0x0048
# DER-encoded Ecdsa-Sig-Value – RFC 4492
46 # length 0x46 bytes
02 # r INTEGER
21 # length 0x21 bytes
# value
02 # s INTEGER
21 # length 0x21 bytes
# value

<p>One surprising thing you might notice: In the first SCT, <code>r</code> and <code>s</code> are twenty
bytes long. In the second SCT, they are both twenty-one bytes long, and have a
leading zero. Integers in DER are two&rsquo;s complement, so if the leftmost bit is
set, they are interpreted as negative. Since <code>r</code> and <code>s</code> are positive, if the
leftmost bit would be a 1, an extra byte has to be added so that the leftmost
bit can be 0.</p>

<p>This is a little taste of what goes into encoding a certificate. I hope it was
informative! If you&rsquo;d like to learn more, I recommend &ldquo;<a href="http://luca.ntop.org/Teaching/Appunti/asn1.html">A Layman&rsquo;s Guide to a
Subset of ASN.1, BER, and DER</a>.&rdquo;</p>

<p><a name="poison"></a>Footnote 1: A &ldquo;poison extension&rdquo; is defined by <a href="https://tools.ietf.org/html/rfc6962#section-3.1">RFC 6962
section 3.1</a>:</p>

<pre><code>The Precertificate is constructed from the certificate to be issued by adding a special
critical poison extension (OID `1.3.6.1.4.1.11129.2.4.3`, whose
extnValue OCTET STRING contains ASN.1 NULL data (0x05 0x00))

<p>In other words, it&rsquo;s an empty extension whose only purpose is to ensure that
certificate processors will not accept precertificates as valid certificates. The
specification ensures this by setting the &ldquo;critical&rdquo; bit on the extension, which
ensures that code that doesn&rsquo;t recognize the extension will reject the whole
certificate. Code that does recognize the extension specifically as poison
will also reject the certificate.</p>

<p><a name="variable-length"></a>Footnote 2: Lengths from 0-127 are represented by
a single byte (short form). To express longer lengths, more bytes are used (long form).
The high bit (0x80) on the first byte is set to distinguish long form from short
form. The remaining bits are used to express how many more bytes to read for the
length. For instance, 0x81F5 means &ldquo;this is long form because the length is
greater than 127, but there&rsquo;s still only one byte of length (0xF5) to decode.&rdquo;</p>
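
<p>If you want to check the length math from this footnote yourself, here&rsquo;s a few lines of Python (again, just a sketch) that decode both the short and the long form:</p>

<pre><code>def der_length(data, offset=0):
    """Decode a DER length at `offset`; return (length, offset just past the length)."""
    first = data[offset]
    if first &lt; 0x80:                          # short form: the byte is the length itself
        return first, offset + 1
    num_bytes = first &amp; 0x7F                  # long form: how many length bytes follow
    value = int.from_bytes(data[offset + 1:offset + 1 + num_bytes], "big")
    return value, offset + 1 + num_bytes

print(der_length(bytes.fromhex("81F5")))      # (245, 2) -- the 0x81 0xF5 example above
print(der_length(bytes.fromhex("2A")))        # (42, 1)  -- short form
</code></pre>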