Germany, Merkel, refugees… and us

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=1966

Lately I keep seeing articles about how, under the pressure of the refugees, Germans are massively losing faith in Angela Merkel. How they hate her and want her gone, how they want all the refugees out, and so on… Except that in Germany I notice nothing of the kind.

Merkel's rating has slipped a bit, but about half of Germans still want her, and no one else, as chancellor. The next most trusted politicians have under 20% support. If the chancellor were elected directly, she would win the election in the first round. And if there were a second round, she would take it with over 60% of the vote even if her opponents united against her.

The same holds, with even greater force, for Germans' attitude toward the refugees. Most Germans missed the news that the Munich attacker was a far-right extremist: yes, of Iranian descent, but born in Germany and definitely not a Muslim. Yet even so, and even after taking in a million immigrants, 70% of Germans still support accepting them. No doubt the good work of the German police plays a part here. Probably an even bigger help is the fact that in Germany immigrants actually get integrated, unlike almost everywhere else in Europe. But the percentage of supporters comes from a real public opinion survey. Only madmen argue with the facts.

So, at the end of the day, the articles spitting at Merkel and the refugees are a product of wishful thinking. Thinking that wishes Germany no good. Whose thinking? Well, some of the articles are by Russian journalists, others by English ones; I have also read at least one American and at least one French piece (I haven't found a German one yet). But that is not the real question. What the Russians or the French wish for Germany interests me very little. What interests me is what we in Bulgaria wish for it, and how much in touch we are with the reality there.

I shared this today with a few acquaintances. One of them declared that it simply couldn't be true. I lost half an hour finding the surveys again, came within a hair of failing, but got lucky and showed them to him. He grumbled for a few minutes and then declared that the Germans are idiots and that the evil Muslims will soon slaughter them to the last man. I replied that the same opinion was already being voiced in the sixties, when Germany took in around 10 million guest workers from Turkey. And that practically none of them ever went back to Turkey. And that their birth rate really is much higher than the Germans', so Germany should by now hold at least 25 million Turks, yet only about three and a half million currently identify as Turks; the rest consider themselves Germans… I don't think he believed me. And I don't care. Curing delusions is the deluded person's problem, not mine.

Another acquaintance declared that we cannot believe it because we judge the Germans by ourselves. That it is actually they who are hard-working, hospitable, noble and decent, while we are lazy, stingy losers and trash. With horror I wonder whether there isn't some truth in that: I cannot accept it, yet denying it would mean denying some obvious facts… But even if it is so, it does not answer the question: where do the Germans find the strength to be so noble and sensible? If we figure that out, perhaps we can learn to be like that too.

And I think it is not that complicated.

On the one hand, immigrants are the finest gift one country can give another. Provided the receiving country has an ounce of brains, sends away the criminals among them and integrates the rest; but that depends entirely on the country itself. The inability to do so is its problem from start to finish and speaks of its own incompetence, not of the immigrants… That is why accepting and integrating them is enormously beneficial. Provided, of course, that the country is a country, and not…

On the other hand, today is tomorrow's past. Right now we are building what we, our children and our grandchildren will be proud of tomorrow, or ashamed of. What we do at this moment is what will make the word "Bulgarian" sound proud or shameful from here on… Which choice will make it sound which way?

Are we proud that the Tarnovo Constitution says "Every slave, of whatever sex, faith or nationality, becomes free the moment he steps onto Bulgarian territory"? Or are we ashamed of it?… And in fifty or a hundred years, will we be proud that we caught and tied up fugitives who had stepped onto Bulgarian territory? That we emptied their pockets and told them "Go back! Immediately!"?… Will our children and grandchildren be proud of that, or ashamed? Will it bring the name "Bulgarian" dignity around the world, or disgrace?

Are we proud that during the Second World War we saved the lives of the Bulgarian Jews? Among whom there surely were unscrupulous rich men, and crooks, and who knows what else? Or are we ashamed of it?… Back then quite a few "heroic Bulgarians" and "great patriots" were ashamed. They explained how the Jews were ruining our nation and had to be exterminated to the last man. How those who thought otherwise were traitors to Bulgaria and to Bulgarianness, and agents of Zionism… And how did it turn out? What is the thing we mention in faraway countries when we want to portray our country and our people as brave and worthy?

… I turned 50 this year. I may live to see the moment when the acceptance of the Syrian refugees is officially placed by History on the same shelf as the rescue of the Jews from Nazism. I may not live to see it. But my children probably will. So will the younger among you, dear readers, and your children.

And whether by then foreigners too will be looking for ways to proudly call themselves Bulgarians, or whether even we will be wondering what else to pass ourselves off as so as not to bear the shame, depends on us. Here and now… The Germans have clearly understood this. We simply have to understand it too.

Kernel prepatch 4.8-rc1

Post Syndicated from corbet original http://lwn.net/Articles/696634/rss

Linus has released the 4.8-rc1 prepatch and closed the merge window for this development cycle — sort of. “I actually still have a few pull requests pending in my inbox that I just wanted to take another look at before merging, but the large bulk of the merge window material has been merged, and I wanted to make sure there aren’t any new ones coming in.” A total of 11,618 non-merge changesets were pulled during the merge window.

Weekly roundup: three big things

Post Syndicated from Eevee original https://eev.ee/dev/2016/08/07/weekly-roundup-three-big-things/

August is about video games. Actually, the next three months are about video games. Primary goals and their rough stages:

  1. Draft three chapters of this book
    • August: one chapter (at which point I might start talking about what the book is)
    • September: another chapter
    • October: yet another chapter
  2. Get veekun beta-worthy
    • August: basics of the new schema committed; basics of gen 1 and gen 6 games dumped; skeleton cli and site
    • September: most games dumped; lookup; core pages working; new site in publicly-available beta
    • October: all games dumped; new site design work; extras like search and calculators
  3. Finish Runed Awakening
    • August: working ending; at least one solution to each puzzle; private beta
    • September: alternate solutions; huge wave of prose editing; patreon beta
    • October: fix the mountains of issues people find; finish any remaining illustrations

Yeah, we’ll see how all that goes. I also have some vague secondary goals like “do art” and “release tiny games” and “do Doom stuff” but those are extremely subject to change. Hopefully I can stick to the above three big things for three months.

Anyway, this week:

  • blog: Finished and published posts on why to use Python 3 and how to port to it, plus made numerous suggested edits. Wrote a brief thing about my frustrations with Pokémon Go. And wrote about veekun’s schema woes, which helped me reason through a few lingering thorny problems.

    That might be a record for most things I’ve published within a calendar week.

  • art: I tried an hour of timed (real-life) figure drawings, which was kinda weird. I’ve really lapsed on the daily Pokémon, possibly because I changed up the rules to be an hour for a single painting, and that feels like a huge amount of time (…for something I don’t think will come out very well). I’ll either make a better effort to do them every day, or change the rules again so I stop putting them off.

    I drew Griffin’s Nuzlocke team kind of on a whim? A day-long whim?

  • book: I wrote some preface, which you’re probably supposed to do last, but it helped me figure out the tone of the writing. I’ve mentioned this before regarding previous failed attempts, but writing a book is surprisingly harder than writing a blog post — I can’t quite put my finger on why, but the medium feels completely different and alien, and I’m much more self-conscious about how I write.

    I did get a bit of a chapter written, though. I probably spent much more time wrangling authoring tools into producing something acceptable.

  • doom: I somehow drifted into doing stuff to anachrony again. Apparently I left it in near-shambles, with at least a dozen half-finished things all over the place and few comments about what on Earth I was thinking. I’ve cleaned a lot of them up, figured out how to fix some long-standing irritations, and excised some bad ideas. It’s almost presentable now, and I started building a little contrived demo map that tries to show how some of the things work. Someday I might even use all this for a real map, wow.

  • zdoom: Oops, I also picked up my Lua-in-ZDoom experiment again. After doing some things to C++ that made me feel like a witch, someone recommended Sol, a single-file (10k line…) C++ library for interacting with Lua. It is fucking incredible and makes everything so much easier and the author is on Twitter and fixes things faster than I can bring them up.

    I don’t know how much time I want to devote to this — it is just an experiment — but Sol will make it go preposterously faster. It’s single-handedly made a proof of concept look feasible.

  • ops: I spent half a day fixing microscopic IPv6 problems that have been getting on my nerves for ages.

  • veekun: After publishing the schema post, I went to have a look at where I’d left the new dumper code. I spent a few hours writing rock-solid(-ish) version and language detection, which doesn’t have much to do with the schema but is really important to have.

I just about filled a page in my notebook with all this, which I haven’t done in a while. Feels pretty good! I’m also a quarter through the month already, so I’d better get moving on those three big things.

Postponement because of the rules, or rules because of the postponement

Post Syndicated from nellyo original https://nellyo.wordpress.com/2016/08/07/psm_bnt_cem/

The governance of the public service media matters to those in power. Less skilful politicians demand outright political control, but with a media regulator in place that is no longer practised as often. Under the legislation in force, the governing majority can get its preferred director general only if it secures the regulator's cooperation: by law, such appointments fall within the regulator's competence. But by definition the media regulator is independent.

Since the term of the director general of BNT is expiring, everyone was waiting for the regulator's decision: would it schedule an election, or would it postpone, as hinted by unsigned publications in media close to the governing majority? The latter.

The officially announced reason: drafting, discussing and adopting new rules for the election of a director general of BNT. A reason or a pretext? The latter.

Is the postponement because of the rules, or are new rules being created because of the postponement? The latter.

To analyze the proposed draft, it is important to know the deficiencies that have to be overcome and the goals that have to be achieved. Unfortunately, the published draft comes with no explanatory notes. If we look for reasoning elsewhere, for example in CEM minutes No. 27 of 31 May 2016 and No. 28 of 7 June 2016: according to the majority of members, the current rules attract "quite a lot of criticism from the public as non-transparent", "the problem areas come down to the requirements for selecting candidates and the voting procedure", and "the voting procedure is non-transparent and leaves the votes open to interpretation".

Explanations of the regulator's intentions have also been given to the media:

"a working group will provide for every possible voting scenario, for example what happens if two members of the five-member CEM vote for one candidate, the other two for another, and the fifth member stands aside".

and further explanations:

"The procedure must be designed so as to anticipate every possible danger and obstacle to the election. I can give you the example of a distribution of votes under which no choice can be made. Or of gathering a quorum when someone is absent. None of these problems are spelled out in the procedure, and we want to anticipate everything, so that we get the calculation right for an easy and transparent election procedure."

Now to the draft itself. Visibly, the intentions have not been carried out; no such provisions are in it. A detailed commentary could also be made, including, for example, on why a requirement stricter than the one written in the Radio and Television Act cannot be put up for public consultation when no legislative delegation provides for it. But here I will confine myself to the conclusion: the months-long procedure for adopting new rules has no justification, and this can be verified against the existing rules, the stated intentions and the published draft.

So much for the topic. As becomes clear, the rules are a symptom; the real topic is the parallel reality.

Filed under: BG Law Making, BG Media, BG Regulator, Media Law

Let’s Encrypt will be trusted by Firefox 50

Post Syndicated from n8willis original http://lwn.net/Articles/696587/rss

The Let’s Encrypt project, which provides a free SSL/TLS certificate authority (CA), has announced that Mozilla has accepted the project’s root key into the Mozilla root program and will be trusted by default as of Firefox 50. This is a step forward from Let’s Encrypt’s earlier status. “In order to start issuing widely trusted certificates as soon as possible, we partnered with another CA, IdenTrust, which has a number of existing trusted roots. As part of that partnership, an IdenTrust root ‘vouches for’ the certificates that we issue, thus making our certificates trusted. We’re incredibly grateful to IdenTrust for helping us to start carrying out our mission as soon as possible. However, our plan has always been to operate as an independently trusted CA. Having our root trusted directly by the Mozilla root program represents significant progress towards that independence.” The project has also applied for inclusion in the CA trust roots maintained by Apple, Microsoft, Google, Oracle, and Blackberry. News on those programs is still pending.

Storing Pokémon without SQL

Post Syndicated from Eevee original https://eev.ee/blog/2016/08/05/storing-pok%C3%A9mon-without-sql/

I run veekun, a little niche Pokédex website that mostly focuses on (a) very accurate data for every version, derived directly from the games and (b) a bunch of nerdy nerd tools.

It’s been languishing for a few years. (Sorry.) Part of it is that the team has never been very big, and all of us have either drifted away or gotten tied up in other things.

And part of it is that the schema absolutely sucks to work with. I’ve been planning to fix it for a year or two now, and with Sun/Moon on the horizon, it’s time I actually got around to doing that.

Alas! I’m still unsure on some of the details. I’m hoping if I talk them out, a clear best answer will present itself. It’s like advanced rubber duck debugging, with the added bonus that maybe a bunch of strangers will validate my thinking.

(Spoilers: I think I figured some stuff out by the end, so you don’t actually need to read any of this.)

The data

Pokémon has a lot of stuff going on under the hood.

  • The Pokémon themselves have one or two types; a set of abilities; moves they might learn at a given level or from a certain “tutor” NPC or via a specific item; evolution via one of at least twelve different mechanisms and which may branch; items they may be holding in the wild; six stats, plus effort for those six stats; flavor text; and a variety of other little data.

  • A number of Pokémon also have multiple forms, which can mean any number of differences that still “count” as the same Pokémon. Some forms are purely cosmetic (Unown); some affect the Pokémon’s type (Arceus); some affect stats (Pumpkaboo); some affect learned moves (Meowstic); some swap out a signature move (Rotom); some disable evolution (Pichu). Some forms can be switched at will; some switch automatically; some cannot be switched between at all. There aren’t really any hard and fast rules here. They’re effectively different Pokémon with the same name, except most of the properties are the same.

  • Moves are fairly straightforward, except that their effects vary wildly and it would be mighty convenient to be able to categorize them in a way that’s useful to a computer. After 17 years of trying, I’ve still not managed this.

  • Places connect to each other in various directions. They also may have some number of wild Pokémon, which appear at given levels with given probability. Oh, but certain conditions can change some — but not all! — of the possible encounters in an area, making for a UI nightmare. It gets particularly bad in Heart Gold and Soul Silver, where encounters (and their rates) are affected by time of day (morning, midday, night) and the music you’re playing (Sinnoh, Hoenn, none) and whether there’s an active swarm. Try to make sense of Rattata on Route 3.

  • Event Pokémon — those received from giveaways — may be given in several different ways, to several different regions, and may “lock” any of the Pokémon’s attributes either to a specific value or a choice of values.

  • And of course, all of this exists in at least eight different languages, plus a few languages with their own fanon vernacular, plus romanization for katakana and Hangul.

Even that would be all well and good, but the biggest problem of all is that any and all of this can change between games. Pairs of games — say, Red and Blue — tend to be mostly identical except for the encounters, since they come out at the same time. Spiky-Eared Pichu exists only in HGSS, and never appears again. The move Hypnosis has 60% accuracy in every game, except in Diamond and Pearl, where it has 70% accuracy. Sand Attack is ground-type, except in the first generation of games, where it was normal. Several Pokémon change how they evolve in later games, because they relied on a mechanic that was dropped. The type strength/weakness chart has been updated a couple times. And so on.

Oh, and there are several spin-off series, which often reuse the names of moves but completely change how they work. The entire Mystery Dungeon series, for example. Or even Pokémon Go.

This is awful.

The current approach

Since time immemorial, veekun has used a relational database. (Except for that one time I tried a single massive XML file, but let’s not talk about that.) It’s already straining the limits of this format, and it doesn’t even include half the stuff I just mentioned, like event Pokémon or where the move tutors are or Spiky-Eared Pichu’s disabled evolution.

Just the basic information about the Pokémon themselves is spread across three tables: pokemon_species, pokemon, and pokemon_forms. “Species” is supposed to be the pure essence of the name, so it contains stuff like “is this a baby” or “what does this evolve from/into” (which, in the case of Pichu, is already wrong!). pokemon_forms contains every form imaginable, including all 28 Unown, and tries to loosely categorize them — but it also treats Pokémon without forms as having a single “default” form. And then pokemon contains a somewhat arbitrary subset of forms and tacks other data onto them. Other tables arbitrarily join to whichever of these is most appropriate.

Tables may also be segmented by “version” (Red), “version group” (Red and Blue), or “generation” (Red, Blue, and Yellow), depending on when the data tends to vary. Oh, but there are also a number of conquest_* tables for Pokémon Conquest, which doesn’t have a row in versions since it’s not a mainline version. And I think there’s a goofy hack for Stadium in there somewhere.

For data that virtually never varies, except that one time it did, we… don’t really do anything. Base EXP was completely overhauled in X and Y, for example, and we only have a single base_experience column in the pokemon table, so it just contains the new X and Y values. What if you want to know about experience for an older game? Well, oops. Similarly, the type chart is the one from X and Y, which is no longer correct for previous games.

Aligning entities across games can be a little tricky, too. Earlier games had the Itemfinder, gen 5 had the Dowsing MCHN, and now we have the Dowsing Machine. These are all clearly the same item, but only the name Dowsing Machine appears anywhere in veekun, because there’s no support for changing names across games. The last few games also technically “renamed” every move and Pokémon from all-caps to title case, but this isn’t reflected anywhere. In fact, the all-caps names have never appeared on veekun.

All canonical textual data, including the names of fundamental entities like Pokémon and moves, are in separate tables so they can be separated by language as well. Numerous combinations of languages/games are missing, and I don’t think we actually have a list of which games were even released in which languages.

The result is a massive spread of tables, many of them very narrow but very tall, with joins that are not obvious if you’re not a DBA. I forget how half of it works if I haven’t looked at it in at least a month. I make this stuff available for anyone to use, too, so I would greatly prefer if it were (a) understandable by mortals and (b) not comically incomplete in poorly-documented ways.

I think a lot of this is a fear of massively duplicating the pile of data we’ve already got. Fixing the Dowsing Machine thing, for example, would require duplicating the name of every single item for every single game, just to fix this one item that was renamed twice. Fixing the base EXP problem would require yet another new table just for base experience, solely because it changed once.

It’s long past time to fix this.

SQL is bad, actually

(Let me cut you off right now: NoSQL is worse.)

I like the idea of a relational database. You have a schema describing your data, and you can link it together in myriad different ways, and it’s all built around set operations, and wow that’s pretty cool.

The actual implementation leaves a little to be desired. You can really only describe anything as flat tuples. You want to have things that can contain several other things, perhaps in order? Great! Make another flat tuple describing that, and make sure you remember to ask for the order explicitly, every single time you query.

Oh boy, querying. Querying is so, so tedious. You can’t even use all your carefully-constructed foreign key constraints as a shortcut; you have to write out foo.bar_id = bar.id in full every single time.

There are GUIs and whatnot, but the focus is all wrong. It’s on tables. Of course it’s on tables, but a single table is frequently not a useful thing to see on its own. For any given kind of entity (as defined however you think about your application), a table probably only contains a slice of what the entity is about, but it contains that slice for every single instance. Meanwhile, you can’t actually see a single entity on its own.

I’ll repeat that: you cannot.

Consider, for example, a Pokémon. A Pokémon has up to two types, which are rather fundamental properties. How do you view or fetch the Pokémon and its types?

Fuck you, that’s how. If you join pokemon to pokemon_types, you get this goofy result where everything about the Pokémon is potentially duplicated, but each row contains a distinct type.

Want to see abilities as well? There can be up to three of those! Join to both pokemon_abilities and pokemon_types, and now you get up to six rows, which looks increasingly not at all like what you actually wanted. Want moves as well? Good luck.

I don’t understand how this is still the case. SQL is 42 years old! How has it not evolved to have even the slightest nod towards the existence of nested data? This isn’t some niche use case; it’s responsible for at least a third of veekun’s tables!

This die-hard focus on data-as-spreadsheets is probably why we’ve tried so hard to avoid “duplication”, even when it’s the correct thing to do. The fundamental unit of a relational database is the table, and seeing a table full of the same information copied over and over just feels wrong.

But it’s really the focus on tables that’s wrong. The important point isn’t that Bulbasaur is named “BULBASAUR” in ten different games; it’s that each of those games has a name for Bulbasaur, and it happens to be the same much of the time.

NoSQL exists, yes, but I don’t trust anyone who looked at SQL and decided that the real problem was that it has too much schema.

I know the structure of my data, and I’m happy to have it be enforced. The problem isn’t that writing a schema is hard. The problem is that any schema that doesn’t look like a bank ledger maps fairly poorly to SQL primitives. It works, and it’s correct (if you can figure out how to express what you want), but the ergonomics are atrocious.

We’ve papered over some of this with SQLAlchemy’s excellent ORM, but you have to be very good at SQLAlchemy to make the mapping natural, which is the whole goal of using an ORM. I’m pretty good, and it’s still fairly clumsy.

A new idea

So. How about YAML?

See, despite our hesitation to duplicate everything, the dataset really isn’t that big. All of the data combined are a paltry 17MB, which could fit in RAM without much trouble; then we could search and wrangle it with regular Python operations. I could still have a schema, remember, because I wrote a thing for that. And other people could probably make more sense of some YAML files than CSV dumps (!) of a tangled relational database.
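
To make “wrangle it with regular Python operations” concrete: once everything is in memory, a lookup is just a comprehension. (This is only a toy; pokedex and its contents are made up to stand in for the loaded YAML.)

# Hypothetical in-memory data, roughly what the loaded YAML might become.
pokedex = {
    'bulbasaur': {'types': ['grass', 'poison'], 'base-experience': 64},
    'charmander': {'types': ['fire'], 'base-experience': 62},
}

# "Queries" are then just ordinary Python.
grass_pokemon = [name for name, poke in pokedex.items()
                 if 'grass' in poke['types']]
print(grass_pokemon)   # ['bulbasaur']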

The idea is to re-dump every game into its own set of YAML files, describing just the raw data in a form generic enough that it can handle every (main series) game. I did a proof of concept of this for Pokémon earlier this year, and it looks like:

%TAG !dex! tag:veekun.com,2005:pokedex/
--- !!omap
- bulbasaur: !dex!pokemon
    name: BULBASAUR
    types:
    - grass
    - poison
    base-stats:
      attack: 49
      defense: 49
      hp: 45
      special: 65
      speed: 45
    growth-rate: medium-slow
    base-experience: 64
    pokedex-numbers:
      kanto: 1
    evolutions:
    - into: ivysaur
      minimum-level: 16
      trigger: level-up
    species: SEED
    flavor-text: "A strange seed was\nplanted on its\nback at birth.\fThe plant sprouts\nand
      grows with\nthis POKéMON."
    height: 28
    weight: 150
    moves:
      level-up:
      - 1: tackle
      # ...
    game-index: 153

This is all just regular ol’ YAML syntax. This is for English Red; there’d also be one for French Red, Spanish Red, etc. Ultimately, there’d be a lot of files, with a separate set for every game in every language.

The UI will have to figure out when some datum was the same in every game, but it frequently does that even now, so that’s not a significant new burden. If anything, it’s an improvement, since now it’ll be happening only in one place; right now there are a lot of ad-hoc “deduplication” steps done behind the scenes when we add new data.

I like this idea, but I still feel very uneasy about it for unclear reasons. It is a wee bit out there. I could just take this same approach of “fuck it, store everything” and still use a relational database. But look at this little chunk of data; it already tells you plenty of interesting facts about Bulbasaur and only Bulbasaur, yet it would need at least half a dozen tables to express in a relational database. And you couldn’t inspect just Bulbasaur, and you’d have to do multiple queries to actually get everything, and there’d be no useful way to work with the data independently of the app, and so on. Worst of all, the structure is often not remotely obvious from looking at the tables, whereas you can literally see it in YAML syntax.

There are other advantages, as well:

  • A schema can still be enforced Python-side, using the camel loader, which by the way will produce objects rather than plain dicts. (That’s what the !dex!pokemon tag is for; there’s a rough sketch of tag-based loading just after this list.)
  • If you don’t care about veekun at all and just want data, you have it in a straightforward format, for any version you like.
  • YAML libraries are fairly common, and even someone with very limited programming experience can make sense of the above structure. Currently we store CSV database dumps and offer a tool to load into an RDBMS, which has led to a number of bug reports about obscure compatibility issues with various databases, as well as numerous emails from people who are confused about how to load the data or even about what a database is.
  • It’s much more obvious what’s missing. If there’s no directory for Pokémon Yellow, surprise! That means we don’t have Pokémon Yellow. If the directory exists but there’s no places.yaml, guess what we’re missing! Figuring out what’s there and what’s not in a relational system is much more difficult; I only recently realized that we don’t have flavor text for any game before Black/White.
  • I’ll never again have to rearchitect the schema because a new game changed something I didn’t expect could ever change. Similarly, the UI can drop a lot of special cases for “this changes between games”, “this changes between generations”, etc. and treat it all consistently.
  • Pokémon forms can just be two Pokémon with the same species name. Fuck it, store everything. YAML even has “merge” syntax built right in that can elide the common parts. (This isn’t shown above, and I don’t know exactly what the syntax looks like yet.)
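
To illustrate the tag-to-object point from the list above, here’s roughly what the loading side could look like with plain PyYAML; camel wraps the same mechanism more nicely, and the Pokemon class here is only a placeholder:

import yaml

class Pokemon:
    # Placeholder object; the real schema would be enforced here.
    def __init__(self, **fields):
        self.__dict__.update(fields)

def construct_pokemon(loader, node):
    # Build a Pokemon from the tagged mapping node.
    return Pokemon(**loader.construct_mapping(node, deep=True))

# The %TAG directive makes !dex!pokemon expand to this full tag.
yaml.SafeLoader.add_constructor(
    'tag:veekun.com,2005:pokedex/pokemon', construct_pokemon)

doc = """%TAG !dex! tag:veekun.com,2005:pokedex/
---
bulbasaur: !dex!pokemon
  name: BULBASAUR
  types: [grass, poison]
"""
data = yaml.safe_load(doc)
print(data['bulbasaur'].types)   # ['grass', 'poison']

(Keys with hyphens in them, like base-stats, would need a little more care than the ** trick used here.)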

Good idea? Sure, maybe? Okay let’s look at some details, where the devil’s in.

Problems

There are several, and they are blocking my progress on this, and I only have three months to go.

Speed

There will be a lot of YAML, and loading a lot of YAML is not particularly quick, even with pyyaml’s C loader. YAML is a complicated format and this is a lot of text to chew through. I won’t know for sure how slow this is until I actually have more than a handful of games in this format, though.

I have a similar concern about memory use, since I’ll suddenly be storing a whole lot of identical data. I do have an idea for reducing memory use for strings, which is basically manual interning:

# Keep one canonical copy of each string; identical strings loaded from
# different files all end up pointing at the same object.
string_datum = big_ol_string_dict.setdefault(string_datum, string_datum)

If I load two YAML files that contain the same string, I can reuse the first one instead of keeping two copies around for no reason. (Strings are immutable in Python, so this is fine.)

Alas, I’ve seen this done before, and it does have a teeny bit of overhead, which might make the speed issue even worse.

So I think what I’m going to do is load everything into objects, resolve duplicate strings, and then… store it all in a pickle! Then the next time the app goes to load the data, if the pickle is newer than any of the files, just load the pickle instead. Pickle is a well-specified binary format (much faster to parse) and should be able to remember that strings have already been de-duplicated.

I know, I know: I said don’t use pickle. This is the one case where pickle is actually useful: as a disposable cache. It doesn’t leave the machine, so there are no security concerns; it’s not shared between multiple copies of the app at the same time; and if it fails to load for any reason at all, the app can silently trash it and load the data directly.
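
Concretely, the disposable-cache logic would be little more than this (load_all_yaml is a stand-in for the real, slow loading path):

import os
import pickle

def load_dex(yaml_paths, cache_path='dex.pickle'):
    try:
        # Trust the cache only if it's newer than every source file.
        newest_source = max(os.path.getmtime(p) for p in yaml_paths)
        if os.path.getmtime(cache_path) > newest_source:
            with open(cache_path, 'rb') as f:
                return pickle.load(f)
    except Exception:
        pass   # missing or unreadable cache: fall through and rebuild it

    data = load_all_yaml(yaml_paths)   # hypothetical slow path
    with open(cache_path, 'wb') as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
    return data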

I just hope that pickle will be quick enough, or this whole idea falls apart. Trouble is, I can’t know for sure until I’m halfway done.

Languages versus games

Earlier I implied that every single game would get its own set of data: English Red has a set of files, French Red has the same set of files, etc.

For the very early games, this directly reflects their structure: each region got its own cartridge with the game in a single language. Different languages might have different character sets, different UI, different encounters (Phanpy and Teddiursa were swapped in Gold and Silver’s Western releases), different mechanics (leech moves fail against a Substitute in gen 1, but only in Japanese), and different graphics (several Gold and Silver trainer classes were slightly censored outside of Japan). You could very well argue that they’re distinct games.

The increased storage space of the Nintendo DS changed things. The games were still released regionally, but every game contains every language’s flavor text and “genus” (the stuff you see in the Pokédex). This was an actual feature of the game: if you received a Pokémon caught in another language — made much easier by the introduction of online trading — then you’d get the flavor text for that language in your Pokédex.

The DS versions also use a filesystem rather than baking everything into the binary, so very little code needed to change between languages; everything of interest was in text files.

From X and Y onwards, there are no separate localizations. Every game contains the full names and descriptions of everything, plus the entire game script, in every language. In fact, you can choose which language to play the game in — in an almost unprecedented move for a Nintendo game, an American player with the American copy of the game can play the entire thing in Japanese.

(If this weren’t the case, you’d need an entire separate 3DS to do that, since the 3DS is region-locked. Thanks, Nintendo.)

The question, then, is how to sensibly store all this.


With the example YAML above, human-language details like names and flavor text are baked right into the Pokémon. This makes sense in the context of a single game, where those are properties of a Pokémon. If you take that to be the schema, then the obvious thing to do is to have a separate file for every game in every language: /red/en/pokemon.yaml, /red/fr/pokemon.yaml, and so on.

This isn’t ideal, since most of the other data is going to be the same. But those games are also the smallest, and anyway this captures the rare oddball difference like Phanpy and Teddiursa (though hell if I know how to express that in the UI).

With X and Y, everything goes out the window. There are effectively no separate games any more, so /x/en versus /x/fr makes no sense. It’s very clear now that flavor text — and even names — aren’t direct properties of the Pokémon, but of some combination of the Pokémon and the player.


One option is to put some flexibility in the directory structure.

/red
  /en
    pokemon.yaml
    pokemon-text.yaml
  /ja
    pokemon.yaml
    pokemon-text.yaml
...
/x
  pokemon.yaml
  /en
    pokemon-text.yaml
  /ja
    pokemon-text.yaml

A pokemon-text.yaml file would be a very simple mapping.

bulbasaur:
    name: BULBASAUR
    species: SEED
    flavor-text: "A strange seed was\nplanted on its\nback at birth.\fThe plant sprouts\nand
      grows with\nthis POKéMON."
ivysaur:
    ...

(Note that the lower-case keys like bulbasaur are identifiers, not names — they’re human-readable and obviously based on the English names, but they’re supposed to be treated as opaque dev-only keys. In fact I might try to obfuscate them further, to discourage anyone from title-casing them and calling them names.)

Something about this doesn’t sit well. I think part of it is that the structure in pokemon-text.yaml doesn’t represent a meaningful thing, which is somewhat at odds with the idea of loading each file directly into a set of objects. With this approach, I have to patchwork update existing objects as I go.

It’s kind of a philosophical quibble, granted.


An extreme solution would be to pretend that X and Y are several different games: have /x/en and /x/fr, even though they contain mostly the same information taken from the same source.

I don’t think that’s a great idea, especially since the merged approach will surely be how all future games work as well.


At the other extreme, I could treat the older games as though they were separate versions themselves. Add a grouping called “cartridge” or something that’s a subset of “version”. Many of the oddball differences are between the Japanese version and everyone else, too.

There’s even a little justification for this in the way the first few games were released. Japan first got Red and Green, which had goofy art and were very buggy; they were later polished and released as the single version Japanese Blue, which became the basis for worldwide releases of Red and Blue. Japanese Red is a fairly different game from American Red; Japanese Blue is closer to American Blue but still not really the same. veekun already has a couple of nods towards this, such as having separate Red/Green and Red/Blue sprite art.

That would lead to a list of games like jp-red, jp-green, jp-blue, ww-red, ww-blue, yellow (I think they were similar across the board), jp-gold, jp-silver, ww-gold, ww-silver, crystal (again, I don’t think there were any differences), and so on. The schema would look like:

bulbasaur:
    name:
        en: BULBASAUR
        fr: BULBIZARRE
        es: BULBASAUR
        ...
    flavor-text: 
        en: "A strange seed was\nplanted on its\nback at birth.\fThe plant sprouts\nand
          grows with\nthis POKéMON."
        ...

The Japanese games, of course, would only have Japanese entries. A huge advantage of this approach is that it also works perfectly with the newer games, where this is effectively the structure of the original data anyway.

This does raise the question of exactly how I generate such a file without constantly reloading and redumping it. I guess I could dump every language game at the same time. That would also let me verify that there are no differences besides text.

The downside is mostly that the UI would have to consolidate this, and the results might be a little funky. Merging jp-gold with ww-gold and just calling it “Gold” when the information is the same, okay, sure, that’s easy and makes sense. jp-red versus ww-red is a bit weirder of a case. On the other hand, veekun currently pretends Red and Green didn’t even exist, which is certainly wrong.

I’d have to look more into the precise differences to be sure this would actually work, but the more I think about it, the more reasonable this sounds. Probably the biggest benefit is that non-text data would only differ across games, not potentially across games and languages.

Wow, this might be a really good idea. And it had never occurred to me before writing this section. This rubber duck thing really works, thanks!

Forms

As mentioned above, rather than try to group forms into various different tiers based on how much they differ, I might as well just go whole hog and have every form act as a completely distinct Pokémon.

Doing this with YAML’s merge syntax would even make the differences crystal clear:

plant-wormadam:
    &plant-wormadam
    types: [bug, grass]
    abilities:
        1: anticipation
        2: anticipation
        hidden: overcoat
    moves:
        ...
    # etc
trash-wormadam:
    <<: *plant-wormadam  # means "merge in everything from this other node"
    types: [bug, ground]
    moves:
        ...
# Even better:
unown-a:
    &unown-a
    types: [psychic]
    name: ...
    # whatever else
unown-c:
    <<: *unown-a
unown-d:
    <<: *unown-a
unown-e:
    <<: *unown-a

One catch is that I don’t know how to convince PyYAML to output merge nodes, though it’s perfectly happy to read them.
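
For what it’s worth, the reading side really does just work, and the merge is flattened at load time (a quick check with a made-up document):

import yaml

doc = """
plant-wormadam: &plant-wormadam
  types: [bug, grass]
  abilities: [anticipation]
trash-wormadam:
  <<: *plant-wormadam   # merge, then override what differs
  types: [bug, ground]
"""
data = yaml.safe_load(doc)
# Nothing in the loaded data remembers that a merge key was involved.
print(data['trash-wormadam']['types'])       # ['bug', 'ground']
print(data['trash-wormadam']['abilities'])   # ['anticipation']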

But wait, hang on. This is a list of Pokémon, not forms. Wormadam is a Pokémon. Plant Wormadam is a form.

Right?

This distinction has haunted us rather thoroughly since we first tried to support it with Diamond and Pearl. The website is still a little weird about this: it acts as though “Plant Wormadam” is the name of a distinct Pokémon (because it affects type) and has distinct pages for Sandy and Trash Wormadam, but “Burmy” is a single page, even though Wormadam evolves from Burmy and they have the same forms. (In Burmy’s case, though, form only affects the sprite and nothing else.) You can also get distinct forms in search results, which may or may not be what you want — but it also may or may not make sense to “ignore” forms when searching. In many cases we’ve arbitrarily chosen a form as the “default” even when there isn’t a clear one, just so you get something useful when you type in “wormadam”.

Either way, there needs to be something connecting them. Merge keys are only a shorthand for writing YAML; they’re completely invisible to app code and don’t exist in the deserialized data.

YAML does have a nice shorthand syntax for a list of mappings:

bulbasaur:
-   name: ...
    types: ...
unown:
-   &unown-a
    name: ...
    types: ...
    form: a
-   <<: *unown-a
    form: b
-   <<: *unown-a
    form: c
...

Hm, now we lose the unown-a that functions as the actual identifier for the form.

Alternatively, there could be an entire separate type for sets of forms, since we do have tags here.

bulbasaur: !dex!pokemon
    name: ...
unown: !dex!pokemon-form-set
    unown-a: !dex!pokemon
        name: ...
    unown-b: !dex!pokemon
        ...

An unadorned Pokemon could act as a set of 1, then? I guess?

Come to think of it, this knits with another question: where does data specific to a set of forms go? Data like “can you switch between forms” and “is this purely cosmetic”. We can’t readily get that from the games, since it’s code rather than data.

It’s also extremely unlikely to ever change, since it’s a fundamental part of each multi-form Pokémon’s in-universe lore. So it makes sense to store that stuff in some separate manually-curated place, right? In which case, we could do the same for storing which sets of forms “count” as the same Pokémon. That is, the data files could contain plant-wormadam and sandy-wormadam as though they were completely distinct, and then we’d have our own bits on top (which we need anyway) to say that, hey, those are both forms of the same thing, wormadam.

That mirrors how the actual games handle this, too — the three Wormadam forms have completely separate stat/etc. structs.

Ah, but the games don’t store the Burmy or Unown forms separately, because they’re cosmetic. How does our code handle that? I guess there’s only one unown, and then we also know that there are 28 possible sprites?

But Arceus’s forms have different types, and they’re not stored separately either. (I think you could argue that Arceus is cosmetic-only, the cosmetic form is changed by Arceus’s type, and Arceus’s type is really just changed by Arceus’s ability. I’m pretty sure the ability doesn’t work if you hack it onto any other Pokémon, but I can’t remember whether Arceus still changes type if hacked to have a different ability.)

Relying too much on outside information also makes the data a teeny bit harder for anyone else to use; suddenly they have three Wormadams, none of which are quite called “Wormadam”, but all of which share the same Pokédex number. (Oh, right, we could just link them by Pokédex number.) That feels goofy, but if what you’re after is something with a definitive set of types, there is nothing called simply “Wormadam”.

Oh, and there’s a minigame that only exists in Heart Gold and Soul Silver, but that has different stats even for cosmetic forms. Christ.

I don’t think there’s any perfect answer here. I have a list of all the forms if you’d like to see more of this madness.

The Python API

So you want to load all this data and do stuff with it. Cool. There’ll be a class like this:

class Pokemon(Locus):
    types = List(Type, min=1, max=2)
    growth_rate = Scalar(GrowthRate)
    game_index = Scalar(int)
    ...

You know, a little declarative schema that matches the YAML structure. I love declarative classes.
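
Purely as a sketch of what might sit behind those declarations (nothing here is decided; built-in types are used just so it runs standalone):

class Field:
    # Remembers the expected type plus any extra options (min, max, ...).
    def __init__(self, type_, **options):
        self.type_ = type_
        self.options = options

class Scalar(Field):
    pass

class List(Field):
    pass

class Locus:
    @classmethod
    def fields(cls):
        # Walk the MRO so subclasses inherit and can override declarations.
        found = {}
        for klass in reversed(cls.__mro__):
            for name, value in vars(klass).items():
                if isinstance(value, Field):
                    found[name] = value
        return found

class Pokemon(Locus):
    types = List(str, min=1, max=2)
    game_index = Scalar(int)

print(sorted(Pokemon.fields()))   # ['game_index', 'types']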

The big question here is what a Pokemon is. (Besides whether it’s a form or not.) Is it a wrapper around all the possible data from every possible game, or just the data from one particular game? Probably the former, since the latter would mean having some twenty different Pokemon all called bulbasaur and that’s weird.

(Arguably, the former would be wrong because much of this stuff only applies to the main games and not Mystery Dungeon or Ranger or whatever else. That’s a very different problem that I’ll worry about later.)

I guess then a Pokemon would wrap all its attributes in a kind of amalgamation object:

print(pokemon)                          # <Pokemon: bulbasaur>
print(pokemon.growth_rate)              # <MultiValue: bulbasaur.growth_rate>
current = Game.latest
print(current)                          # <Game: alpha-sapphire>
print(pokemon.growth_rate[current])     # <GrowthRate: medium-slow>
pokemonv = pokemon.for_version(current)
print(pokemonv)                         # <Pokemon: bulbasaur/alpha-sapphire>
print(pokemonv.growth_rate)             # <GrowthRate: medium-slow>

There’s one more level you might want: a wrapper that slices by language solely for your own convenience, so you can say print(some_pokemon.name) and get a single string rather than a thing that contains all of them.

Should you be able to slice by language but not by version, so pokemon.name is a thing containing all English names across all the games? I guess that sounds reasonable to want, right? It would also let you treat text like any other property, which could be handy.

print(pokemon)                          # <Pokemon: bulbasaur>
print(pokemon.growth_rate)              # <MultiValue: bulbasaur.growth_rate>
# I'm making up method names on the fly here, so.
# Also there will probably be a few ways to group together changed properties,
# depending entirely on what the UI needs.
print(pokemon.growth_rate.meld())       # [((...every game...), <GrowthRate: medium-slow>)]
print(pokemon.growth_rate.unify())      # <GrowthRate: medium-slow>
pokemonl = pokemon.for_language(Language['en'])
print(pokemonl.name)                    # <MultiValue: bulbasaur.name>
print(pokemonl.name.meld())             # [((<Game: ww-red>, ...), 'BULBASAUR'), ((<Game: x>, ...), 'Bulbasaur')]
print(pokemonl.name.unify())            # None, maybe ValueError?

(Having written all of this, I suddenly realize that I’m targeting Python 3, where I can use é in class names. Which I am probably going to do a lot.)

I think… this all… seems reasonable and doable. It’ll require some clever wrapper types, but that’s okay.
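
For instance, one of those wrappers could be little more than a dict keyed by game, sketched here with the made-up meld()/unify() names from above (pure speculation, and it assumes the values are hashable):

class MultiValue:
    # One property of one Pokémon, with a possibly different value per game.
    def __init__(self, values_by_game):
        self._values = dict(values_by_game)

    def __getitem__(self, game):
        return self._values[game]

    def meld(self):
        # Group games that share a value: [((game, game, ...), value), ...]
        groups = {}
        for game, value in self._values.items():
            groups.setdefault(value, []).append(game)
        return [(tuple(games), value) for value, games in groups.items()]

    def unify(self):
        # The single common value if every game agrees, else None.
        distinct = set(self._values.values())
        return distinct.pop() if len(distinct) == 1 else None

growth = MultiValue({'ww-red': 'medium-slow', 'x': 'medium-slow'})
print(growth.unify())   # medium-slow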

Hmm

I know these are relatively minor problems in the grand scheme of things. People handle hundreds of millions of rows in poorly-designed MySQL tables all the time and manage just fine. I’m mostly waffling because this is a lot of (hobby!) work and I’ve already been through several of these rearchitecturings and I’m tired of discovering the dozens of drawbacks only after all the work is done.

Writing this out has provided some clarity, though, and I think I have a better idea of what I want to do. So, thanks.

I’d like to have a proof of concept of this, covering some arbitrary but representative subset of games, by the end of the month. Keep your eyes peeled.

Terry Joins Backblaze as Account Executive

Post Syndicated from Yev original https://www.backblaze.com/blog/terry-joins-backblaze-account-executive/

Backblaze continues to grow and our latest offering, Backblaze B2, is off to a running start. We needed an Account Executive to help bring integrators, partners, and businesses into the B2 fold. That’s where Terry came in! Let’s learn a bit more about her, shall we?

What is your Backblaze Title?

Account Executive.

Where are you originally from?

The Emerald City (that being Wichita, Kansas no less).

What attracted you to Backblaze?

The excitement of helping to grow Backblaze. 

What’s your dream job?

2 days a week telecommuting with annual salary of $1 million.

Favorite place you’ve traveled?

I enjoy traveling and have been to some awesome places. Italy is special though for rounding out my favorite things… beautiful sights, architecture, friendly people, great food & wine!

Of what achievement are you most proud?

My daughter who is a kind and good human being.

Coke or Pepsi?

Neither – wine is healthier!

Favorite food?

Too many to mention.

Telecommuting and a salary of $1M doesn’t sound too bad to us either, and we hear you on the wine, Terry. Normally we keep the fridges stocked with soda and beer, but we’ll see if we can’t squirrel away some wine for you as well. Welcome aboard!

The post Terry Joins Backblaze as Account Executive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Friday’s security updates

Post Syndicated from n8willis original http://lwn.net/Articles/696545/rss

Arch Linux has updated firefox (multiple vulnerabilities), jdk7-openjdk (multiple vulnerabilities), jre7-openjdk (multiple vulnerabilities), and jre7-openjdk-headless (multiple vulnerabilities).

Debian has updated openjdk-7 (multiple vulnerabilities).

Debian-LTS has updated curl (multiple vulnerabilities) and mysql-5.5 (multiple vulnerabilities).

Fedora has updated collectd (F23; F24: code execution), dietlibc (F23; F24: insecure default PATH), perl (F24: privilege escalation), perl-DBD-MySQL (F24: code execution), and python-autobahn (F24: insecure origin validation).

openSUSE has updated MozillaFirefox, mozilla-nss (13.2, Leap 42.1: multiple vulnerabilities).

Oracle has updated kernel (O7; O6: multiple vulnerabilities; O7; O6; O6; O5: privilege escalation) and squid (O6: code execution).

Scientific Linux has updated squid (SL6: code execution).

SUSE has updated kernel (SLE12-LP: multiple vulnerabilities).

Ubuntu has updated firefox (12.04, 14.04, 16.04: multiple vulnerabilities), libreoffice (12.04: code execution), oxide-qt (14.04, 16.04: multiple vulnerabilities), and qemu, qemu-kvm (12.04, 14.04, 16.04: multiple vulnerabilities).

Frequent Password Changes Is a Bad Security Idea

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/frequent_passwo.html

I’ve been saying for years that it’s bad security advice, that it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:

By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.

“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”

The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.
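
Just as a toy illustration (this has nothing to do with the researchers' actual models), transformation rules like the quoted ones are trivial to enumerate mechanically:

def candidate_transformations(old):
    # Toy versions of the "small change" patterns described above.
    candidates = set()
    # Toggle the case of one letter at a time: tarheels#1 -> tArheels#1
    for i, ch in enumerate(old):
        if ch.isalpha():
            candidates.add(old[:i] + ch.swapcase() + old[i + 1:])
    # Bump or repeat a trailing digit: tarheels#1 -> tarheels#2, tarheels#11
    if old and old[-1].isdigit():
        candidates.add(old[:-1] + str((int(old[-1]) + 1) % 10))
        candidates.add(old + old[-1])
    return candidates

for candidate in sorted(candidate_transformations('tarheels#1'))[:5]:
    print(candidate)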

That data refers to this study.

My advice for choosing a secure password is here.

Pi 3 booting part II: Ethernet

Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/pi-3-booting-part-ii-ethernet-all-the-awesome/

Yesterday, we introduced the first of two new boot modes which have now been added to the Raspberry Pi 3. Today, we introduce an even more exciting addition: network booting a Raspberry Pi with no SD card.

Again, rather than go through a description of the boot mode here, we’ve written a fairly comprehensive guide on the Raspberry Pi documentation pages, and you can find a tutorial to get you started here. Below are answers to what we think will be common questions, and a look at some limitations of the boot mode.

Note: this is still in beta testing and uses the “next” branch of the firmware. If you’re unsure about using the new boot modes, it’s probably best to wait until we release it fully.

What is network booting?

Network booting is a computer’s ability to load all its software over a network. This is useful in a number of cases, such as remotely operated systems or those in data centres; network booting means they can be updated, upgraded, and completely re-imaged, without anyone having to touch the device!

The main advantages when it comes to the Raspberry Pi are:

  1. SD cards are difficult to make reliable unless they are treated well; they must be powered down correctly, for example. A Network File System (NFS) is much better in this respect, and is easy to fix remotely.
  2. NFS file systems can be shared between multiple Raspberry Pis, meaning that you only have to update and upgrade a single Pi, and are then able to share users in a single file system.
  3. Network booting allows for completely headless Pis with no external access required. The only desirable addition would be an externally controlled power supply.

I’ve tried doing things like this before and it’s really hard editing DHCP configurations!

It can be quite difficult to edit DHCP configurations to allow your Raspberry Pi to boot, while not breaking the whole network in the process. Because of this, and thanks to input from Andrew Mulholland, I added support for proxy DHCP, as used with PXE booting computers.

What’s proxy DHCP and why does it make it easier?

Standard DHCP is the protocol that gives a system an IP address when it powers up. It’s one of the most important protocols, because it allows all the different systems to coexist. The problem is that if you edit the DHCP configuration, you can easily break your network.

So proxy DHCP is a special protocol: instead of handing out IP addresses, it only hands out the TFTP server address. This means it will only reply to devices trying to do netboot. This is much easier to enable and manage, because we’ve given you a tutorial!
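
For reference, a proxy DHCP setup with dnsmasq typically boils down to a handful of lines like these; treat the subnet, TFTP root, and service string as placeholders and follow the linked tutorial for the exact values:

# /etc/dnsmasq.conf -- illustrative proxy-DHCP + TFTP setup for Pi netboot
port=0                          # don't run dnsmasq's DNS server
dhcp-range=192.168.1.255,proxy  # proxy DHCP on this subnet; no addresses handed out
log-dhcp
enable-tftp
tftp-root=/tftpboot
pxe-service=0,"Raspberry Pi Boot"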

Are there any bugs?

At the moment we know of three problems which need to be worked around:

  • When the boot ROM enables the Ethernet link, it first waits for the link to come up, then sends its first DHCP request packet. This is sometimes too quick for the switch to which the Raspberry Pi is connected: we believe that the switch may throw away packets it receives very soon after the link first comes up.
  • The second bug is in the retransmission of the DHCP packet: the retransmission loop is not timing out correctly, so the DHCP packet will not be retransmitted.

The solution to both these problems is to find a suitable switch which works with the Raspberry Pi boot system. We have been using a Netgear GS108 without a problem.

  • Finally, the failing timeout has a knock-on effect. This means it can require the occasional random packet to wake it up again, so having the Raspberry Pi network wired up to a general network with lots of other computers actually helps!

Can I use network boot with Raspberry Pi / Pi 2?

Unfortunately, because the code is actually in the boot ROM, this won’t work with Pi 1, Pi B+, Pi 2, and Pi Zero. But as with the MSD instructions, there’s a special mode in which you can copy the ‘next’ firmware bootcode.bin to an SD card on its own, and then it will try and boot from the network.

This is also useful if you’re having trouble with the bugs above, since I’ve fixed them in the bootcode.bin implementation.

Finally, I would like to thank my Slack beta testing team who provided a great testing resource for this work. It’s been a fun few weeks! Thanks in particular to Rolf Bakker for this current handy status reference…

Current state of network boot on all Pis

The post Pi 3 booting part II: Ethernet appeared first on Raspberry Pi.

Telegram Hack – Possible Nation State Attack By Iran

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/NATE__J1uuA/

So there’s been a lot of news lately about the Telegram hack and how 15 million accounts were compromised, which is not technically true. There are two vectors of attack at play here, both of which involve Iranian users but which are not connected (other than the attackers probably being the same group). So the two attacks […]

The post Telegram Hack…

Read the full post at darknet.org.uk

Let’s Encrypt Root to be Trusted by Mozilla

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2016/08/05/le-root-to-be-trusted-by-mozilla.html

The Let’s Encrypt root key (ISRG Root X1) will be trusted by default in Firefox 50, which is scheduled to ship in Q4 2016. Acceptance into the Mozilla root program is a major milestone as we aim to rely on our own root for trust and have greater independence as a certificate authority (CA).

Public CAs need their certificates to be trusted by browsers and devices. CAs that want to issue independently under their own root accomplish this by either buying an existing trusted root, or by creating a new root and working to get it trusted. Let’s Encrypt chose to go the second route.

Getting a new root trusted and propagated broadly can take 3-6 years. In order to start issuing widely trusted certificates as soon as possible, we partnered with another CA, IdenTrust, which has a number of existing trusted roots. As part of that partnership, an IdenTrust root “vouches for” the certificates that we issue, thus making our certificates trusted. We’re incredibly grateful to IdenTrust for helping us to start carrying out our mission as soon as possible.
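
To make that “vouching” concrete, here is a minimal sketch of mine (not code from Let’s Encrypt) that reads a leaf certificate and its intermediate, in the layout an ACME client typically writes out, and prints who issued what; the file names cert.pem and chain.pem and the use of the Python cryptography package are assumptions for the example:

    # Sketch: show how a leaf certificate chains up through its intermediate.
    # cert.pem (the leaf) and chain.pem (the intermediate) are placeholder file names.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    with open("cert.pem", "rb") as f:
        leaf = x509.load_pem_x509_certificate(f.read(), default_backend())
    with open("chain.pem", "rb") as f:
        intermediate = x509.load_pem_x509_certificate(f.read(), default_backend())

    # The leaf is issued by the intermediate; the intermediate is issued by a
    # root the browser already trusts.
    print("leaf subject:        ", leaf.subject.rfc4514_string())
    print("leaf issuer:         ", leaf.issuer.rfc4514_string())
    print("intermediate subject:", intermediate.subject.rfc4514_string())
    print("intermediate issuer: ", intermediate.issuer.rfc4514_string())

Run against a Let’s Encrypt certificate today, the intermediate’s issuer is the IdenTrust root described above; once ISRG Root X1 is trusted directly, chains can instead terminate at Let’s Encrypt’s own root.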

Chain of Trust Between Firefox and Let’s Encrypt Certificates

However, our plan has always been to operate as an independently trusted CA. Having our root trusted directly by the Mozilla root program represents significant progress towards that independence.

We have also applied to the Microsoft, Apple, Google, Oracle and Blackberry root programs. We look forward to acceptance into these programs as well.

Let’s Encrypt depends on industry and community support. Please consider getting involved, and if your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org.

Why You Should Speak At & Attend LinuxConf Australia

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/08/04/lca2016.html

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

Monday 1 February 2016 was the longest day of my life, but I don’t mean
that in the canonical, figurative, and usually negative sense of that
phrase. I mean it literally and in a positive way. I woke up that morning
in Amsterdam in the Netherlands — having taken an evening train the night
before from Brussels, Belgium with my friend and colleague Tom Marble.
Tom and I had just spent the weekend at FOSDEM 2016, where he and I
co-organize the Legal and Policy Issues DevRoom (with our mutual friends
and colleagues, Richard Fontana and Karen M. Sandler).

Tom and I headed over to AMS airport around 07:00 local time, found some
breakfast and boarded our flights. Tom was homeward bound, but I was about
to do the crazy thing that he’d done in the reverse a few years before: I
was speaking at FOSDEM and LinuxConf Australia, back-to-back. In fact,
because the airline fares were substantially cheaper this way, I didn’t
book a “round the world” flight, but instead two back-to-back
round-trip tickets. I boarded the plane at AMS at 09:30 that morning
(local time), and landed in my (new-ish) hometown of Portland, OR as
afternoon there began. I went home, spent the afternoon with my wife,
sister-in-law, and dogs, washed my laundry, and repacked my bag. My flight
to LAX departed at 19:36 local time, a little after US/Pacific sunset.

I crossed the Pacific Ocean and the international dateline, leaving a day
on deposit to pick up on the way back. After 24 hours of almost literally
chasing the sun, I arrived in Melbourne on the morning of Wednesday 3
February, rode a shuttle bus, dumped my bags at my room, and arrived just
in time for the Wednesday afternoon tea break at LinuxConf Australia 2016
in Geelong.

Nearly everyone who heard this story — or saw me while it was
happening — asked me the same question: Why are you doing this?
The five to six people packed in with me in my coach section on the
LAX→SYD leg are probably still asking it, because I had an allergic
attack of some sort for most of the flight and couldn’t stop coughing,
even with two full bags of Fisherman’s Friends over those 15 hours.

But, nevertheless, I gave a simple answer to everyone who questioned my
crazy BRU→AMS→PDX→LAX→SYD→MEL itinerary: FOSDEM and LinuxConf AU are
two of the most important events on the Free Software annual calendar.
There’s just no question. I’ll write more about FOSDEM sometime soon, but
I’ll dedicate the rest of this post to LinuxConf Australia (LCA).

One of my biggest regrets in Free Software is that I was once — and
you’ll be surprised by this given my story above — a bit squeamish
about the nearly 15 hour flight to get from the USA to Australia, and
therefore I didn’t attend LCA until 2015. LCA began way back in 1999.
Keep in mind that, other than FOSDEM, no major, community-organized events
have survived from that time. But LCA has the culture and mindset of the
kinds of conferences that our community made in 1999.

LCA is community-organized and operated. Groups of volunteers plan the
event each year. In the tradition of science fiction conventions and
other hobbyist activities, groups bid for the conference and offer their
time and effort to make the conference a success. They have an annual
hand-off meeting to be sure the organizational lessons are passed from
one committee to the next, and some volunteers even repeat their
involvement year after year. For organizational structure, they rely on a
non-profit organization, Linux Australia, to assist with handling the
funds and providing infrastructure (just like Conservancy does for our
member projects and their conferences!).

I believe fully that the success of software freedom, and of GNU/Linux in
particular, has not primarily been because companies allow developers to
spend some of their time coding on upstream. Sure, many Free Software
projects couldn’t survive without that component, but what really makes
GNU/Linux, or any Free Software project, truly special is that there’s a
community of users and developers who use, improve, and learn about the
software because it excites and interests them. LCA is one of the few
events specifically designed to invite that sort of person to attend, and
for almost an entire generation it has stood in stark contrast to the
highly corporate, for-profit events that slowly took over our community
in the years that followed LCA’s founding. (Remember all those years of
LinuxWorld Expo? I wasn’t even sad when IDG stopped running it!)

Speaking particularly of earlier this year, LCA 2016 in Geelong, Australia
was a particularly profound event for me. LCA is one of the few events
that accepts my rather political talks about what’s happening in Open
Source and Free Software, so I gave a talk on Friday 5 February 2016
entitled Copyleft For the Next Decade: A Comprehensive Plan, which was
recorded, so you can watch it. I do warn everyone that the jokes did not
go over well (mine never do), so after I finished, I was feeling a bit
down that I hadn’t made the talk entertaining enough. But then, something
amazing happened: people started walking up to me and telling me how
important my message was. One individual even came up and told me that he
was excited enough that he’d like to match any donation that Software
Freedom Conservancy received during LCA 2016. Since it was the last day
of the event, I quickly went to one of the organizers, Kathy Reid, and
asked if they would announce this match during the closing ceremonies;
she agreed. In a matter of just an hour or two, I’d gone from believing
my talk had fallen flat to realizing that — regardless of whether I’d
presented well — the concepts I discussed had connected with people.

Then, I sat down in the closing session. I started to tear up slightly
when the organizers announced the donation match. Within 90 seconds,
though, that turned to full tears of joy when the incoming President of
Linux Australia, Hugh Blemings, came on stage and said:

[I’ll start with] a Software Freedom Conservancy thing, as it turns out.
… I can tell that most of you weren’t at Bradley’s talk earlier on
today, but if there is one talk I’d encourage you to watch on the
playback later it would be that one. There’s a very very important
message in there and something to take away for all of us. On behalf of
the Council I’d like to announce … that we’re actually in the
process of making a significant donation from Linux Australia to Software
Freedom Conservancy as well. I urge all of you to consider contributing
individually as well, and there is much left for us to be done as a
community on that front.

I hope that this post helps organizers of events like LCA fully understand
how much something like this means to those of us who run small charities —
and not just with regard to the financial contributions. Knowing that the
organizers of community events feel so strongly positive about our work
really keeps us going. We work hard and spend much time at Conservancy to
serve the Open Source and Free Software community, and knowing the work is
appreciated inspires us to keep working. Furthermore, we know that without
these events, it’s much tougher for us to reach others with our message of
software freedom. So, for us, the feeling is mutual: I’m delighted that
the Linux Australia and LCA folks feel so positively about Conservancy, and
I now look forward to another 15-hour flight for the next LCA.

And, on that note, I chose a strategic time to post this story. On Friday
5 August 2016, the CFP for LCA 2017 closes. So, now is the time for all
of you to submit a talk. If you regularly speak at Open Source and Free
Software events, or have been considering it, this event really needs to
be on your calendar. I look forward to seeing all of you in Hobart this
January.
