Tag Archives: 2013

If War Comes – 2: Enot's Advice

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2143


Under the previous post in this series you can find several links. Two are hard reading, but thought-provoking. One is the account of a man who lived through occupation during the civil war in Yugoslavia. The other is advice from a former operative of the GRU (Russian military intelligence), known by the nickname Enot ("the Raccoon"), on how to survive a war.

In some future post, if I find the time, I will share some thoughts about them. In this one I translate part of Enot's advice. It is not entirely applicable to Bulgaria, for a host of reasons. For example, knowing what our politicians are like, I would expect us to learn that we have been defeated and conquered not from combat, but over the radio. Probably from those same politicians…

But I may be wrong: if a real war breaks out here, the situation will be much closer to the Russian one than to the Swedish one. As the society, so its reality. Enot's advice is cynical and brutal, but at times it can save your life. You decide for yourselves which parts of it to accept and which not.

—-

The scenarios may differ. The essence is always the same: you must survive the first two weeks, after that "we'll see." In the first scenario, don't strain yourself. If Chechens or similar scum pour into Moscow, people willing to do the "work" against them will be found. You don't need that. And so rule number one is born: don't stick your nose anywhere, least of all into a fight. There will always be cutthroats for that. Your task is to "preserve the labor force," that is, yourself.

In the second scenario you will have to fight, if you want to, of course. In the third you may as well not fight anymore. We've pissed away the Motherland. You can go out "dying a hero's death," or you can go on living and working, if they let you.

The main thing is that there are three possible levels of trouble: local fighting, full-scale war, and hopeless occupation followed by dismemberment of the country. I explain all this so you understand one very important point that people tend to forget under stress: your actions depend on the situation. No improvisation, only common sense.

Gunfire in the street does not mean the end of the world. Even if the triage point for the wounded is in your building's entrance and there is a 120-mm mortar with its crew in the yard, that doesn't mean you must flee immediately. (Well, if there is a mortar, change location urgently; they will definitely shell it, and you along with it.)

Yes, gunfire and corpses still mean nothing, strange as that may sound to you. The untimely maneuver of "getting as far away as possible" can cost you your life. Don't fuss, don't panic; instead, look carefully at WHO is shooting at WHOM, and most importantly, WHY.

While We Are in the City

If, amid the events and the chaos, the idea of fleeing strikes you, let me briefly lay out your chances. In a city the size of Moscow or Saint Petersburg, your chances of survival are very small. Cities do not hold sufficient food stocks, and during unrest no one will be handing any out. Food exists only in shops and warehouses (forget the warehouses; the army or the gangs will be there instantly).

Buying food makes sense only during the first day, while it is still being sold. After that the shops will be closed and looted by their own staff. If you've missed the window for "shopping," take the gun and go "privatize." I advise you to bring a neighbor, and not just one. First, you'll carry away more food, and you'll need someone to cover you against other fine fellows like yourself whom you'll meet there or on the way back. Second, the firepower of a smoothbore shotgun is close to zero; a few more barrels will only help. Remember, though: gather too many people and you become a "group target." Also, dividing the loot can turn rather sad. Three or four people, no more.

Naturally, you should have stocks of food and water at home. Water is the bigger problem; no one is likely to deliver it. If the tap runs dry, you have the toilet cistern. Don't even think of flushing it! It's the same tap water, just stored in the cistern. On it you can live a whole week in luxury (or at least be certain not to die).

If you get the chance, grab a jerrycan or two and gut a gas station. Fuel is very important. But don't keep it at home: the vapors are highly flammable. Make a hiding place, ideally in the attic; the basement will be full of people sheltering from the shelling.

You are unlikely to be killed deliberately. In troubled times nobody wastes ammunition on unarmed people. That's no reason to stroll around whistling before bedtime, of course, but remember: you are not target No. 1. The experience of Grozny shows that men fighting flat-out simply ignore civilians; they have no time for them. You can always be punished for stupidity, especially at dusk, but things are not all bad.

Remember: do not position yourself near a TV center or any infrastructure site. If armed men burst into your apartment and "inform" you that it is now their machine-gun position, tell them "OK, make yourselves at home" and disappear. None of this "it's my property, I'm not going anywhere." You get a bullet in the head instantly; they have no time for you, and if you're in the way, they solve the problem the quickest and easiest way. Leave even if they don't make you. Their opponents will very soon be shelling that position, and not with pebbles from a slingshot.

Avoid hanging around hospitals too. Both sides will want to bring their wounded there; the building is strategic, and a battle may well start over it, which means gunfire. And once bombing starts, someone is bound to hit the hospital, don't doubt it for a second. The people who wrote the Geneva Convention won't be on the battlefield, so compliance with it is somewhat optional. As in "Pirates of the Caribbean": "That's not legislation. It's more like a set of guidelines."

Don't forget: once a mess like this starts, you no longer have property. And I strongly advise you not to cling to any. You kill if someone tries to get at your food and water; everything else is trifles. If you've managed to trade your car for an assault rifle at the nearest police station, well done. Even if it's a brand-new Mercedes for a beat-up folding Kalashnikov with only 2-3 magazines, still well done. You don't need the car anymore. You are 100% not driving it out of the city, and the urge to riddle it with bullets will be strong.

While in the city, avoid camouflage clothing. People in camouflage get shot at.

So: street fighting has begun in our city. Given the circumstances, or for tactical reasons, we have decided to stay (although the idea is almost always a bad one). We know the shops will start being looted by the second day; there are weapons at the nearest police station; there is a little water in the toilet cistern (if you manage to get hold of a bottle or two of mineral water from a shop, even better). You no longer have property. Whoever is armed is right. Wherever there are armed men, you must not be. Whoever dresses like a soldier fights, whether he wants to or not. A fuel cache is a big plus; at some point fuel may become as valuable as weapons and ammunition. Don't even think of approaching important sites.

One more thing. NEVER, ANYWHERE, GO OUT JUST BECAUSE, LEAST OF ALL "TO SEE WHAT'S HAPPENING." In urban combat most things are done quietly, recon-and-sabotage style. If a recon group spots you, it will 100% move to cut your throat. In the movies they press a finger to their lips and move on; in real life they slit your throat on the spot. Their survival and the success of their mission depend on there being no witnesses. If a group has taken up positions in the chaos of an urban battle, the moment you see those positions it will do the same. Even a machine-gun crew that has just dug in at an intersection will feel no warmth toward you. If they spot you from afar and beckon you over "for a chat," run as fast as your legs will carry you. They may smile, look friendly, lure you with provisions; once they get hold of you, you go silent forever. Eliminating the "locals saw us" problem is a well-rehearsed procedure. So no questions, and don't stick your little horns out of your shell.

If We Are Leaving the City

The problem is that the city is encircled, or fighting is going on inside it, or both. If you missed the moment the fighting began, that is very bad, but you are not doomed yet. There is always a way out of a city. Whatever the situation, there are two stages. First: moving through the city. Second: getting out through the encirclement.

Large population centers are ringed by beltways, and those are the main problem. Motorized infantry in APCs, moving on asphalt, can encircle a city in a few hours. Once that has happened, forget any thought of "slipping through unnoticed." "Unexplained movement" in combat conditions invariably means a burst from a rifle or machine gun. The golden rule "I don't shoot at what I can't see" often doesn't apply. You get through an encirclement by voluntarily surrendering. But we haven't got that far yet.

Important: do not get into vehicles! Any transport in the city will invariably be fired on.

So, you have a backpack of survival goodies. Ideally, compact weapons (a folding Kalashnikov plus a pistol, the standard cop loadout). Plus one more small pouch with the same contents as the backpack, but on a much more modest scale. Say, three days of food in the backpack and one more day's worth in the pouch. The pouch is tied to your body so it doesn't show, and you never take it off. Very important: carry separately, in your underwear if need be, all the valuables you can find.

Drape a white sheet over the backpack and fasten it securely. This is done so that any soldier who sees you (and many will; don't hope to slip by unnoticed) understands that you are a CIVILIAN and doesn't give away his position because of you. They will track you in their crosshairs, but they will let you pass. Of course, don't parade down the main street, but don't smear yourself with mud Schwarzenegger-style either; you'll be spotted and shot, because they won't know who or what you are. Ditch the camouflage, even if it's the only decent clothing you have.

You are a civilian and must look like one, with the white backpack as a white flag; otherwise you'll be shot. Your whole appearance must say that you are of no interest whatsoever, you are simply clearing out. Naturally, your weapons stay with you; you just conceal them instead of carrying them openly. The pistol in your pocket, ready to fire. If you've acquired an assault rifle, stock folded and under your jacket. Better flip its safety off in advance; it can be stiff to switch and you may fumble. A round in the chamber, of course. Carry nothing bulky on your chest beyond the concealed rifle: if you have to hit the dirt, lying on a bag that props you up off the ground makes you an easier target.

If an openly armed man walks toward you, stop, and "no tricks": his comrades are in position. He probably wants to rob you; if he wanted to shoot you, you'd be shot already. If he demands the backpack, hand it over (they'll take it from you at the exit from the encirclement anyway). Ask him to leave you the sheet (you'll throw it over your back) and the small pouch. This is a psychological moment: we calmly hand over the big things and ask to keep the small ones. As a rule people agree; that is what we've been counting on from the start. No one will let you walk off with a pile of goodies; everyone needs them.

If asked about weapons, say you have an assault rifle (don't draw it or show it, just state it in a calm voice), and ask to keep it. They will take it, 100%, but this lets you keep the pistol. (Don't mention it: once you hand over the rifle, they're unlikely to frisk you.) The rifle will always be noticed, and since you give it up calmly and without resistance, you're clearly not trouble. In effect, you're trading your own things for your own things. If you don't have a pistol, carry a disassembled shotgun; the point is to hand over the "big scary" weapon. We assume that, with the city already encircled and street fighting under way, you weren't watching TV ads but had already done your shopping at the police station.

If you manage to cover 10-15 kilometers in a day, that is excellent speed. Remember, you won't be moving in a straight line but with many detours because of the street fighting. If your home is 10 km from the beltway on the map, there's no guarantee you'll cover them in a day. Walk in DAYLIGHT. Any madman who sets off at night is guaranteed a bullet. You walk by day, with the white sheet, surrendering; start hiding and you will draw fire.

When you reach the encirclement or the blocking cordons, throw away the pistol and, hands held high, shouting that you surrender, wave the white sheet and walk toward the soldiers. If possible, head not just anywhere but for a checkpoint or guard post. Walk half a kilometer toward it with your hands up if you must. The idea is that checkpoints and posts are fortified and often set up to "receive" people; the soldiers feel more comfortable there, and the odds of being shot are lower. You will be searched down to the seams. You've discarded the weapon in advance; you're a meek citizen. An officer will call you over, probably a lieutenant, hardly anyone more senior; no need to grovel. Find a way to speak with him in private and offer valuables in exchange for the right to pass. If it all works out, you are out of the city.

Along the way you will certainly lose almost everything of value and all your weapons, and you will burn at least 1-2 days on this laughable distance. THAT IS NORMAL. An encircled city is a huge prison camp. Give up everything, just get out. Famine will begin inside, and soon.

So: you move carefully, but you don't skulk like a "scout." We are dressed as civilians, with a white rag on our back (from the front it is visible that we carry no weapon, but from behind it isn't; we need the insurance). We have a small pouch of the barest essentials. We have valuables (gold) as currency. We have weapons, which we must not forget to part with before reaching the military post (if they find them on you, you'll have a hard time passing for a civilian; you'll be written off as a deserter or a disguised enemy). If you've managed to get out of the city with all this in 1 to 3 days, moving from district to district, that is normal.

From personal experience: ordinary peanut Snickers bars are very nourishing. Six double bars are the daily calorie norm for a man. You are unlikely to have any way of heating food. Snickers is no smorgasbord, but in wartime don't be picky about food. The Snickers idea, frankly, is stolen from the Chechens; they get through the war on them. You can eat on the move, and it's a great idea: sugar, glucose, lifts the mood (you will badly need that, because you'll be in a physical and psychological pit).
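The "six bars a day" claim is easy to sanity-check with arithmetic. A minimal sketch, using my own assumed figures (the text gives none): roughly 490 kcal per double/king-size peanut bar and roughly 2,900 kcal/day for an active adult man.

```python
# Assumed figures, not from the text: kcal per "double" bar and
# a daily energy need for an active adult man.
KCAL_PER_DOUBLE_BAR = 490  # assumption; actual labels vary
DAILY_NEED_KCAL = 2900     # assumption: high activity level

bars = 6
total = bars * KCAL_PER_DOUBLE_BAR  # 2940 kcal
print(total, "kcal vs. a need of", DAILY_NEED_KCAL)
```

Under these assumptions the claim is in the right ballpark: six bars land within a few percent of a full day's energy, which is presumably why they work as a marching ration.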

The main thing: understand that the boys with the rifles are pretty wound up; people shoot at them. It is terribly easy to give them a reason to riddle you with holes. So tread carefully and don't puff yourself up. In plain terms, agree with everything.

Now, very briefly, where to go and why there. So far we have deliberately covered the most hardcore scenarios. We will keep doing so. I hope I don't need to explain why.

So, the worst case: we find ourselves outside the city, but with almost no food or weapons. Ideally each of you will have studied the map in advance (say, right now) and marked several places to fall back to. NO HEROICS! First let the situation settle, then figure out what is happening where. Choose your places by compass direction. A simple example: Saint Petersburg. Falling back west is hardly an option. South makes no sense either. You will flee either north, toward Karelia, or east, toward the Novgorod and Tver regions. From Moscow, the likely directions are north (Arkhangelsk) and east (the Urals).

Remember: you do NOT approach military sites! The thought that "our boys," "Russian soldiers," are there and will take you in and feed you is nonsense. In the BEST case the officers will curse you out and kick you out; they have no time for you, and it is not a refugee reception point. And the fact that the site may be bombed at any moment is objective reality. Don't forget something else: conscripts now serve close to home. Once the mayhem starts, don't even try to imagine what is going on in the heads of soldiers whose relatives and loved ones may still be in the city. Remember: everyone is human. Soldiers worry, fray, and crack up exactly like civilians, except they do it with weapons in hand. So the idea that "the soldiers will help" is not the best one.

The wise idea is to have a "little house in the country" with a cellar stocked with canned food, water, medicine and so on, and to fall back there. That is what the Chechens did: they went to the villages. But here we are discussing the worst possible scenario, and many people do not have that piece of real estate.

It is simplest for me to use Saint Petersburg as an example. Let's look at the map. In every direction you should have AT LEAST two places marked, one near and one far. For the near one I recommend a tourist campsite next to a small settlement. If you have been on an outing by some lake or stream, at a barbecue say, you can safely go there. First, you will know what awaits you. You will have an idea where to find drinking water and food there. Second, you know the place, and that will prop you up psychologically a great deal.

Refugees are a sad sight; they are hard to look at. And a "herd" flight may be organized by no one: you will be fleeing on your own, with no end point where some "Red Cross" will take you in. Most likely that is exactly how it will be, don't doubt it. The first serious "benefactors" appeared in Chechnya only after the first war. For two years civilians were left to fend for themselves.

So, we now have two fallback points near the city. Next we need two distant ones. If you are falling back north, I would suggest the Solovetsky Monastery, on an island in the White Sea. There is a settlement there called Rabocheostrovsk, from which boats run. Naturally, they will no longer be running, but you can always "privatize" a rowboat at the pier. The White Sea is relatively calm. Rowing across is realistic (hard, but possible; whining gets you nothing, so row). To the east I would fall back to the Iversky Monastery in Tver Oblast. It too is on a small island in the middle of a lake. There are food warehouses and production facilities nearby (along the M10 highway).

Why monasteries? They won't be bombed first (which doesn't mean the priorities won't change later). And another thing: forget the notions of Christian virtue immediately. No one is waiting for you there, and no one will be glad to see you. In effect, you are going there to sell yourself into slavery. You will work for them, farm work, or guard duty, or whatever; they will give you a little something to eat. When you arrive, tell them: "I am a strong, healthy man; I will do whatever work you say in exchange for food." The topic of the clergy's moral duty to the flock: forget it immediately, and not a peep on the subject.

Of course, all this is conditional. You may choose another place. What matters is the principle: your property is gone, and you are glad to be in semi-slavery as long as they feed you. By the way, the absence of your property means the absence of everyone else's too. He who cannot defend his property with a weapon has no property.

Now, the question of transport. There will be no more public transport. On the plus side, you can get hold of a car: "privatize" one or find one abandoned. Don't touch abandoned cars with empty tanks; there is nowhere to find fuel, and even if you pushed one to a gas station, you would get nothing there. Once you have a car, hang it with white rags and, ideally, stick a cross of red duct tape on the roof. It is no panacea, such cars get bombed too, but the odds of being let through are slightly better.

Drive slowly! 50-60 km per hour, no more. You may run into a military convoy, and if you are driving fast they will shoot you up "just in case." Civilians trying to stop you get no sympathy: hit the gas. They won't share what is theirs with you, but they will try to "share" what is yours. A convoy, or even a single military vehicle: pull onto the shoulder, crack a window or door open, and show your hands. Getting out is unwise; it invites a shakedown. Sit calmly, no nerves, and pray silently. Boring into them with your eyes is not recommended: look straight ahead or at the floor.

If everything has worked out, you have a roof over your head, work, food, and people to talk to (that matters too). There you can wait a week or two, see what happens, assess the situation in the country, and make decisions.

And now a bit of cynicism. If you have a "baggage train," meaning a family, you are a dead man. If you have a family, you are obliged to clear out of the city and turn up at the dacha (with stocks of food and water) in the first seconds after people in the street start cursing the Great Pu. If you have nowhere to fall back to with your "train," you are coffin filler, and so is your train. Don't be stupid, prepare in advance: your loved ones MUST HAVE SOMEWHERE TO GO. And they must have provisions. After that, do as you please. Go back and fight if you like, or go back and haunt the pubs while your wife is "off digging potatoes." But think of them in advance; later will be too late. Everything above is for loners with nothing to lose. If you have a family, prepare beforehand. History teaches that family is dearer than the Motherland, at least in the first stage.

—-

This advice (along with more: how to fight, and what to equip yourself with if you intend to) can be found in hundreds of places across the Russian Net, since at least 2013, often with small but noticeable differences in the text. The essence, however, is the same everywhere.

Is it realistic? Some of it strikes me as not quite. I don't believe, for instance, that when passing through an encirclement you would be declared a deserter or a disguised enemy over a single pistol (especially a model the military does not use). A deserter would have thrown his weapon away down to the last round; an enemy would carry something far more businesslike than a pistol. (And neither would volunteer its existence.) And if they need cannon fodder, they will hand you a weapon even if you don't have one.

Also, the advice about rowing a boat to the Solovetsky Islands seems overstated to me. From Rabocheostrovsk to Solovetsky it is some 70-80 km in a straight line. In the kind of rowboat that could be found at a pier like the one in Rabocheostrovsk, even a world champion would hardly cover that in 24 hours, and an experienced but non-competitive rower hardly in 48. Things improve a little if you have a whole boatful of strong rowers, but you would still need to be sure of at least 24 hours of fair weather ahead, and weather forecasts for Onega Bay will probably also be unavailable by then.
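The rowing skepticism above can be backed with simple arithmetic. A minimal sketch, under my own assumptions (not the author's numbers): a straight-line distance of ~75 km and a sustained speed of 3-5 km/h, which is a plausible range for a heavy harbor rowboat.

```python
# Assumptions for illustration: ~75 km straight-line distance and a
# sustained speed of 3-5 km/h for a heavy, non-racing rowboat.
DISTANCE_KM = 75.0

def rowing_hours(speed_kmh: float, distance_km: float = DISTANCE_KM) -> float:
    """Hours of continuous rowing needed at a given sustained speed."""
    return distance_km / speed_kmh

for speed in (3.0, 4.0, 5.0):
    print(f"{speed} km/h -> {rowing_hours(speed):.1f} h non-stop")
```

Even at the optimistic end of this assumed range, the crossing is a 15-25 hour pull without rest, which is why a 24-hour weather window matters so much.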

Overall, my impression of Enot from his account is of a man with experience from the war in Chechnya, but also with a certain dose of artistic imagination.

And I think: may no one ever need this advice.

Which leads me to further thoughts, but those I will write up next time.

Fully-Loaded Kodi Box Sellers Receive Hefty Jail Sentences

Post Syndicated from Andy original https://torrentfreak.com/fully-loaded-kodi-box-sellers-receive-hefty-jail-sentences-180524/

While users of older peer-to-peer based file-sharing systems have to work relatively hard to obtain content, users of the Kodi media player have things an awful lot easier.

As standard, Kodi is perfectly legal. However, when augmented with third-party add-ons it becomes a media discovery powerhouse, providing most of the content anyone could desire. A system like this can be set up by the user but for many, buying a so-called “fully-loaded” box from a seller is the easier option.

As a result, hundreds – probably thousands – of cottage industries have sprung up to service this hungry market in the UK, with regular people making a business out of setting up and selling such devices. Until three years ago, that’s what Michael Jarman and Natalie Forber of Colwyn Bay, Wales, found themselves doing.

According to reports in local media, Jarman was arrested in January 2015 when police were called to a disturbance at Jarman and Forber’s home. A large number of devices were spotted and an investigation was launched by Trading Standards officers. The pair were later arrested and charged with fraud offenses.

While 37-year-old Jarman pleaded guilty, 36-year-old Forber initially denied the charges and was due to stand trial. However, she later changed her mind and like Jarman, pleaded guilty to participating in a fraudulent business. Forber also pleaded guilty to transferring criminal property by shifting cash from the scheme through various bank accounts.

The pair attended a sentencing hearing before Judge Niclas Parry at Caernarfon Crown Court yesterday. According to local reporter Eryl Crump, the Court heard that the couple had run their business for about two years, selling around 1,000 fully-loaded Kodi-enabled devices for £100 each via social media.

According to David Birrell for the prosecution, the operation wasn’t particularly sophisticated but it involved Forber programming the devices as well as handling customer service. Forber claimed she was forced into the scheme by Jarman but that claim was rejected by the prosecution.

Between February 2013 and January 2015 the pair banked £105,000 from the business, money that was transferred between bank accounts in an effort to launder the takings.

Reporting from Court via Twitter, Crump said that Jarman’s defense lawyer accepted that a prison sentence was inevitable for his client but asked for the most lenient sentence possible.

Forber’s lawyer pointed out she had no previous convictions. The mother-of-two broke up with Jarman following her arrest and is now back in work and studying at college.

Sentencing the pair, Judge Niclas Parry described the offenses as a “relatively sophisticated fraud” carried out over a significant period. He jailed Jarman for 21 months and Forber for 16 months, suspended for two years. She must also carry out 200 hours of unpaid work.

The pair will also face a Proceeds of Crime investigation which could see them paying large sums to the state, should any assets be recoverable.


[$] An update on bcachefs

Post Syndicated from jake original https://lwn.net/Articles/755276/rss

The bcachefs filesystem has been under development for a number of years now; according to lead developer Kent Overstreet, it is time to start talking about getting the code upstream. He came to the 2018 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM) to discuss that in a combined filesystem and storage session. Bcachefs grew out of bcache, which is a block-layer cache that was merged into Linux 3.10 in mid-2013.

Is the Radio and Television Fee Compatible with EU Law?

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/13/fee_psm/

In March 2018 the Oberverwaltungsgericht Rheinland-Pfalz (Higher Administrative Court of Rhineland-Palatinate, OVG Rheinland-Pfalz) ruled that the German radio and television fee is compatible with EU law (case No. 7 A 11938/17). The court rejected the argument that the fee is incompatible with EU law on the grounds that it gives the public-service radio and television media providers an unfair advantage over their private competitors.

The court noted that in 2016 the Bundesverwaltungsgericht (Federal Administrative Court, BVerwG) had already established that the fee, in the new form introduced in 2013, conforms with EU law (judgment of 18 March 2016, BVerwG 6 C 6.15). According to that judgment, the introduction of the fee required no approval from the European Commission and is compatible with the Audiovisual Media Services Directive. Public and private broadcasters will inevitably be financed in different ways. That does not necessarily mean, however, that the public broadcasters receive an unfair advantage: unlike the private broadcasters, they are subject to much more restrictive advertising rules and are therefore financially dependent on the fee.

Meanwhile, the Landgericht Tübingen (Regional Court of Tübingen, decision of 3 August 2017, case No. 5 T 246/17 and others) has held that the fee violates EU law, and as a result a request for a preliminary ruling has been submitted to the Court of Justice of the EU: case C-492/17.


Questions referred for a preliminary ruling:

1)

Is the national Gesetz vom 18.10.2011 zur Geltung des Rundfunkbeitragsstaatsvertrags (RdFunkBeitrStVtrBW) vom 17. Dezember 2010 (Law of 18 October 2011 implementing the State Treaty on the broadcasting contribution of 17 December 2010, "RdFunkBeitrStVtrBW") of the Land of Baden-Württemberg, last amended by Article 4 of the Neunzehnter Rundfunkänderungsstaatsvertrag (Nineteenth State Treaty amending the State Broadcasting Treaties) of 3 December 2015 (Law of 23 February 2016, GBl. pp. 126, 129), incompatible with EU law because the contribution levied under that law since 1 January 2013, unconditionally and in principle on every adult residing in the German Land of Baden-Württemberg, for the benefit of the broadcasters SWR and ZDF, constitutes aid contrary to EU law that exclusively favors those public broadcasters over private broadcasters? Must Articles 107 and 108 TFEU be interpreted as meaning that the Law on the broadcasting contribution required the Commission's approval, and that in the absence of such approval it is invalid?

2)

Must Article 107 TFEU, or Article 108 TFEU, be interpreted as covering the rules laid down in the national law "RdFunkBeitrStVtrBW," under which a contribution is in principle levied unconditionally on every adult residing in Baden-Württemberg exclusively for the benefit of state/public broadcasters, because that contribution involves aid contrary to EU law that confers favorable treatment whose effect is to exclude, on technical grounds, operators from other EU states, inasmuch as the contributions are used to create a competing means of transmission (a DVB-T2 monopoly) with no provision for its use by foreign operators? Must Article 107 TFEU, or Article 108 TFEU, be interpreted as covering not only direct subsidies but also other economically relevant privileges (the power to issue enforceable instruments, the power to act both as a commercial undertaking and as a public authority, and more favorable treatment in the calculation of debts)?

3)

Is it compatible with the principle of equal treatment and with the prohibition of privileging aid that, under a national law of the Land of Baden-Württemberg, a German television broadcaster which is governed by public law and vested with the powers of a public authority, but which at the same time competes with private broadcasters on the advertising market, is privileged over them in that, unlike its private competitors, it need not obtain an enforceable instrument for its claims against viewers through the ordinary courts before it can pursue enforcement, but may itself, without any court involvement, issue an instrument that equally entitles it to enforcement?

4)

Is it compatible with Article 10 ECHR and Article [11] of the Charter of Fundamental Rights (freedom of information) for a Member State to provide, in a national law of the Land of Baden-Württemberg, that a television broadcaster vested with the powers of a public authority may demand, on pain of a fine, payment of a contribution from every adult residing in the broadcast coverage area for the purpose of funding precisely that broadcaster, regardless of whether the person owns a receiver at all or uses only the services of other, namely foreign or other private, operators?

5)

Is the national law "RdFunkBeitrStVtrBW", in particular Articles 2 and 3 thereof, compatible with the EU-law principles of equal treatment and non-discrimination where the contribution payable unconditionally by every resident for the financing of a public television broadcaster imposes on a person raising a child alone a burden many times greater than the amount owed by a person sharing a home with others? Is Directive 2004/113/EC (1) to be interpreted as meaning that the contribution at issue also falls within its scope and that indirect disadvantage suffices, given that, in actual fact, 90% of women bear a greater burden?

6)

Is the national law "RdFunkBeitrStVtrBW", in particular Articles 2 and 3 thereof, compatible with the EU-law principles of equal treatment and non-discrimination where the contribution payable unconditionally by every resident for the financing of a public television broadcaster is twice as high for persons who need a second home for work-related reasons as it is for other workers?

7)

Is the national law "RdFunkBeitrStVtrBW", in particular Articles 2 and 3 thereof, compatible with the EU-law principles of equal treatment and non-discrimination and with the freedom of establishment, if the contribution payable unconditionally by every resident for the financing of a public television broadcaster is arranged in such a way that, with identical reception possibilities, a German national living immediately before the border with a neighbouring EU Member State owes the contribution solely by virtue of his place of residence, while a German national living immediately beyond the border does not; and that, likewise, a national of another EU Member State who for work-related reasons has to take up residence immediately beyond an internal EU border bears the burden of the contribution, while an EU citizen living immediately before the border does not, even if neither has any interest in receiving the German broadcaster's programmes?

A comment on question 4: a question on compatibility with Article 10 of the Convention on Human Rights has been referred. The Court of Human Rights has already ruled on the matter; there is an inadmissibility decision in a similar case from about a decade ago (see Faccio v Italy, which I have written about here), but let the Court of Justice of the EU have its say as well.

And once more on the nature of the fee: if people without a receiving device also pay, this is clearly not a fee in the sense of a price for a service but a tax claim; in my view, that is the trend.

We await the judgment of the Court of Justice of the EU. May the case-law develop and multiply.

This is a really lovely Raspberry Pi tricorder

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-tricorder-prop/

At the moment I’m spending my evenings watching all of Star Trek in order. Yes, I have watched it before (but with some really big gaps). Yes, including the animated series (I’m up to The Terratin Incident). So I’m gratified to find this beautiful The Original Series–style tricorder build.

Star Trek Tricorder with Working Display!

At this year’s Replica Prop Forum showcase, we meet up once again with Brian Mix, who brought his new Star Trek TOS Tricorder. This beautiful replica captures the weight and finish of the filming hand prop, and Brian has taken it one step further with some modern-day electronics!

A what now?

If you don’t know what a tricorder is, which I guess is faintly possible, the easiest way I can explain is to steal words that Liz wrote when Recantha made one back in 2013. It’s “a made-up thing used by the crew of the Enterprise to measure stuff, store data, and scout ahead remotely when exploring strange new worlds, seeking out new life and new civilisations, and all that jazz.”

A brief history of Picorders

We’ve seen other Raspberry Pi–based realisations of this iconic device. Recantha’s LEGO-cased tricorder delivered some authentic functionality, including temperature sensors, an ultrasonic distance sensor, a photosensor, and a magnetometer. Michael Hahn’s tricorder for element14’s Sci-Fi Your Pi competition in 2015 packed some similar functions, along with Original Series audio effects, into a neat (albeit non-canon) enclosure.

Brian Mix’s Original Series tricorder

Brian Mix’s tricorder, seen in the video above from Tested at this year’s Replica Prop Forum showcase, is based on a high-quality kit into which, he discovered, a Raspberry Pi just fits. He explains that the kit is the work of the late Steve Horch, a special effects professional who provided props for later Star Trek series, including the classic Deep Space Nine episode Trials and Tribble-ations.

A still from an episode of Star Trek: Deep Space Nine: Jadzia Dax, holding an Original Series-style tricorder, speaks with Benjamin Sisko

Dax, equipped for time travel

This episode’s plot required sets and props — including tricorders — replicating the USS Enterprise of The Original Series, and Steve Horch provided many of these. Thus, a tricorder kit from him is about as close to authentic as you can possibly find unless you can get your hands on a screen-used prop. The Pi allows Brian to drive a real display and a speaker: “Being the geek that I am,” he explains, “I set it up to run every single Original Series Star Trek episode.”

Even more wonderful hypothetical tricorders that I would like someone to make

This tricorder is beautiful, and it makes me think how amazing it would be to squeeze in some of the sensor functionality of the devices depicted in the show. Space in the case is tight, but it looks like there might be a little bit of depth to spare — enough for an IMU, maybe, or a temperature sensor. I’m certain the future will bring more Pi tricorder builds, and I, for one, can’t wait. Please tell us in the comments if you’re planning something along these lines, and, well, I suppose some other sci-fi franchises have decent Pi project potential too, so we could probably stand to hear about those.

If you’re commenting, no spoilers please past The Animated Series S1 E11. Thanks.

The post This is a really lovely Raspberry Pi tricorder appeared first on Raspberry Pi.

EU Resolution on Media Pluralism and Media Freedom in the EU, 2018

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/03/freedom-3/

Following the European Parliament resolution on press and media freedom in the world (2013), the EP today adopted a resolution on media pluralism and media freedom in the European Union (488 votes in favour, 43 against, 114 abstentions).

Whereas […]

in any democratic society the media sector plays a key role; whereas the impact of the economic crisis, combined with the simultaneous growth of social media platforms and other high-tech giants and highly selective advertising revenues, has dramatically increased the precariousness of working conditions and the social insecurity of media workers, including freelance journalists, leading to a dramatic fall in professional and social standards and in the quality of journalism, which may adversely affect their editorial independence;

whereas the European Audiovisual Observatory of the Council of Europe has denounced the emergence of a digital duopoly, with Google and Facebook accounting for up to 85% of all growth in the digital advertising market in 2016, which jeopardises the future of traditional advertising-funded media companies, such as commercial television channels, newspapers and magazines, whose audience reach is far more limited;

whereas, in the context of enlargement policy, the Commission is obliged to demand full compliance with the Copenhagen criteria, including freedom of expression and freedom of the media, and the EU should therefore set the example of the highest standards in this area; whereas, once they have become EU members, states are bound to comply permanently and unconditionally with their human rights obligations under the EU Treaties and the EU Charter of Fundamental Rights, and whereas respect for freedom of expression and freedom of the media in the Member States should be subject to regular scrutiny;

whereas the EU can be credible on the world stage only if freedom of the press and of the media is protected and respected within the Union itself […]

The resolution addresses more than 60 recommendations to the Member States and the Commission, each important in its own right.

The recommendations set out in the resolution include:

47.  proposes, with a view to the effective protection of media freedom and pluralism, that companies whose ultimate owner also owns a media company be prohibited from participating in public procurement, or that such participation at least be made fully transparent;

proposes that the Member States be obliged to report regularly on public funding provided to media companies, and that all public funding granted to media owners be monitored regularly;

stresses that persons who have been convicted of or found guilty of any criminal offence should not own media outlets;

48.  stresses that any public funding for media organisations should be granted on the basis of non-discriminatory, objective and transparent criteria made known in advance to all media;

49.  recalls that the Member States should find ways of supporting the media, for example by ensuring VAT neutrality, as recommended in its resolution of 13 October 2011 on the future of VAT, and by supporting media-related initiatives;

50.  calls on the Commission to allocate permanent and adequate funding within the EU budget to support the Media Pluralism Monitor of the Centre for Media Pluralism and Media Freedom, and to create an annual mechanism for assessing the risks to media pluralism in the Member States; […]

51.  calls on the Commission to monitor and collect information and statistics on media freedom and pluralism in all Member States and to analyse closely cases of infringement of the fundamental rights of journalists, while respecting the principle of subsidiarity;

52.  stresses the need to step up the sharing of best practices between the Member States' audiovisual regulatory authorities;

53.  calls on the Commission to take into account the recommendations contained in the European Parliament resolution of 25 October 2016 on the establishment of an EU mechanism on democracy, the rule of law and fundamental rights; calls on the Commission to include the results and recommendations of the Media Pluralism Monitor concerning the risks to media pluralism and media freedom in the EU when drawing up its annual report on democracy, the rule of law and fundamental rights (the European DRF report);

54.  encourages the Member States to step up their efforts to strengthen media literacy and to promote training and educational initiatives among all citizens through formal, non-formal and informal education from a lifelong-learning perspective, including by paying particular attention to the initial and ongoing training and support of teachers and by fostering dialogue and cooperation between the education and training sector and all relevant stakeholders, including media professionals, civil society and youth organisations; reaffirms the need to support age-appropriate innovative tools for promoting empowerment and online safety as mandatory elements of school curricula, and to bridge the digital divide both through dedicated technological-literacy projects and through adequate investment in infrastructure, so as to guarantee universal access to information;

55.  stresses that developing the ability to assess and analyse critically the use and creation of media content is essential if people are to understand current issues, contribute to public life, and grasp both the transformative potential and the threats inherent in an ever more complex and interconnected media environment; stresses that media literacy is a fundamental democratic skill that empowers citizens; calls on the Commission and the Member States to develop concrete measures to promote and support media-literacy projects, such as the pilot project "Media Literacy for All", and to develop a comprehensive media-literacy policy targeting citizens of all age groups and all types of media, as an integral part of EU education policy, appropriately supported by the relevant EU funding instruments, such as the European Structural and Investment Funds and Horizon 2020 [..]

The Helium Factor and Hard Drive Failure Rates

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/

Seagate Enterprise Capacity 3.5 Helium HDD

In November 2013, the first commercially available helium-filled hard drive was introduced by HGST, a Western Digital subsidiary. The 6 TB drive was not only unique in being helium-filled; it was, for the moment, the highest capacity hard drive available. Fast forward a little over four years and 12 TB helium-filled drives are readily available, 14 TB drives can be found, and 16 TB helium-filled drives are arriving soon.

Backblaze has been purchasing and deploying helium-filled hard drives over the past year and we thought it was time to start looking at their failure rates compared to traditional air-filled drives. This post will provide an overview, then we’ll continue the comparison on a regular basis over the coming months.

The Promise and Challenge of Helium Filled Drives

We all know that helium is lighter than air — that’s why helium-filled balloons float. Inside of an air-filled hard drive there are rapidly spinning disk platters that rotate at a given speed, 7200 rpm for example. The air inside adds an appreciable amount of drag on the platters that in turn requires an appreciable amount of additional energy to spin the platters. Replacing the air inside of a hard drive with helium reduces the amount of drag, thereby reducing the amount of energy needed to spin the platters, typically by 20%.

We also know that after a few days, a helium-filled balloon sinks to the ground. This was one of the key challenges in using helium inside of a hard drive: helium escapes from most containers, even if they are well sealed. It took years for hard drive manufacturers to create containers that could contain helium while still functioning as a hard drive. This container innovation allows helium-filled drives to function at spec over the course of their lifetime.

Checking for Leaks

Three years ago, we identified SMART 22 as the attribute assigned to recording the status of helium inside of a hard drive. We have both HGST and Seagate helium-filled hard drives, but only the HGST drives currently report the SMART 22 attribute. It appears the normalized and raw values for SMART 22 currently report the same value, which starts at 100 and goes down.

To date only one HGST drive has reported a value of less than 100, with multiple readings between 94 and 99. That drive continues to perform fine, with no other errors or any correlating changes in temperature, so we are not sure whether the change in value is trying to tell us something or if it is just a wonky sensor.
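For anyone who wants to look for the same thing in the public drive stats data, here is a minimal sketch. It assumes the `smart_22_raw`, `serial_number`, `model`, and `date` column names used in the published CSV files; verify those against the actual download before relying on it.

```python
import csv

def helium_alerts(path):
    """Scan one day's drive stats CSV and yield drives whose SMART 22
    (helium level) raw value has dropped below the nominal 100."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            raw = row.get("smart_22_raw") or ""  # empty for non-helium drives
            if raw and float(raw) < 100:
                yield row["date"], row["serial_number"], row["model"], float(raw)
```

Run this over each daily file in turn and you get a per-drive history of the helium reading, which is how a slow leak (as opposed to a wonky sensor) would show up.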

Helium versus Air-Filled Hard Drives

There are several different ways to compare these two types of drives. Below we decided to use just our 8, 10, and 12 TB drives in the comparison. We did this since we have helium-filled drives in those sizes. We left out of the comparison all of the drives that are 6 TB and smaller as none of the drive models we use are helium-filled. We are open to trying different comparisons. This just seemed to be the best place to start.

Lifetime Hard Drive Failure Rates: Helium vs. Air-Filled Hard Drives table

The most obvious observation is that there seems to be little difference in the Annualized Failure Rate (AFR) based on whether they contain helium or air. One conclusion, given this evidence, is that helium doesn’t affect the AFR of hard drives versus air-filled drives. My prediction is that the helium drives will eventually prove to have a lower AFR. Why? Drive Days.

Let’s go back in time to Q1 2017 when the air-filled drives listed in the table above had a similar number of Drive Days to the current number of Drive Days for the helium drives. We find that the failure rate for the air-filled drives at the time (Q1 2017) was 1.61%. In other words, when the drives were in use a similar number of hours, the helium drives had a failure rate of 1.06% while the failure rate of the air-filled drives was 1.61%.
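An AFR figure like the 1.06% and 1.61% above is failures per drive-year, expressed as a percentage, so it can be recomputed from the failure and Drive Days columns of the published tables. A sketch of the arithmetic (the counts in the comment are round illustrative numbers, not values from the table):

```python
def annualized_failure_rate(failures, drive_days):
    """AFR (%) = failures / drive-years * 100, where one drive-year is 365 drive days."""
    return 100.0 * failures / (drive_days / 365.0)

# For example, 100 failures across 3,650,000 drive days (10,000 drive-years)
# works out to an AFR of 1.0%.
```

The division by drive-years is why a model with few Drive Days can swing wildly from quarter to quarter: a single failure in a small denominator moves the percentage a lot.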

Helium or Air?

My hypothesis is that after normalizing the data so that the helium and air-filled drives have the same (or similar) usage (Drive Days), the helium-filled drives we use will continue to have a lower Annualized Failure Rate than the air-filled drives we use. I expect this trend to continue for at least the next year. What side do you come down on? Will the Annualized Failure Rate for helium-filled drives be better than that of air-filled drives, or vice versa? Or do you think the two technologies will eventually produce the same AFR over time? Pick a side and we’ll document the results over the next year and see where the data takes us.

The post The Helium Factor and Hard Drive Failure Rates appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Stats for Q1 2018

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-for-q1-2018/

Backblaze Drive Stats Q1 2018

As of March 31, 2018 we had 100,110 spinning hard drives. Of that number, there were 1,922 boot drives and 98,188 data drives. This review looks at the quarterly and lifetime statistics for the data drive models in operation in our data centers. We’ll also take a look at why we are collecting and reporting 10 new SMART attributes and take a sneak peek at some 8 TB Toshiba drives. Along the way, we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

Background

Since April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. Currently there are about 97 million entries totaling 26 GB of data. You can download this data from our website if you want to do your own research, but for starters here’s what we found.

Hard Drive Reliability Statistics for Q1 2018

At the end of Q1 2018 Backblaze was monitoring 98,188 hard drives used to store data. For our evaluation below we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives. This leaves us with 98,046 hard drives. The table below covers just Q1 2018.

Q1 2018 Hard Drive Failure Rates

Notes and Observations

If a drive model has a failure rate of 0%, it only means there were no drive failures of that model during Q1 2018.

The overall Annualized Failure Rate (AFR) for Q1 is just 1.2%, well below the Q4 2017 AFR of 1.65%. Remember that quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of Drive Days.

There were 142 drives (98,188 minus 98,046) that were not included in the list above because we did not have at least 45 of a given drive model. We use 45 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics.
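The two exclusions described above (test drives, and models with fewer than 45 units) amount to a simple count-and-filter pass over one day's snapshot of the data. A sketch, assuming the `model` column name from the published CSVs; the `exclude` set standing in for test drives is my own addition:

```python
import csv
from collections import Counter

MIN_DRIVES = 45  # reporting threshold for quarterly, yearly, and lifetime stats

def reportable_models(path, exclude=frozenset()):
    """Count drives per model in a snapshot CSV and keep only models
    with at least MIN_DRIVES units, skipping excluded (test) models."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["model"] not in exclude:
                counts[row["model"]] += 1
    return {m: n for m, n in counts.items() if n >= MIN_DRIVES}
```

Applied to Q1's snapshot, this is the step that drops the 142 drives, including the 20 Toshiba units discussed below.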

Welcome Toshiba 8TB drives, almost…

We mentioned Toshiba 8 TB drives in the first paragraph, but they don’t show up in the Q1 Stats chart. What gives? We only had 20 of the Toshiba 8 TB drives in operation in Q1, so they were excluded from the chart. Why do we have only 20 drives? When we test out a new drive model we start with the “tome test” and it takes 20 drives to fill one tome. A tome is the same drive model in the same logical position in each of the 20 Storage Pods that make up a Backblaze Vault. There are 60 tomes in each vault.

In this test, we created a Backblaze Vault of 8 TB drives, with 59 of the tomes being Seagate 8 TB drives and 1 tome being the Toshiba drives. Then we monitored the performance of the vault and its member tomes to see if, in this case, the Toshiba drives performed as expected.

Q1 2018 Hard Drive Failure Rate — Toshiba 8TB

So far the Toshiba drive is performing fine, but they have been in place for only 20 days. Next up is the “pod test” where we fill a Storage Pod with Toshiba drives and integrate it into a Backblaze Vault comprised of like-sized drives. We hope to have a better look at the Toshiba 8 TB drives in our Q2 report — stay tuned.

Lifetime Hard Drive Reliability Statistics

While the quarterly chart presented earlier gets a lot of interest, the real test of any drive model is over time. Below is the lifetime failure rate chart for all the hard drive models which have 45 or more drives in operation as of March 31st, 2018. For each model, we compute its reliability starting from when it was first installed.

Lifetime Hard Drive Failure Rates

Notes and Observations

The failure rates of all of the larger drives (8, 10, and 12 TB) are very good, 1.2% AFR (Annualized Failure Rate) or less. Many of these drives were deployed in the last year, so there is some volatility in the data, but you can use the Confidence Interval to get a sense of the failure percentage range.

The overall failure rate of 1.84% is the lowest we have ever achieved, besting the previous low of 2.00% from the end of 2017.

Our regular readers and drive stats wonks may have noticed a sizable jump in the number of HGST 8 TB drives (model: HUH728080ALE600), from 45 last quarter to 1,045 this quarter. As the 10 TB and 12 TB drives become more available, the price per terabyte of the 8 TB drives has gone down. This presented an opportunity to purchase the HGST drives at a price in line with our budget.

We purchased and placed into service the 45 original HGST 8 TB drives in Q2 of 2015. They were our first Helium-filled drives and our only ones until the 10 TB and 12 TB Seagate drives arrived in Q3 2017. We’ll take a first look into whether or not Helium makes a difference in drive failure rates in an upcoming blog post.

New SMART Attributes

If you have previously worked with the hard drive stats data or plan to, you’ll notice that we added 10 more columns of data starting in 2018. There are 5 new SMART attributes we are tracking each with a raw and normalized value:

  • 177 – Wear Range Delta
  • 179 – Used Reserved Block Count Total
  • 181 – Program Fail Count Total or Non-4K Aligned Access Count
  • 182 – Erase Fail Count
  • 235 – Good Block Count AND System (Free) Block Count

The 5 values are all related to SSD drives.

Yes, SSD drives, but before you jump to any conclusions, we used 10 Samsung 850 EVO SSDs as boot drives for a period of time in Q1. This was an experiment to see if we could reduce boot up time for the Storage Pods. In our case, the improved boot up speed wasn’t worth the SSD cost, but it did add 10 new columns to the hard drive stats data.

Speaking of hard drive stats data, the complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose, all we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone. It is free.

If you just want the summarized data used to create the tables and charts in this blog post, you can download the ZIP file containing the MS Excel spreadsheet.

Good luck and let us know if you find anything interesting.

[Ed: 5/1/2018 – Updated Lifetime chart to fix error in confidence interval for HGST 4TB drive, model: HDS5C4040ALE630]

The post Hard Drive Stats for Q1 2018 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Former Judge Accuses IP Court of Using ‘Pirate’ Microsoft Software

Post Syndicated from Andy original https://torrentfreak.com/former-judge-accuses-ip-court-of-using-pirate-microsoft-software-180429/

While piracy of movies, TV shows, and music grabs most of the headlines, software piracy is a huge issue, from both consumer and commercial perspectives.

For many years, software such as Photoshop has been pirated on a grand scale and around the world, millions of computers rely on cracked and unlicensed copies of Microsoft’s Windows software.

One of the key drivers of this kind of piracy is the relative expense of software. Open source variants are nearly always available but big brand names always seem more popular due to their market penetration and perceived ease of use.

While using pirated software very rarely gets individuals into trouble, the same cannot be said of unlicensed commercial operators. That appears to be the case in Russia where, somewhat ironically, the Court for Intellectual Property Rights stands accused of copyright infringement.

A complaint filed by the Paragon law firm at the Prosecutor General’s Office of the Court for Intellectual Property Rights (CIP) alleges that the Court is illegally using Microsoft software, something which has the potential to affect the outcome of court cases involving the US-based software giant.

Paragon is representing Alexander Shmuratov, who is a former Assistant Judge at the Court for Intellectual Property Rights. Shmuratov worked at the Court for several years and claims that the computers there were being operated with expired licenses.

Shmuratov himself told Kommersant that he “saw the notice of an activation failure every day when using MS Office products” in intellectual property court.

A representative of the Prosecutor General’s Office confirmed that a complaint had been received but said it had been forwarded to the Ministry of Internal Affairs.

In respect of the counterfeit software claims, CIP categorically denies the allegations. CIP says that licenses for all Russian courts were purchased back in 2008 and remained in force until 2011. In 2013, Microsoft agreed to an extension.

Only adding more intrigue to the story, CIP Assistant Chairman Catherine Ulyanova said that the initiator of the complaint, former judge Alexander Shmuratov, was dismissed from the CIP because he provided false information about income. He later mounted a challenge against his dismissal but was unsuccessful.

Ulyanova said that Microsoft licensed all courts from 2006 for use of Windows and MS Office. The licenses were acquired through a third-party company and more licenses than necessary were purchased, with some licenses being redistributed for use by CIP in later years with the consent of Microsoft.

Kommersant was unable to confirm how licenses were paid for beyond December 2011 but apparently an “official confirmation letter from the Irish headquarters of Microsoft, which does not object to the transfer of CIP licenses” had been sent to the Court.

Responding to Shmuratov’s allegations that software he used hadn’t been activated, Ulyanova said that technical problems had no relationship with the existence of software licenses.

The question of whether the Court is properly licensed will be determined at a later date but observers are already raising questions concerning CIP’s historical dealings with Microsoft not only in terms of licensing, but in cases it handled.

In the period 2014-2017, the Court for Intellectual Property Rights handled around 80 cases involving Microsoft, with claims ranging from 50 thousand rubles ($800) to several million.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Two NSA Algorithms Rejected by the ISO

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/two_nsa_algorit.html

The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for small and low-cost processors like IoT devices.

The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors. Personally, I doubt that they’re backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It’s like examining alien technology.
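To give a sense of why these ciphers appeal for constrained devices, here is a compact sketch of one member of the family, Speck64/128 (32-bit words, a four-word key, 27 rounds, rotation amounts 8 and 3), written from the public specification. It is an illustration only, not a vetted implementation; the key-word ordering follows my reading of the paper, so check it against the published test vectors before trusting it.

```python
MASK = 0xFFFFFFFF  # all arithmetic is mod 2**32
ALPHA, BETA, ROUNDS = 8, 3, 27

def ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def expand_key(k3, k2, k1, k0):
    """Derive the 27 round keys from the four 32-bit key words."""
    l, keys = [k1, k2, k3], [k0]
    for i in range(ROUNDS - 1):
        l.append((((keys[i] + ror(l[i], ALPHA)) & MASK) ^ i))
        keys.append(rol(keys[i], BETA) ^ l[-1])
    return keys

def encrypt(x, y, keys):
    for rk in keys:  # one add-rotate-xor round per round key
        x = ((ror(x, ALPHA) + y) & MASK) ^ rk
        y = rol(y, BETA) ^ x
    return x, y

def decrypt(x, y, keys):
    for rk in reversed(keys):  # exact algebraic inverse of the round
        y = ror(y ^ x, BETA)
        x = rol(((x ^ rk) - y) & MASK, ALPHA)
    return x, y
```

Note that the whole cipher is additions, rotations, and XORs (ARX): no S-boxes and no tables, which is precisely what makes it cheap on small, low-cost processors.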

Russia’s Encryption War: 1.8m Google & Amazon IPs Blocked to Silence Telegram

Post Syndicated from Andy original https://torrentfreak.com/russias-encryption-war-1-8m-google-amazon-ips-blocked-to-silence-telegram-180417/

The rules in Russia are clear. Entities operating an encrypted messaging service need to register with the authorities. They also need to hand over their encryption keys so that if law enforcement sees fit, users can be spied on.

Free cross-platform messaging app Telegram isn’t playing ball. An impressive 200,000,000 people used the software in March (including a growing number for piracy purposes) and founder Pavel Durov says he will not compromise their security, despite losing a lawsuit against the Federal Security Service which compels him to do so.

“Telegram doesn’t have shareholders or advertisers to report to. We don’t do deals with marketers, data miners or government agencies. Since the day we launched in August 2013 we haven’t disclosed a single byte of our users’ private data to third parties,” Durov said.

“Above all, we at Telegram believe in people. We believe that humans are inherently intelligent and benevolent beings that deserve to be trusted; trusted with freedom to share their thoughts, freedom to communicate privately, freedom to create tools. This philosophy defines everything we do.”

But by not handing over its keys, Telegram is in trouble with Russia. The FSB says it needs access to Telegram messages to combat terrorism so, in response to its non-compliance, telecoms watchdog Rozcomnadzor filed a lawsuit to degrade Telegram via web-blocking. Last Friday, that process ended in the state’s favor.

After an 18-minute hearing, a Moscow court gave the go-ahead for Telegram to be banned in Russia. The hearing was scheduled just the day before, giving Telegram little time to prepare. In protest, its lawyers didn’t even turn up to argue the company’s position.

Instead, Durov took to his VKontakte account to announce that Telegram would take counter-measures.

“Telegram will use built-in methods to bypass blocks, which do not require actions from users, but 100% availability of the service without a VPN is not guaranteed,” Durov wrote.

Telegram can appeal the blocking decision but Russian authorities aren’t waiting around for a response. They are clearly prepared to match Durov’s efforts, no matter what the cost.

In instructions sent out nationwide yesterday, Rozcomnadzor ordered ISPs to block Telegram. The response was immediate and massive. Telegram was using both Amazon and Google to provide service to its users so, within hours, huge numbers of IP addresses belonging to both companies were targeted.

Initially, 655,352 Amazon IP addresses were placed on Russia’s nationwide blacklist. It was later reported that a further 131,000 IP addresses were added to that total. But the Russians were just getting started.

Servers.ru reports that a further 1,048,574 IP addresses belonging to Google were also targeted Monday. Rozcomnadzor said the court ruling against Telegram compelled it to take whatever action is needed to take the service down, but with at least 1,834,996 addresses now confirmed blocked, it remains unclear what effect the campaign has had on Telegram itself.

Friday’s court ruling states that restrictions against Telegram can be lifted provided that the service hands over its encryption keys to the FSB. However, Durov responded by insisting that “confidentiality is not for sale, and human rights should not be compromised because of fear or greed.”

But of course, money is still part of the Telegram equation. While its business model in terms of privacy stands in stark contrast to that of Facebook, Telegram is also involved in the world’s biggest initial coin offering (ICO). According to media reports, it has raised $1.7 billion in pre-sales thus far.

This week’s action against Telegram is the latest in Russia’s war on ‘unauthorized’ encryption.

At the end of March, authorities suggested that around 15 million IP addresses (13.5 million belonging to Amazon) could be blocked to target chat software Zello. While those measures were averted, a further 500 domains belonging to Google were caught in the dragnet.

Source: TF

Engineering deep dive: Encoding of SCTs in certificates

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2018/04/04/sct-encoding.html

<p>Let&rsquo;s Encrypt recently <a href="https://community.letsencrypt.org/t/signed-certificate-timestamps-embedded-in-certificates/57187">launched SCT embedding in
certificates</a>.
This feature allows browsers to check that a certificate was submitted to a
<a href="https://en.wikipedia.org/wiki/Certificate_Transparency">Certificate Transparency</a>
log. As part of the launch, we did a thorough review
that the encoding of Signed Certificate Timestamps (SCTs) in our certificates
matches the relevant specifications. In this post, I&rsquo;ll dive into the details.
You&rsquo;ll learn more about X.509, ASN.1, DER, and TLS encoding, with references to
the relevant RFCs.</p>

<p>Certificate Transparency offers three ways to deliver SCTs to a browser: In a
TLS extension, in stapled OCSP, or embedded in a certificate. We chose to
implement the embedding method because it would just work for Let&rsquo;s Encrypt
subscribers without additional work. In the SCT embedding method, we submit
a &ldquo;precertificate&rdquo; with a <a href="#poison">poison extension</a> to a set of
CT logs, and get back SCTs. We then issue a real certificate based on the
precertificate, with two changes: The poison extension is removed, and the SCTs
obtained earlier are added in another extension.</p>

<p>Given a certificate, let&rsquo;s first look for the SCT list extension. According to CT (<a href="https://tools.ietf.org/html/rfc6962#section-3.3">RFC 6962
section 3.3</a>),
the extension OID for a list of SCTs is <code>1.3.6.1.4.1.11129.2.4.2</code>. An <a href="http://www.hl7.org/Oid/information.cfm">OID (object
ID)</a> is a series of integers, hierarchically
assigned and globally unique. They are used extensively in X.509, for instance
to uniquely identify extensions.</p>
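<p>As an aside (my own illustration, not part of the original post), the way an OID becomes bytes in DER is itself a small encoding exercise: the first two arcs are packed into a single byte as 40&times;arc1 + arc2, and every remaining arc is base-128 encoded, with the high bit marking continuation bytes. A short Python sketch:</p>

```python
def encode_oid(arcs):
    """DER-encode an OID: the first two arcs share one byte (40*a + b);
    each remaining arc is base-128, high bit set on all but its last byte."""
    out = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]               # last byte of the arc: high bit clear
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)  # continuation bytes: high bit set
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# The SCT list extension OID from RFC 6962:
print(encode_oid([1, 3, 6, 1, 4, 1, 11129, 2, 4, 2]).hex())  # 2b06010401d679020402
```

<p>Only the arc 11129 needs more than one byte here, which is why the OID&rsquo;s ten integers fit in ten bytes.</p>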

<p>We can <a href="https://acme-v01.api.letsencrypt.org/acme/cert/031f2484307c9bc511b3123cb236a480d451">download an example certificate</a>,
and view it using OpenSSL (if your OpenSSL is old, it may not display the
detailed information):</p>

<pre><code>$ openssl x509 -noout -text -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451

CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1(0)
Log ID : DB:74:AF:EE:CB:29:EC:B1:FE:CA:3E:71:6D:2C:E5:B9:
AA:BB:36:F7:84:71:83:C7:5D:9D:4F:37:B6:1F:BF:64
Timestamp : Mar 29 18:45:07.993 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:44:02:20:7E:1F:CD:1E:9A:2B:D2:A5:0A:0C:81:E7:
13:03:3A:07:62:34:0D:A8:F9:1E:F2:7A:48:B3:81:76:
40:15:9C:D3:02:20:65:9F:E9:F1:D8:80:E2:E8:F6:B3:
25:BE:9F:18:95:6D:17:C6:CA:8A:6F:2B:12:CB:0F:55:
FB:70:F7:59:A4:19
Signed Certificate Timestamp:
Version : v1(0)
Log ID : 29:3C:51:96:54:C8:39:65:BA:AA:50:FC:58:07:D4:B7:
6F:BF:58:7A:29:72:DC:A4:C3:0C:F4:E5:45:47:F4:78
Timestamp : Mar 29 18:45:08.010 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AB:72:F1:E4:D6:22:3E:F8:7F:C6:84:
91:C2:08:D2:9D:4D:57:EB:F4:75:88:BB:75:44:D3:2F:
95:37:E2:CE:C1:02:21:00:8A:FF:C4:0C:C6:C4:E3:B2:
45:78:DA:DE:4F:81:5E:CB:CE:2D:57:A5:79:34:21:19:
A1:E6:5B:C7:E5:E6:9C:E2
</code></pre>

<p>Now let&rsquo;s go a little deeper. How is that extension represented in
the certificate? Certificates are expressed in
<a href="https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One">ASN.1</a>,
which generally refers to both a language for expressing data structures
and a set of formats for encoding them. The most common format,
<a href="https://en.wikipedia.org/wiki/X.690#DER_encoding">DER</a>,
is a tag-length-value format. That is, to encode an object, first you write
down a tag representing its type (usually one byte), then you write
down a number expressing how long the object is, then you write down
the object contents. This is recursive: An object can contain multiple
objects within it, each of which has its own tag, length, and value.</p>

<p>One of the cool things about DER and other tag-length-value formats is that you
can decode them to some degree without knowing what they mean. For instance, I
can tell you that 0x30 means the data type &ldquo;SEQUENCE&rdquo; (a struct, in ASN.1
terms), and 0x02 means &ldquo;INTEGER&rdquo;, then give you this hex byte sequence to
decode:</p>

<pre><code>30 06 02 01 03 02 01 0A
</code></pre>

<p>You could tell me right away that decodes to:</p>

<pre><code>SEQUENCE
INTEGER 3
INTEGER 10
</code></pre>

<p>Try it yourself with this great <a href="https://lapo.it/asn1js/#300602010302010A">JavaScript ASN.1
decoder</a>. However, you wouldn&rsquo;t know
what those integers represent without the corresponding ASN.1 schema (or
&ldquo;module&rdquo;). For instance, if you knew that this was a piece of DogData, and the
schema was:</p>

<pre><code>DogData ::= SEQUENCE {
legs INTEGER,
cutenessLevel INTEGER
}
</code></pre>

<p>You&rsquo;d know this referred to a three-legged dog with a cuteness level of 10.</p>
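<p>The tag-length-value walk is mechanical enough to fit in a few lines. Here&rsquo;s a toy decoder of my own (handling only short-form lengths and the two tags mentioned above) that reproduces the decoding by hand:</p>

```python
def decode_der(data):
    """Toy DER walker: supports only short-form lengths and the
    SEQUENCE (0x30) and INTEGER (0x02) tags used in the example."""
    items, i = [], 0
    while i < len(data):
        tag, length = data[i], data[i + 1]
        value = data[i + 2 : i + 2 + length]
        if tag == 0x30:                    # SEQUENCE: its value is more DER, recurse
            items.append(("SEQUENCE", decode_der(value)))
        elif tag == 0x02:                  # INTEGER: big-endian two's complement
            items.append(("INTEGER", int.from_bytes(value, "big", signed=True)))
        i += 2 + length
    return items

print(decode_der(bytes.fromhex("300602010302010A")))
# [('SEQUENCE', [('INTEGER', 3), ('INTEGER', 10)])]
```

<p>A real parser needs long-form lengths, many more tags, and strict error handling, but the recursive shape stays the same.</p>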

<p>We can take some of this knowledge and apply it to our certificates. As a first
step, convert the above certificate to hex with
<code>xxd -ps &lt; Downloads/031f2484307c9bc511b3123cb236a480d451</code>. You can then copy
and paste the result into
<a href="https://lapo.it/asn1js">lapo.it/asn1js</a> (or use <a href="https://lapo.it/asn1js/#3082062F30820517A0030201020212031F2484307C9BC511B3123CB236A480D451300D06092A864886F70D01010B0500304A310B300906035504061302555331163014060355040A130D4C6574277320456E6372797074312330210603550403131A4C6574277320456E637279707420417574686F72697479205833301E170D3138303332393137343530375A170D3138303632373137343530375A302D312B3029060355040313223563396137662E6C652D746573742E686F66666D616E2D616E64726577732E636F6D30820122300D06092A864886F70D01010105000382010F003082010A0282010100BCEAE8F504D9D91FCFC69DB943254A7FED7C6A3C04E2D5C7DDD010CBBC555887274489CA4F432DCE6D7AB83D0D7BDB49C466FBCA93102DC63E0EB1FB2A0C50654FD90B81A6CB357F58E26E50F752BF7BFE9B56190126A47409814F59583BDD337DFB89283BE22E81E6DCE13B4E21FA6009FC8A7F903A17AB05C8BED85A715356837E849E571960A8999701EAE9CE0544EAAB936B790C3C35C375DB18E9AA627D5FA3579A0FB5F8079E4A5C9BE31C2B91A7F3A63AFDFEDB9BD4EA6668902417D286BE4BBE5E43CD9FE1B8954C06F21F5C5594FD3AB7D7A9CBD6ABF19774D652FD35C5718C25A3BA1967846CED70CDBA95831CF1E09FF7B8014E63030CE7A776750203010001A382032A30820326300E0603551D0F0101FF0404030205A0301D0603551D250416301406082B0601050507030106082B06010505070302300C0603551D130101FF04023000301D0603551D0E041604148B3A21ABADF50C4B30DCCD822724D2C4B9BA29E3301F0603551D23041830168014A84A6A63047DDDBAE6D139B7A64565EFF3A8ECA1306F06082B0601050507010104633061302E06082B060105050730018622687474703A2F2F6F6373702E696E742D78332E6C657473656E63727970742E6F7267302F06082B060105050730028623687474703A2F2F636572742E696E742D78332E6C657473656E63727970742E6F72672F302D0603551D110426302482223563396137662E6C652D746573742E686F66666D616E2D616E64726577732E636F6D3081FE0603551D200481F63081F33008060667810C0102013081E6060B2B0601040182DF130101013081D6302606082B06010505070201161A687474703A2F2F6370732E6C657473656E63727970742E6F72673081AB06082B0601050507020230819E0C819B54686973204365727469666963617465206D6179206F6E6C792062652072656C6965642075706F6E2062792052656C79696E67205061727469657320616
E64206F6E6C7920696E206163636F7264616E636520776974682074686520436572746966696361746520506F6C69637920666F756E642061742068747470733A2F2F6C657473656E63727970742E6F72672F7265706F7369746F72792F30820104060A2B06010401D6790204020481F50481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2300D06092A864886F70D01010B0500038201010095F87B663176776502F792DDD232C216943C7803876FCBEB46393A36354958134482E0AFEED39011618327C2F0203351758FEB420B73CE6C797B98F88076F409F3903F343D1F5D9540F41EF47EB39BD61B62873A44F00B7C8B593C6A416458CF4B5318F35235BC88EABBAA34F3E3F81BD3B047E982EE1363885E84F76F2F079F2B6EEB4ECB58EFE74C8DE7D54DE5C89C4FB5BB0694B837BD6F02BAFD5A6C007D1B93D25007BDA9B2BDBF82201FE1B76B628CE34E2D974E8E623EC57A5CB53B435DD4B9993ADF6BA3972F2B29D259594A94E17BBE06F34AAE5CF0F50297548C4DFFC5566136F78A3D3B324EAE931A14EB6BE6DA1D538E48CF077583C67B52E7E8">this handy link</a>). You can also run <code>openssl asn1parse -i -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451</code> to use OpenSSL&rsquo;s parser, which is less easy to use in some ways, but easier to copy and paste.</p>

<p>In the decoded data, we can find the OID <code>1.3.6.1.4.1.11129.2.4.2</code>, indicating
the SCT list extension. Per <a href="https://tools.ietf.org/html/rfc5280#page-17">RFC 5280, section
4.1</a>, an extension is defined:</p>

<pre><code>Extension ::= SEQUENCE {
extnID OBJECT IDENTIFIER,
critical BOOLEAN DEFAULT FALSE,
extnValue OCTET STRING
-- contains the DER encoding of an ASN.1 value
-- corresponding to the extension type identified
-- by extnID
}
</code></pre>

<p>We&rsquo;ve found the <code>extnID</code>. The &ldquo;critical&rdquo; field is omitted because it has the
default value (false). Next up is the <code>extnValue</code>. This has the type
<code>OCTET STRING</code>, which has the tag &ldquo;0x04&rdquo;. <code>OCTET STRING</code> means &ldquo;here&rsquo;s
a bunch of bytes!&rdquo; In this case, as described by the spec, those bytes
happen to contain more DER. This is a fairly common pattern in X.509
to deal with parameterized data. For instance, this allows defining a
structure for extensions without knowing ahead of time all the structures
that a future extension might want to carry in its value. If you&rsquo;re a C
programmer, think of it as a <code>void*</code> for data structures. If you prefer Go,
think of it as an <code>interface{}</code>.</p>

<p>Here&rsquo;s that <code>extnValue</code>:</p>

<pre><code>04 81 F5 0481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2
</code></pre>

<p>That&rsquo;s tag &ldquo;0x04&rdquo;, meaning <code>OCTET STRING</code>, followed by &ldquo;0x81 0xF5&rdquo;, meaning
&ldquo;this string is 245 bytes long&rdquo; (the 0x81 prefix is part of <a href="#variable-length">variable length
number encoding</a>).</p>

<p>According to <a href="https://tools.ietf.org/html/rfc6962#section-3.3">RFC 6962, section
3.3</a>, &ldquo;obtained SCTs
can be directly embedded in the final certificate, by encoding the
SignedCertificateTimestampList structure as an ASN.1 <code>OCTET STRING</code>
and inserting the resulting data in the TBSCertificate as an X.509v3
certificate extension&rdquo;</p>

<p>So, we have an <code>OCTET STRING</code>, all&rsquo;s good, right? Except if you remove the
tag and length from extnValue to get its value, you&rsquo;re left with:</p>

<pre><code>04 81 F2 00F0007500DB74AFEEC…
</code></pre>

<p>There&rsquo;s that &ldquo;0x04&rdquo; tag again, but with a shorter length. Why
do we nest one <code>OCTET STRING</code> inside another? It&rsquo;s because the
contents of extnValue are required by RFC 5280 to be valid DER, but a
SignedCertificateTimestampList is not encoded using DER (more on that
in a minute). So, by RFC 6962, a SignedCertificateTimestampList is wrapped in an
<code>OCTET STRING</code>, which is wrapped in another <code>OCTET STRING</code> (the extnValue).</p>

<p>Once we decode that second <code>OCTET STRING</code>, we&rsquo;re left with the contents:</p>

<pre><code>00F0007500DB74AFEEC…
</code></pre>

<p>&ldquo;0x00&rdquo; isn&rsquo;t a valid tag in DER. What is this? It&rsquo;s TLS encoding. This is
defined in <a href="https://tools.ietf.org/html/rfc5246#section-4">RFC 5246, section 4</a>
(the TLS 1.2 RFC). TLS encoding, like ASN.1, has both a way to define data
structures and a way to encode those structures. TLS encoding differs
from DER in that there are no tags, and lengths are only encoded when necessary for
variable-length arrays. Within an encoded structure, the type of a field is determined by
its position, rather than by a tag. This means that TLS-encoded structures are
more compact than DER structures, but also that they can&rsquo;t be processed without
knowing the corresponding schema. For instance, here&rsquo;s the top-level schema from
<a href="https://tools.ietf.org/html/rfc6962#section-3.3">RFC 6962, section 3.3</a>:</p>

<pre><code> The contents of the ASN.1 OCTET STRING embedded in an OCSP extension
or X509v3 certificate extension are as follows:

opaque SerializedSCT&lt;1..2^16-1&gt;;

struct {
SerializedSCT sct_list &lt;1..2^16-1&gt;;
} SignedCertificateTimestampList;

Here, &quot;SerializedSCT&quot; is an opaque byte string that contains the
serialized TLS structure.
</code></pre>

<p>Right away, we&rsquo;ve found one of those variable-length arrays. The length of such
an array (in bytes) is always represented by a length field just big enough to
hold the max array size. The max size of an <code>sct_list</code> is 65535 bytes, so the
length field is two bytes wide. Sure enough, those first two bytes are &ldquo;0x00
0xF0&rdquo;, or 240 in decimal. In other words, this <code>sct_list</code> will have 240 bytes. We
don&rsquo;t yet know how many SCTs will be in it. That will become clear only by
continuing to parse the encoded data and seeing where each struct ends (spoiler
alert: there are two SCTs!).</p>
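<p>That slicing loop can be sketched in a few lines of Python (my own illustration, not code from the post): repeatedly read a two-byte big-endian length, then take that many opaque bytes as the next vector.</p>

```python
def split_tls_vectors(data):
    """Split TLS-encoded bytes into <1..2^16-1> vectors: each vector is a
    two-byte big-endian length followed by that many opaque bytes."""
    vectors, i = [], 0
    while i < len(data):
        length = int.from_bytes(data[i : i + 2], "big")
        vectors.append(data[i + 2 : i + 2 + length])
        i += 2 + length
    return vectors

# A tiny synthetic sct_list: the outer vector is itself length-prefixed,
# and its contents are two one-byte SerializedSCTs.
(sct_list,) = split_tls_vectors(bytes.fromhex("00060001aa0001bb"))
print([sct.hex() for sct in split_tls_vectors(sct_list)])  # ['aa', 'bb']
```

<p>Running the same loop over the real 240-byte <code>sct_list</code> above is what yields the two SerializedSCTs of 0x75 and 0x77 bytes.</p>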

<p>Now we know the first SerializedSCT starts with <code>0075…</code>. SerializedSCT
is itself a variable-length field, this time containing <code>opaque</code> bytes (much like <code>OCTET STRING</code>
back in the ASN.1 world). Like SignedCertificateTimestampList, it has a max size
of 65535 bytes, so we pull off the first two bytes and discover that the first
SerializedSCT is 0x0075 (117 decimal) bytes long. Here&rsquo;s the whole thing, in
hex:</p>

<pre><code>00DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419
</code></pre>

<p>This can be decoded using the TLS encoding struct defined in <a href="https://tools.ietf.org/html/rfc6962#page-13">RFC 6962, section
3.2</a>:</p>

<pre><code>enum { v1(0), (255) }
Version;

struct {
opaque key_id[32];
} LogID;

opaque CtExtensions&lt;0..2^16-1&gt;;

struct {
Version sct_version;
LogID id;
uint64 timestamp;
CtExtensions extensions;
digitally-signed struct {
Version sct_version;
SignatureType signature_type = certificate_timestamp;
uint64 timestamp;
LogEntryType entry_type;
select(entry_type) {
case x509_entry: ASN.1Cert;
case precert_entry: PreCert;
} signed_entry;
CtExtensions extensions;
};
} SignedCertificateTimestamp;
</code></pre>

<p>Breaking that down:</p>

<pre><code># Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64
# uint64 timestamp (milliseconds since the epoch)
000001627313EB19
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct
04030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419
</code></pre>
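<p>Because everything up to the signature is fixed-width, pulling those fields out is pure slicing. A sketch of my own (not Let&rsquo;s Encrypt&rsquo;s code) that extracts the version, log ID, and timestamp, and converts the timestamp back into the date OpenSSL printed earlier:</p>

```python
from datetime import datetime, timezone

def parse_sct_header(sct):
    """Slice the fixed-width leading fields of a serialized SCT:
    1-byte version, 32-byte log ID, 8-byte millisecond timestamp."""
    version = sct[0]
    log_id = sct[1:33]
    timestamp_ms = int.from_bytes(sct[33:41], "big")
    return version, log_id, timestamp_ms

header = bytes.fromhex(
    "00"                                                                  # version v1(0)
    "DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64"    # log ID
    "000001627313EB19"                                                    # timestamp
)
version, log_id, ts = parse_sct_header(header)
# ts == 1522349107993 ms, i.e. Mar 29 18:45:07.993 2018 GMT
when = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)
```

<p>After these 41 bytes come the variable-length extensions and the digitally-signed struct, which need the length-prefixed parsing shown above.</p>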

<p>To understand the &ldquo;digitally-signed struct,&rdquo; we need to turn back to <a href="https://tools.ietf.org/html/rfc5246#section-4.7">RFC 5246,
section 4.7</a>. It says:</p>

<pre><code>A digitally-signed element is encoded as a struct DigitallySigned:

struct {
SignatureAndHashAlgorithm algorithm;
opaque signature&lt;0..2^16-1&gt;;
} DigitallySigned;
</code></pre>

<p>And in <a href="https://tools.ietf.org/html/rfc5246#section-7.4.1.4.1">section
7.4.1.4.1</a>:</p>

<pre><code>enum {
none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5),
sha512(6), (255)
} HashAlgorithm;

enum { anonymous(0), rsa(1), dsa(2), ecdsa(3), (255) }
SignatureAlgorithm;

struct {
HashAlgorithm hash;
SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;
</code></pre>

<p>We have &ldquo;0x0403&rdquo;, which corresponds to sha256(4) and ecdsa(3). The next two
bytes, &ldquo;0x0046&rdquo;, tell us the length of the &ldquo;opaque signature&rdquo; field, 70 bytes in
decimal. To decode the signature, we reference <a href="https://tools.ietf.org/html/rfc4492#page-20">RFC 4492 section
5.4</a>, which says:</p>

<pre><code>The digitally-signed element is encoded as an opaque vector &lt;0..2^16-1&gt;, the
contents of which are the DER encoding corresponding to the
following ASN.1 notation.

Ecdsa-Sig-Value ::= SEQUENCE {
r INTEGER,
s INTEGER
}
</code></pre>

<p>Having dived through two layers of TLS encoding, we are now back in ASN.1 land!
We
<a href="https://lapo.it/asn1js/#304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419">decode</a>
the remaining bytes into a SEQUENCE containing two INTEGERS. And we&rsquo;re done! Here&rsquo;s the whole
extension decoded:</p>

<pre><code># Extension SEQUENCE – RFC 5280
30
# length 0x0104 bytes (260 decimal)
820104
# OBJECT IDENTIFIER
06
# length 0x0A bytes (10 decimal)
0A
# value (1.3.6.1.4.1.11129.2.4.2)
2B06010401D679020402
# OCTET STRING
04
# length 0xF5 bytes (245 decimal)
81F5
# OCTET STRING (embedded) – RFC 6962
04
# length 0xF2 bytes (242 decimal)
81F2
# Beginning of TLS encoded SignedCertificateTimestampList – RFC 5246 / 6962
# length 0xF0 bytes
00F0
# opaque SerializedSCT&lt;1..2^16-1&gt;
# length 0x75 bytes
0075
# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64
# uint64 timestamp (milliseconds since the epoch)
000001627313EB19
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
0403
# opaque signature&lt;0..2^16-1&gt;;
# length 0x0046
0046
# DER-encoded Ecdsa-Sig-Value – RFC 4492
30 # SEQUENCE
44 # length 0x44 bytes
02 # r INTEGER
20 # length 0x20 bytes
# value
7E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD3
02 # s INTEGER
20 # length 0x20 bytes
# value
659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419
# opaque SerializedSCT&lt;1..2^16-1&gt;
# length 0x77 bytes
0077
# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478
# uint64 timestamp (milliseconds since the epoch)
000001627313EB2A
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
0403
# opaque signature&lt;0..2^16-1&gt;;
# length 0x0048
0048
# DER-encoded Ecdsa-Sig-Value – RFC 4492
30 # SEQUENCE
46 # length 0x46 bytes
02 # r INTEGER
21 # length 0x21 bytes
# value
00AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC1
02 # s INTEGER
21 # length 0x21 bytes
# value
008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2
</code></pre>

<p>One surprising thing you might notice: In the first SCT, <code>r</code> and <code>s</code> are twenty
bytes long. In the second SCT, they are both twenty-one bytes long, and have a
leading zero. Integers in DER are two&rsquo;s complement, so if the leftmost bit is
set, they are interpreted as negative. Since <code>r</code> and <code>s</code> are positive, if the
leftmost bit would be a 1, an extra byte has to be added so that the leftmost
bit can be 0.</p>
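<p>The padding rule is easy to demonstrate (a sketch of mine, not from the post): take the minimal big-endian bytes of a non-negative integer, and prepend 0x00 whenever the top bit is set.</p>

```python
def der_integer_content(n):
    """Content bytes of a DER INTEGER for non-negative n: minimal big-endian
    encoding, with 0x00 prepended if the top bit is set (otherwise the value
    would read back as a negative two's-complement number)."""
    raw = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    return b"\x00" + raw if raw[0] & 0x80 else raw

print(der_integer_content(0x7E1F).hex())  # 7e1f   -- top bit clear, no padding
print(der_integer_content(0xAB72).hex())  # 00ab72 -- top bit set, padded
```

<p>That&rsquo;s exactly why the second SCT&rsquo;s <code>r</code> and <code>s</code> (which happen to start with high bytes like 0xAB) each gained a byte.</p>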

<p>This is a little taste of what goes into encoding a certificate. I hope it was
informative! If you&rsquo;d like to learn more, I recommend &ldquo;<a href="http://luca.ntop.org/Teaching/Appunti/asn1.html">A Layman&rsquo;s Guide to a
Subset of ASN.1, BER, and DER</a>.&rdquo;</p>

<p><a name="poison"></a>Footnote 1: A &ldquo;poison extension&rdquo; is defined by <a href="https://tools.ietf.org/html/rfc6962#section-3.1">RFC 6962
section 3.1</a>:</p>

<pre><code>The Precertificate is constructed from the certificate to be issued by adding a special
critical poison extension (OID `1.3.6.1.4.1.11129.2.4.3`, whose
extnValue OCTET STRING contains ASN.1 NULL data (0x05 0x00))
</code></pre>

<p>In other words, it&rsquo;s an empty extension whose only purpose is to ensure that
certificate processors will not accept precertificates as valid certificates. The
specification ensures this by setting the &ldquo;critical&rdquo; bit on the extension, which
ensures that code that doesn&rsquo;t recognize the extension will reject the whole
certificate. Code that does recognize the extension specifically as poison
will also reject the certificate.</p>

<p><a name="variable-length"></a>Footnote 2: Lengths from 0-127 are represented by
a single byte (short form). To express longer lengths, more bytes are used (long form).
The high bit (0x80) on the first byte is set to distinguish long form from short
form. The remaining bits are used to express how many more bytes to read for the
length. For instance, 0x81F5 means &ldquo;this is long form because the length is
greater than 127, but there&rsquo;s still only one byte of length (0xF5) to decode.&rdquo;</p>
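<p>That rule translates directly into parsing logic; a short Python sketch of my own:</p>

```python
def read_der_length(data, i):
    """Read a DER length starting at offset i; return (length, next_offset)."""
    first = data[i]
    if first < 0x80:                       # short form: the byte is the length
        return first, i + 1
    count = first & 0x7F                   # long form: number of length bytes that follow
    return int.from_bytes(data[i + 1 : i + 1 + count], "big"), i + 1 + count

print(read_der_length(bytes.fromhex("81F5"), 0))    # (245, 2)
print(read_der_length(bytes.fromhex("820104"), 0))  # (260, 3)
```

<p>The two examples are the lengths seen earlier in the decoded extension: 0x81 0xF5 for the outer <code>OCTET STRING</code> and 0x82 0x01 0x04 for the extension SEQUENCE.</p>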

A geometric Rust adventure

Post Syndicated from Eevee original https://eev.ee/blog/2018/03/30/a-geometric-rust-adventure/

Hi. Yes. Sorry. I’ve been trying to write this post for ages, but I’ve also been working on a huge writing project, and apparently I have a very limited amount of writing mana at my disposal. I think this is supposed to be a Patreon reward from January. My bad. I hope it’s super great to make up for the wait!

I recently ported some math code from C++ to Rust in an attempt to do a cool thing with Doom. Here is my story.

The problem

I presented it recently as a conundrum (spoilers: I solved it!), but most of those details are unimportant.

The short version is: I have some shapes. I want to find their intersection.

Really, I want more than that: I want to drop them all on a canvas, intersect everything with everything, and pluck out all the resulting polygons. The input is a set of cookie cutters, and I want to press them all down on the same sheet of dough and figure out what all the resulting contiguous pieces are. And I want to know which cookie cutter(s) each piece came from.

But intersection is a good start.

Example of the goal.  Given two squares that overlap at their corners, I want to find the small overlap piece, plus the two L-shaped pieces left over from each square

I’m carefully referring to the input as shapes rather than polygons, because each one could be a completely arbitrary collection of lines. Obviously there’s not much you can do with shapes that aren’t even closed, but at the very least, I need to handle concavity and multiple disconnected polygons that together are considered a single input.

This is a non-trivial problem with a lot of edge cases, and offhand I don’t know how to solve it robustly. I’m not too eager to go figure it out from scratch, so I went hunting for something I could build from.

(Infuriatingly enough, I can just dump all the shapes out in an SVG file and any SVG viewer can immediately solve the problem, but that doesn’t quite help me. Though I have had a few people suggest I just rasterize the whole damn problem, and after all this, I’m starting to think they may have a point.)

Alas, I couldn’t find a Rust library for doing this. I had a hard time finding any library for doing this that wasn’t a massive fully-featured geometry engine. (I could’ve used that, but I wanted to avoid non-Rust dependencies if possible, since distributing software is already enough of a nightmare.)

A Twitter follower directed me towards a paper that described how to do very nearly what I wanted and nothing else: “A simple algorithm for Boolean operations on polygons” by F. Martínez (2013). Being an academic paper, it’s trapped in paywall hell; sorry about that. (And as I understand it, none of the money you’d pay to get the paper would even go to the authors? Is that right? What a horrible and predatory system for discovering and disseminating knowledge.)

The paper isn’t especially long, but it does describe an awful lot of subtle details and is mostly written in terms of its own reference implementation. Rather than write my own implementation based solely on the paper, I decided to try porting the reference implementation from C++ to Rust.

And so I fell down the rabbit hole.

The basic algorithm

Thankfully, the author has published the sample code on his own website, if you want to follow along. (It’s the bottom link; the same author has, confusingly, published two papers on the same topic with similar titles, four years apart.)

If not, let me describe the algorithm and how the code is generally laid out. The algorithm itself is based on a sweep line, where a vertical line passes across the plane and ✨ does stuff ✨ as it encounters various objects. This implementation has no physical line; instead, it keeps track of which segments from the original polygon would be intersecting the sweep line, which is all we really care about.

A vertical line is passing rightwards over a couple intersecting shapes.  The line currently intersects two of the shapes' sides, and these two sides are the "sweep list"

The code is all bundled inside a class with only a single public method, run, because… that’s… more object-oriented, I guess. There are several helper methods, and state is stored in some attributes. A rough outline of run is:

  1. Run through all the line segments in both input polygons. For each one, generate two SweepEvents (one for each endpoint) and add them to a std::deque for storage.

    Add pointers to the two SweepEvents to a std::priority_queue, the event queue. This queue uses a custom comparator to order the events from left to right, so the top element is always the leftmost endpoint.

  2. Loop over the event queue (where an “event” means the sweep line passed over the left or right end of a segment). Encountering a left endpoint means the sweep line is newly touching that segment, so add it to a std::set called the sweep list. An important point is that std::set is ordered, and the sweep list uses a comparator that keeps segments in order vertically.

    Encountering a right endpoint means the sweep line is leaving a segment, so that segment is removed from the sweep list.

  3. When a segment is added to the sweep list, it may have up to two neighbors: the segment above it and the segment below it. Call possibleIntersection to check whether it intersects either of those neighbors. (This is nearly sufficient to find all intersections, which is neat.)

  4. If possibleIntersection detects an intersection, it will split each segment into two pieces then and there. The old segment is shortened in-place to become the left part, and a new segment is created for the right part. The new endpoints at the point of intersection are added to the event queue.

  5. Some bookkeeping is done along the way to track which original polygons each segment is inside, and eventually the segments are reconstructed into new polygons.

Hopefully that’s enough to follow along. It took me an inordinately long time to tease this out. The comments aren’t especially helpful.

    std::deque<SweepEvent> eventHolder;    // It holds the events generated during the computation of the boolean operation

Syntax and basic semantics

The first step was to get something that rustc could at least parse, which meant translating C++ syntax to Rust syntax.

This was surprisingly straightforward! C++ classes become Rust structs. (There was no inheritance here, thankfully.) All the method declarations go away. Method implementations only need to be indented and wrapped in impl.

I did encounter some unnecessarily obtuse uses of the ternary operator:

(prevprev != sl.begin()) ? --prevprev : prevprev = sl.end();

Rust doesn’t have a ternary — you can use a regular if block as an expression — so I expanded these out.

C++ switch blocks become Rust match blocks, but otherwise function basically the same. Rust’s enums are scoped (hallelujah), so I had to explicitly spell out where enum values came from.

The only really annoying part was changing function signatures; C++ types don’t look much at all like Rust types, save for the use of angle brackets. Rust also doesn’t pass by implicit reference, so I needed to sprinkle a few &s around.

I would’ve had a much harder time here if this code had relied on any remotely esoteric C++ functionality, but thankfully it stuck to pretty vanilla features.

Language conventions

This is a geometry problem, so the sample code unsurprisingly has its own home-grown point type. Rather than port that type to Rust, I opted to use the popular euclid crate. Not only is it code I didn’t have to write, but it already does several things that the C++ code was doing by hand inline, like dot products and cross products. And all I had to do was add one line to Cargo.toml to use it! I have no idea how anyone writes C or C++ without a package manager.

The C++ code used getters, i.e. point.x (). I’m not a huge fan of getters, though I do still appreciate the need for them in lowish-level systems languages where you want to future-proof your API and the language wants to keep a clear distinction between attribute access and method calls. But this is a point, which is nothing more than two of the same numeric type glued together; what possible future logic might you add to an accessor? The euclid authors appear to side with me and leave the coordinates as public fields, so I took great joy in removing all the superfluous parentheses.

Polygons are represented with a Polygon class, which has some number of Contours. A contour is a single contiguous loop. Something you’d usually think of as a polygon would only have one, but a shape with a hole would have two: one for the outside, one for the inside. The weird part of this arrangement was that Polygon implemented nearly the entire STL container interface, then waffled between using it and not using it throughout the rest of the code. Rust lets anything in the same module access non-public fields, so I just skipped all that and used polygon.contours directly. Hell, I think I made contours public.

Finally, the SweepEvent type has a pol field that’s declared as an enum PolygonType (either SUBJECT or CLIPPING, to indicate which of the two inputs it is), but then some other code uses the same field as a numeric index into a polygon’s contours. Boy I sure do love static typing where everything’s a goddamn integer. I wanted to extend the algorithm to work on arbitrarily many input polygons anyway, so I scrapped the enum and this became a usize.


Then I got to all the uses of STL. I have only a passing familiarity with the C++ standard library, and this code actually made modest use of it, which caused some fun days-long misunderstandings.

As mentioned, the SweepEvents are stored in a std::deque, which is never read from. It took me a little thinking to realize that the deque was being used as an arena: it’s the canonical home for the structs so pointers to them can be tossed around freely. (It can’t be a std::vector, because that could reallocate and invalidate all the pointers; std::deque is typically a sequence of fixed-size blocks, and guarantees that references to existing elements stay valid as long as you only push to the ends.)

Rust’s standard library does have a doubly-linked list type, but I knew I’d run into ownership hell here later anyway, so I think I replaced it with a Rust Vec to start with. It won’t compile either way, so whatever. We’ll get back to this in a moment.

The list of segments currently intersecting the sweep line is stored in a std::set. That type is explicitly ordered, which I’m very glad I knew already. Rust has two set types, HashSet and BTreeSet; unsurprisingly, the former is unordered and the latter is ordered. Dropping in BTreeSet and fixing some method names got me 90% of the way there.

Which brought me to the other 90%. See, the C++ code also relies on finding nodes adjacent to the node that was just inserted, via STL iterators.

next = prev = se->posSL = it = sl.insert(se).first;
(prev != sl.begin()) ? --prev : prev = sl.end();
++next;

I freely admit I’m bad at C++, but this seems like something that could’ve used… I don’t know, 1 comment. Or variable names more than two letters long. What it actually does is:

  1. Add the current sweep event (se) to the sweep list (sl), which returns a pair whose first element is an iterator pointing at the just-inserted event.

  2. Copy that iterator to several other variables, including prev and next.

  3. If the event was inserted at the beginning of the sweep list, set prev to the sweep list’s end iterator, which in C++ is a legal-but-invalid iterator meaning “the space after the end” or something. This is checked for in later code, to see if there is a previous event to look at. Otherwise, decrement prev, so it’s now pointing at the event immediately before the inserted one.

  4. Increment next normally. If the inserted event is last, then this will bump next to the end iterator anyway.

In other words, I need to get the previous and next elements from a BTreeSet. Rust does have bidirectional iterators, which BTreeSet supports… but BTreeSet::insert only returns a bool telling me whether or not anything was inserted, not the position. I came up with this:

let mut maybe_below = active_segments.range(..segment).last().map(|v| *v);
let mut maybe_above = active_segments.range(segment..).next().map(|v| *v);
active_segments.insert(segment);

The range method returns an iterator over a subset of the tree. The .. syntax makes a range (where the right endpoint is exclusive), so ..segment finds the part of the tree before the new segment, and segment.. finds the part of the tree after it. (The latter would start with the segment itself, except I haven’t inserted it yet, so it’s not actually there.)

Then the standard next() and last() methods on bidirectional iterators find me the element I actually want. But the iterator might be empty, so they both return an Option. Also, iterators tend to return references to their contents, but in this case the contents are already references, and I don’t want a double reference, so the map call dereferences one layer — but only if the Option contains a value. Phew!

This is slightly less efficient than the C++ code, since it has to look up where segment goes three times rather than just once. I might be able to get it down to two with some more clever finagling of the iterator, but microscopic performance considerations were a low priority here.
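Here's the same neighbor-finding trick in isolation, with plain integers standing in for sweep segments:

```rust
use std::collections::BTreeSet;

// Find the would-be neighbors of a new element *before* inserting it,
// mirroring the sweep-list lookup above.
fn neighbors(set: &BTreeSet<i32>, x: i32) -> (Option<i32>, Option<i32>) {
    let below = set.range(..x).last().copied(); // greatest element below x
    let above = set.range(x..).next().copied(); // least element at or above x
    (below, above)
}

fn main() {
    let set: BTreeSet<i32> = [10, 20, 30].into_iter().collect();
    assert_eq!(neighbors(&set, 25), (Some(20), Some(30)));
    assert_eq!(neighbors(&set, 5), (None, Some(10))); // nothing below the minimum
}
```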

Finally, the event queue uses a std::priority_queue to keep events in a desired order and efficiently pop the next one off the top.

Except priority queues act like heaps, where the greatest (i.e., last) item is made accessible.

Sorting out sorting

C++ comparison functions return true to indicate that the first argument is less than the second argument. Sweep events occur from left to right. You generally implement sorts so that the first thing comes, erm, first.

But sweep events go in a priority queue, and priority queues surface the last item, not the first. This C++ code handled this minor wrinkle by implementing its comparison backwards.

struct SweepEventComp : public std::binary_function<SweepEvent, SweepEvent, bool> { // for sorting sweep events
// Compare two sweep events
// Return true means that e1 is placed at the event queue after e2, i.e,, e1 is processed by the algorithm after e2
bool operator() (const SweepEvent* e1, const SweepEvent* e2)
{
    if (e1->point.x () > e2->point.x ()) // Different x-coordinate
        return true;
    if (e2->point.x () > e1->point.x ()) // Different x-coordinate
        return false;
    if (e1->point.y () != e2->point.y ()) // Different points, but same x-coordinate. The event with lower y-coordinate is processed first
        return e1->point.y () > e2->point.y ();
    if (e1->left != e2->left) // Same point, but one is a left endpoint and the other a right endpoint. The right endpoint is processed first
        return e1->left;
    // Same point, both events are left endpoints or both are right endpoints.
    if (signedArea (e1->point, e1->otherEvent->point, e2->otherEvent->point) != 0) // not collinear
        return e1->above (e2->otherEvent->point); // the event associate to the bottom segment is processed first
    return e1->pol > e2->pol;
}
};

Maybe it’s just me, but I had a hell of a time just figuring out what problem this was even trying to solve. I still have to reread it several times whenever I look at it, to make sure I’m getting the right things backwards.

Making this even more ridiculous is that there’s a second implementation of this same sort, with the same name, in another file — and that one’s implemented forwards. And doesn’t use a tiebreaker. I don’t entirely understand how this even compiles, but it does!

I painstakingly translated this forwards to Rust. Unlike the STL, Rust doesn’t take custom comparators for its containers, so I had to implement ordering on the types themselves (which makes sense, anyway). I wrapped everything in the priority queue in a Reverse, which does what it sounds like.
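A quick sketch of what Reverse buys you with a BinaryHeap (which is a max-heap by default):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // BinaryHeap surfaces the greatest item; wrapping everything in Reverse
    // flips the comparison, so pop() yields the smallest event first.
    let mut queue = BinaryHeap::new();
    for x in [3, 1, 2] {
        queue.push(Reverse(x));
    }
    assert_eq!(queue.pop(), Some(Reverse(1)));
    assert_eq!(queue.pop(), Some(Reverse(2)));
    assert_eq!(queue.pop(), Some(Reverse(3)));
}
```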

I’m fairly pleased with Rust’s ordering model. Most of the work is done in Ord, a trait with a cmp() method returning an Ordering (one of Less, Equal, and Greater). No magic numbers, no need to implement all six ordering methods! It’s incredible. Ordering even has some handy methods on it, so the usual case of “order by this, then by this” can be written as:

return self.point().x.cmp(&other.point().x)
    .then(self.point().y.cmp(&other.point().y));

Well. Just kidding! It’s not quite that easy. You see, the points here are composed of floats, and floats have the fun property that not all of them are comparable. Specifically, NaN is not less than, greater than, or equal to anything else, including itself. So IEEE 754 float ordering cannot be expressed with Ord. Unless you want to just make up an answer for NaN, but Rust doesn’t tend to do that.

Rust’s float types thus implement the weaker PartialOrd, whose method returns an Option<Ordering> instead. That makes the above example slightly uglier:

return self.point().x.partial_cmp(&other.point().x).unwrap()
    .then(self.point().y.partial_cmp(&other.point().y).unwrap())

Also, since I use unwrap() here, this code will panic and take the whole program down if the points are infinite or NaN. Don’t do that.

This caused some minor inconveniences in other places; for example, the general-purpose cmp::min() doesn’t work on floats, because it requires an Ord-erable type. Thankfully there’s a f64::min(), which handles a NaN by returning the other argument.
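A few assertions showing the difference, if you want to convince yourself:

```rust
fn main() {
    // NaN refuses to be ordered, even against itself...
    assert_eq!(f64::NAN.partial_cmp(&1.0), None);
    assert_eq!(f64::NAN.partial_cmp(&f64::NAN), None);

    // ...but f64::min sidesteps the problem by returning the non-NaN
    // argument, so it never needs the total order that cmp::min demands.
    assert_eq!(2.0_f64.min(1.0), 1.0);
    assert_eq!(f64::NAN.min(1.0), 1.0);
    assert_eq!(1.0_f64.min(f64::NAN), 1.0);
}
```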

(Cool story: for the longest time I had this code using f32s. I’m used to translating int to “32 bits”, and apparently that instinct kicked in for floats as well, even floats spelled double.)

The only other sorting adventure was this:

// Due to overlapping edges the resultEvents array can be not wholly sorted
bool sorted = false;
while (!sorted) {
    sorted = true;
    for (unsigned int i = 0; i < resultEvents.size (); ++i) {
        if (i + 1 < resultEvents.size () && sec (resultEvents[i], resultEvents[i+1])) {
            std::swap (resultEvents[i], resultEvents[i+1]);
            sorted = false;
        }
    }
}

(I originally misread this comment as saying “the array cannot be wholly sorted” and had no idea why that would be the case, or why the author would then immediately attempt to bubble sort it.)

I’m still not sure why this uses an ad-hoc sort instead of std::sort. But I’m used to taking for granted that general-purpose sorting implementations are tuned to work well for almost-sorted data, like Python’s. Maybe C++ is untrustworthy here, for some reason. I replaced it with a call to .sort() and all seemed fine.
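For the record, the replacement boils down to something like this — Vec::sort is a stable, adaptive sort, so the almost-sorted case the ad-hoc bubble sort was presumably worried about is cheap:

```rust
fn main() {
    // A nearly sorted sequence with a couple of adjacent swaps,
    // like the result events with overlapping edges.
    let mut events = vec![1, 2, 4, 3, 5, 7, 6];
    events.sort(); // stable and adaptive; equal elements keep their order
    assert_eq!(events, vec![1, 2, 3, 4, 5, 6, 7]);
}
```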

Phew! We’re getting there. Finally, my code appears to type-check.

But now I see storm clouds gathering on the horizon.

Ownership hell

I have a problem. I somehow run into this problem every single time I use Rust. The solutions are never especially satisfying, and all the hacks I might use if forced to write C++ turn out to be unsound, which is even more annoying because rustc is just sitting there with this smug “I told you so” expression and—

The problem is ownership, which Rust is fundamentally built on. Any given value must have exactly one owner, and Rust must be able to statically convince itself that:

  1. No reference to a value outlives that value.
  2. If a mutable reference to a value exists, no other references to that value exist at the same time.

This is the core of Rust. It guarantees at compile time that you cannot lose pointers to allocated memory, you cannot double-free, you cannot have dangling pointers.

It also completely thwarts a lot of approaches you might be inclined to take if you come from managed languages (where who cares, the GC will take care of it) or C++ (where you just throw pointers everywhere and hope for the best apparently).

For example, pointer loops are impossible. Rust’s understanding of ownership and lifetimes is hierarchical, and it simply cannot express loops. (Rust’s own doubly-linked list type uses raw pointers and unsafe code under the hood, where “unsafe” is an escape hatch for the usual ownership rules. Since I only recently realized that pointers to the inside of a mutable Vec are a bad idea, I figure I should probably not be writing unsafe code myself.)

This throws a few wrenches in the works.

Problem the first: pointer loops

I immediately ran into trouble with the SweepEvent struct itself. A SweepEvent pulls double duty: it represents one endpoint of a segment, but each left endpoint also handles bookkeeping for the segment itself — which means that most of the fields on a right endpoint are unused. Also, and more importantly, each SweepEvent has a pointer to the corresponding SweepEvent at the other end of the same segment. So a pair of SweepEvents point to each other.

Rust frowns upon this. In retrospect, I think I could’ve kept it working, but I also think I’m wrong about that.

My first step was to wrench SweepEvent apart. I moved all of the segment-stuff (which is virtually all of it) into a single SweepSegment type, and then populated the event queue with a SweepEndpoint tuple struct, similar to:

enum SegmentEnd {
    Left,
    Right,
}

struct SweepEndpoint<'a>(&'a SweepSegment, SegmentEnd);

This makes SweepEndpoint essentially a tuple with a name. The 'a is a lifetime and says, more or less, that a SweepEndpoint cannot outlive the SweepSegment it references. Makes sense.

Problem solved! I no longer have mutually referential pointers. But I do still have pointers (well, references), and they have to point to something.

Problem the second: where’s all the data

Which brings me to the problem I always run into with Rust. I have a bucket of things, and I need to refer to some of them multiple times.

I tried half a dozen different approaches here and don’t clearly remember all of them, but I think my core problem went as follows. I translated the C++ class to a Rust struct with some methods hanging off of it. A simplified version might look like this.

struct Algorithm {
    arena: LinkedList<SweepSegment>,
    event_queue: BinaryHeap<SweepEndpoint>,
}

Ah, hang on — SweepEndpoint needs to be annotated with a lifetime, so Rust can enforce that those endpoints don’t live longer than the segments they refer to. No problem?

struct Algorithm<'a> {
    arena: LinkedList<SweepSegment>,
    event_queue: BinaryHeap<SweepEndpoint<'a>>,
}

Okay! Now for some methods.

fn run(&mut self) {
    self.arena.push_back(SweepSegment{ data: 5 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    for event in &self.event_queue {
        println!("{:?}", event)
    }
}

Aaand… this doesn’t work. Rust “cannot infer an appropriate lifetime for autoref due to conflicting requirements”. The trouble is that self.arena.back() takes a reference to self.arena, and then I put that reference in the event queue. But I promised that everything in the event queue has lifetime 'a, and I don’t actually know how long self lives here; I only know that it can’t outlive 'a, because that would invalidate the references it holds.

A little random guessing led me to change &mut self to &'a mut self — which is fine because the entire impl block this lives in is already parameterized by 'a — and that makes this compile! Hooray! I think that’s because I’m saying self itself has exactly the same lifetime as the references it holds onto, which is true, since it’s referring to itself.

Let’s get a little more ambitious and try having two segments.

fn run(&'a mut self) {
    self.arena.push_back(SweepSegment{ data: 5 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    self.arena.push_back(SweepSegment{ data: 17 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    for event in &self.event_queue {
        println!("{:?}", event)
    }
}

Whoops! Rust complains that I’m trying to mutate self.arena while other stuff is referring to it. And, yes, that’s true — I have references to it in the event queue, and Rust is preventing me from potentially deleting everything from the queue when references to it still exist. I’m not actually deleting anything here, of course (though I could be if this were a Vec!), but Rust’s type system can’t encode that (and I dread the thought of a type system that can).

I struggled with this for a while, and rapidly encountered another complete showstopper:

fn run(&'a mut self) {
    self.mutate_something();
    self.mutate_something();
}

fn mutate_something(&'a mut self) {}

Rust objects that I’m trying to borrow self mutably, twice — once for the first call, once for the second.

But why? A borrow is supposed to end automatically once it’s no longer used, right? Maybe if I throw some braces around it for scope… nope, that doesn’t help either.

It’s true that borrows usually end automatically, but here I have explicitly told Rust that mutate_something() should borrow with the lifetime 'a, which is the same as the lifetime in run(). So the first call explicitly borrows self for at least the rest of the method. Removing the lifetime from mutate_something() does fix this error, but if that method tries to add new segments, I’m back to the original problem.

Oh no. The mutation in the C++ code is several calls deep. Porting it directly seems nearly impossible.

The typical solution here — at least, the first thing people suggest to me on Twitter — is to wrap basically everything everywhere in Rc<RefCell<T>>, which gives you something that’s reference-counted (avoiding questions of ownership) and defers borrow checks until runtime (avoiding questions of mutable borrows). But that seems pretty heavy-handed here — not only does RefCell add .borrow() noise anywhere you actually want to interact with the underlying value, but do I really need to refcount these tiny structs that only hold a handful of floats each?
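For reference, here's what that heavy-handed approach looks like in miniature:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership via refcounting, mutability via runtime borrow checks.
    let segment = Rc::new(RefCell::new(5.0_f64));
    let alias = Rc::clone(&segment); // bumps the refcount; no deep copy

    *segment.borrow_mut() += 1.0;

    // Every access now goes through .borrow(), and an overlapping mutable
    // borrow would panic at runtime instead of failing to compile.
    assert_eq!(*alias.borrow(), 6.0);
}
```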

I set out to find a middle ground.

Solution, kind of

I really, really didn’t want to perform serious surgery on this code just to get it to build. I still didn’t know if it worked at all, and now I had to rearrange it without being able to check if I was breaking it further. (This isn’t Rust’s fault; it’s a natural problem with porting between fairly different paradigms.)

So I kind of hacked it into working with minimal changes, producing a grotesque abomination which I’m ashamed to link to. Here’s how!

First, I got rid of the class. It turns out this makes lifetime juggling much easier right off the bat. I’m pretty sure Rust considers everything in a struct to be destroyed simultaneously (though in practice it guarantees it’ll destroy fields in order), which doesn’t leave much wiggle room. Locals within a function, on the other hand, can each have their own distinct lifetimes, which solves the problem of expressing that the borrows won’t outlive the arena.

Speaking of the arena, I solved the mutability problem there by switching to… an arena! The typed-arena crate (a port of a type used within Rust itself, I think) is an allocator — you give it a value, and it gives you back a reference, and the reference is guaranteed to be valid for as long as the arena exists. The method that does this is sneaky and takes &self rather than &mut self, so Rust doesn’t know you’re mutating the arena and won’t complain. (One drawback is that the arena will never free anything you give to it, but that’s not a big problem here.)


My next problem was with mutation. The main loop repeatedly calls possibleIntersection with pairs of segments, which can split either or both segments. Rust definitely doesn’t like that — I’d have to pass in two &muts, both of which are mutable references into the same arena, and I’d have a bunch of immutable references into that arena in the sweep list and elsewhere. This isn’t going to fly.

This is kind of a shame, and is one place where Rust seems a little overzealous. Something like this seems like it ought to be perfectly valid:

let mut v = vec![1u32, 2u32];
let a = &mut v[0];
let b = &mut v[1];
// do stuff with a, b

The trouble is, Rust only knows the type signature, which here is something like index_mut(&'a mut self, index: usize) -> &'a mut T. Nothing about that says that you’re borrowing distinct elements rather than some core part of the type — and, in fact, the above code is only safe because you’re borrowing distinct elements. In the general case, Rust can’t possibly know that. It seems obvious enough from the different indexes, but nothing about the type system even says that different indexes have to return different values. And what if one were borrowed as &mut v[1] and the other were borrowed with v.iter_mut().next().unwrap()?
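Worth noting: for this specific two-indexes case, the standard library does offer an escape hatch. split_at_mut carves one mutable slice into two non-overlapping ones, so the disjointness is visible to the compiler:

```rust
fn main() {
    let mut v = vec![1u32, 2u32];

    // Two provably disjoint mutable slices, split before index 1.
    let (left, right) = v.split_at_mut(1);
    let a = &mut left[0];  // element 0
    let b = &mut right[0]; // element 1
    *a += 10;
    *b += 20;

    assert_eq!(v, vec![11, 22]);
}
```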

Anyway, this is exactly where people start to turn to RefCell — if you’re very sure you know better than Rust, then a RefCell will skirt the borrow checker while still enforcing at runtime that you don’t have more than one mutable borrow at a time.

But half the lines in this algorithm examine the endpoints of a segment! I don’t want to wrap the whole thing in a RefCell, or I’ll have to say this everywhere:

if segment1.borrow().point.x < segment2.borrow().point.x { ... }

Gross.

But wait — this code only mutates the points themselves in one place. When a segment is split, the original segment becomes the left half, and a new segment is created to be the right half. There’s no compelling need for this; it saves an allocation for the left half, but it’s not critical to the algorithm.

Thus, I settled on a compromise. My segment type now looks like this:

struct SegmentPacket {
    // a bunch of flags and whatnot used in the algorithm
}
struct SweepSegment {
    left_point: MapPoint,
    right_point: MapPoint,
    faces_outwards: bool,
    index: usize,
    order: usize,
    packet: RefCell<SegmentPacket>,
}

I do still need to call .borrow() or .borrow_mut() to get at the stuff in the “packet”, but that’s far less common, so there’s less noise overall. And I don’t need to wrap it in Rc because it’s part of a type that’s allocated in the arena and passed around only via references.
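In miniature, the compromise looks something like this (type and field names hypothetical, not the real ones):

```rust
use std::cell::RefCell;

// Plain fields stay directly readable; only the mutable bookkeeping
// hides behind a RefCell.
struct Segment {
    left: f64,
    packet: RefCell<u32>,
}

// Mutation works through a shared reference, checked at runtime.
fn mark(seg: &Segment) {
    *seg.packet.borrow_mut() += 1;
}

fn main() {
    let seg = Segment { left: 0.0, packet: RefCell::new(0) };
    mark(&seg);
    mark(&seg);
    assert_eq!(*seg.packet.borrow(), 2);
    assert_eq!(seg.left, 0.0); // no .borrow() noise for ordinary fields
}
```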


This still leaves me with the problem of how to actually perform the splits.

I’m not especially happy with what I came up with, I don’t know if I can defend it, and I suspect I could do much better. I changed possibleIntersection so that rather than performing splits, it returns the points at which each segment needs splitting, in the form (usize, Option<MapPoint>, Option<MapPoint>). (The usize is used as a flag for calling code and oughta be an enum, but, isn’t yet.)

Now the top-level function is responsible for all arena management, and all is well.

Except, er. possibleIntersection is called multiple times, and I don’t want to copy-paste a dozen lines of split code after each call. I tried putting just that code in its own function, which had the world’s most godawful signature, and that didn’t work because… uh… hm. I can’t remember why, exactly! Should’ve written that down.

I tried a local closure next, but closures capture their environment by reference, so now I had references to a bunch of locals for as long as the closure existed, which meant I couldn’t mutate those locals. Argh. (This seems a little silly to me, since the closure’s references cannot possibly be used for anything if the closure isn’t being called, but maybe I’m missing something. Or maybe this is just a limitation of lifetimes.)

Increasingly desperate, I tried using a macro. But… macros are hygienic, which means that any new name you use inside a macro is different from any name outside that macro. The macro thus could not see any of my locals. Usually that’s good, but here I explicitly wanted the macro to mess with my locals.

I was just about to give up and go live as a hermit in a cabin in the woods, when I discovered something quite incredible. You can define local macros! If you define a macro inside a function, then it can see any locals defined earlier in that function. Perfect!

macro_rules! _split_segment (
    ($seg:expr, $pt:expr) => (
        {
            let pt = $pt;
            let seg = $seg;
            // ... waaay too much code ...
        }
    );
);

loop {
    // ...
    // This is possibleIntersection, renamed because Rust rightfully complains about camelCase
    let cross = handle_intersections(Some(segment), maybe_above);
    if let Some(pt) = cross.1 {
        segment = _split_segment!(segment, pt);
    }
    if let Some(pt) = cross.2 {
        maybe_above = Some(_split_segment!(maybe_above.unwrap(), pt));
    }
    // ...
}

(This doesn’t actually quite match the original algorithm, which has one case where a segment can be split twice. I realized that I could just do the left-most split, and a later iteration would perform the other split. I sure hope that’s right, anyway.)

It’s a bit ugly, and I ran into a whole lot of implicit behavior from the C++ code that I had to fix — for example, the segment is sometimes mutated just before it’s split, purely as a shortcut for mutating the left part of the split. But it finally compiles! And runs! And kinda worked, a bit!

Aftermath

I still had a lot of work to do.

For one, this code was designed for intersecting two shapes, not mass-intersecting a big pile of shapes. The basic algorithm doesn’t care about how many polygons you start with — all it sees is segments — but the code for constructing the return value needed some heavy modification.

The biggest change by far? The original code traced each segment once, expecting the result to be only a single shape. I had to change that to trace each side of each segment once, since the vast bulk of the output consists of shapes which share a side. This violated a few assumptions, which I had to hack around.

I also ran into a couple very bad edge cases, spent ages debugging them, then found out that the original algorithm had a subtle workaround that I’d commented out because it was awkward to port but didn’t seem to do anything. Whoops!

The worst was a precision error, where a vertical line could be split on a point not quite actually on the line, which wreaked all kinds of havoc. I worked around that with some tasteful rounding, which is highly dubious but makes the output more appealing to my squishy human brain. (I might switch to the original workaround, but I really dislike that even simple cases can spit out points at 1500.0000000000003. The whole thing is parameterized over the coordinate type, so maybe I could throw a rational type in there and cross my fingers?)

All that done, I finally, finally, after a couple months of intermittent progress, got what I wanted!

This is Doom 2’s MAP01. The black area to the left of center is where the player starts. Gray areas indicate where the player can walk from there, with lighter shades indicating more distant areas, where “distance” is measured by the minimum number of line crossings. Red areas can’t be reached at all.

(Note: large playable chunks of the map, including the exit room, are red. That’s because those areas are behind doors, and this code doesn’t understand doors yet.)

(Also note: The big crescent in the lower-right is also black because I was lazy and looked for the player’s starting sector by checking the bbox, and that sector’s bbox happens to match.)

The code that generated this had to go out of its way to delete all the unreachable zones around solid walls. I think I could modify the algorithm to do that on the fly pretty easily, which would probably speed it up a bit too. Downside is that the algorithm would then be pretty specifically tied to this problem, and not usable for any other kind of polygon intersection, which I would think could come up elsewhere? The modifications would be pretty minor, though, so maybe I could confine them to a closure or something.

Some final observations

It runs surprisingly slowly. Like, multiple seconds. Unless I add --release, which speeds it up by a factor of… some number with multiple digits. Wahoo. Debug mode has a high price, especially with a lot of calls in play.

The current state of this code is on GitHub. Please don’t look at it. I’m very sorry.

Honestly, most of my anguish came not from Rust, but from the original code relying on lots of fairly subtle behavior without bothering to explain what it was doing or even hint that anything unusual was going on. God, I hate C++.

I don’t know if the Rust community can learn from this. I don’t know if I even learned from this. Let’s all just quietly forget about it.

Now I just need to figure this one out…

Major Pirate Site Operators’ Sentences Increased on Appeal

Post Syndicated from Andy original https://torrentfreak.com/major-pirate-site-operators-sentences-increased-on-appeal-180330/

With The Pirate Bay, the most famous pirate site in Swedish history, still in full swing, a lesser-known streaming platform started to gain traction more than half a decade ago.

From humble beginnings, Swefilmer eventually grew to become Sweden’s most popular movie and TV show streaming site. At one stage it was credited alongside another streaming portal for serving up to 25% of all online video streaming in Sweden.

But in 2015, everything came crashing down. An operator of the site in his early twenties was raided by local police and arrested. An older Turkish man, who was accused of receiving donations from users and setting up Swefilmer’s deals with advertisers, was later arrested in Germany.

Their activities between November 2013 and June 2015 landed them an appearance before the Varberg District Court last January, where they were accused of making more than $1.5m in advertising revenue from copyright infringement.

The prosecutor described the site as being like “organized crime”. The then 26-year-old was described as the main player behind the site, with the then 23-year-old playing a much smaller role. The latter received an estimated $4,000 of the proceeds, the former was said to have pocketed more than $1.5m.

As expected, things didn’t go well. The older man, who was described as leading a luxury lifestyle, was convicted of 1,044 breaches of copyright law and serious money laundering offenses. He was sentenced to three years in prison and ordered to forfeit 14,000,000 SEK (US$1.68m).

Due to his minimal role, the younger man was given probation and ordered to complete 120 hours of community service. Speaking with TorrentFreak at the time, the 23-year-old said he was relieved at the relatively light sentence but noted it may not be over yet.

Indeed, as is often the case with these complex copyright prosecutions, the matter found itself at the Court of Appeal of Western Sweden. On Wednesday its decision was handed down and it’s bad news for both men.

“The Court of Appeal, like the District Court, judges the men for breach of copyright law,” the Court said in a statement.

“They are judged to have made more than 1,400 copyrighted films available through the Swefilmer streaming service, without obtaining permission from copyright holders. One of the men is also convicted of gross money laundering because he received revenues from the criminal activity.”

In respect of the now 27-year-old, the Court decided to hand down a much more severe sentence, extending the term of imprisonment from three to four years.

There was some better news in respect of the amount he has to forfeit to the state, however. The District Court set this amount at 14,000,000 SEK (US$1.68m) but the Court of Appeal reduced it to ‘just’ 4,000,000 SEK (US$482,280).

The younger man’s conditional sentence was upheld but community service was replaced with a fine of 10,000 SEK (US$1,200). Also, along with his accomplice, he must now pay significant damages to a Norwegian plaintiff in the case.

“Both men will jointly pay damages of NOK 2.2 million (US$283,000) together with interest to Nordisk Film A/S for copyright infringement in one of the films posted on the website,” the Court writes in its decision.

But even now, the matter may not be closed. Ansgar Firsching, the older man’s lawyer, told SVT that the case could go all the way to the Supreme Court.

“I have informed my client about the content of the judgment and it is highly likely that he will turn to the Supreme Court,” Firsching said.

It appears that the 27-year-old will argue that at the time of the alleged offenses, merely linking to copyrighted content was not a criminal offense but whether this approach will succeed is seriously up for debate.

While linking was previously considered by some to sit in a legal gray area, the District Court drew heavily on the GS Media ruling handed down by the European Court of Justice in September 2016.

In that case, the EU Court found that those who, in a non-commercial context, post links to content without knowing it is infringing usually do not commit infringement. The Swefilmer case doesn’t immediately appear to fit either of those parameters.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

New – Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR)

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/

The Amazon DynamoDB team is back with another useful feature hot on the heels of encryption at rest. At AWS re:Invent 2017 we launched global tables and on-demand backup and restore of your DynamoDB tables and today we’re launching continuous backups with point-in-time recovery (PITR).

You can enable continuous backups with a single click in the AWS Management Console, a simple API call, or with the AWS Command Line Interface (CLI). DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days. We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict. You can still keep your on-demand backups for as long as needed for archival purposes but PITR works as additional insurance against accidental loss of data. Let’s see how this works.

Continuous Backup

To enable this feature in the console we navigate to our table and select the Backups tab. From there simply click Enable to turn on the feature. I could also turn on continuous backups via the UpdateContinuousBackups API call.

After continuous backups are enabled, we should be able to see an Earliest restore date and a Latest restore date.
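
The restore window works like this: you can pick any single second between the time PITR was enabled (capped at 35 days in the past) and now. Here is a minimal sketch of that window logic; `restorable` is a hypothetical helper and the dates are made up, not anything from the SDK:

```python
from datetime import datetime, timedelta, timezone

PITR_WINDOW_DAYS = 35  # DynamoDB keeps at most 35 days of continuous backups

def restorable(requested, enabled_at, now):
    """True if `requested` falls inside the PITR restore window."""
    earliest = max(enabled_at, now - timedelta(days=PITR_WINDOW_DAYS))
    return earliest <= requested <= now

now = datetime(2018, 3, 26, tzinfo=timezone.utc)
enabled = now - timedelta(days=90)  # feature was enabled long ago

print(restorable(now - timedelta(days=10), enabled, now))  # True: within 35 days
print(restorable(now - timedelta(days=40), enabled, now))  # False: past the window
```

The real API calls are `UpdateContinuousBackups` and `RestoreTableToPointInTime`; the helper above only illustrates the window those calls enforce.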

Let’s imagine a scenario where I have a lot of old user profiles that I want to delete.

I really only want to send service updates to our active users based on their last_update date. I decided to write a quick Python script to delete all the users that haven’t used my service in a while.

import boto3
table = boto3.resource("dynamodb").Table("VerySuperImportantTable")
items = table.scan(
    FilterExpression="last_update >= :date",
    ExpressionAttributeValues={":date": "2014-01-01T00:00:00"},
    ProjectionExpression="ImportantId"
)['Items']
print("Deleting {} Items! Dangerous.".format(len(items)))
with table.batch_writer() as batch:
    for item in items:
        batch.delete_item(Key=item)

Great! This should delete all those pesky non-users of my service who haven’t logged in since 2013. So... wait. CTRL+C CTRL+C CTRL+C CTRL+C (interrupting the currently executing command).

Yikes! Do you see where I went wrong? I’ve just deleted my most important users! Oh, no! Where I had a greater-than sign, I meant to put a less-than! Quick, before Jeff Barr can see, I’m going to restore the table. (I probably could have prevented that typo with Boto 3’s handy DynamoDB conditions: Attr("last_update").lt("2014-01-01T00:00:00"))
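
The flipped comparison is easier to see in isolation. Here is a standalone sketch in plain Python with made-up profile data (ISO-8601 timestamps compare chronologically as strings), contrasting the buggy predicate with the intended one:

```python
CUTOFF = "2014-01-01T00:00:00"

users = [  # hypothetical profiles, not real table data
    {"ImportantId": "alice", "last_update": "2018-03-01T09:00:00"},
    {"ImportantId": "bob",   "last_update": "2013-06-15T12:00:00"},
]

# Buggy predicate from the script above: >= matches the *active* users.
buggy = [u["ImportantId"] for u in users if u["last_update"] >= CUTOFF]

# Intended predicate: < matches users idle since before the cutoff.
intended = [u["ImportantId"] for u in users if u["last_update"] < CUTOFF]

print(buggy)     # ['alice'] -- the important, active users!
print(intended)  # ['bob']
```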

Restoring

Luckily for me, restoring a table is easy. In the console I’ll navigate to the Backups tab for my table and click Restore to point-in-time.

I’ll specify the time (a few seconds before I started my deleting spree) and a name for the table I’m restoring to.

For a relatively small and evenly distributed table like mine, the restore is quite fast.

The time it takes to restore a table varies based on multiple factors, and restore times are not necessarily correlated with the size of the table. If your dataset is evenly distributed across your primary keys, you’ll be able to take advantage of parallelization, which will speed up your restores.

Learn More & Try It Yourself
There’s plenty more to learn about this new feature in the documentation here.

Pricing for continuous backups varies by region and is based on the current size of the table and all indexes.

A few things to note:

  • PITR works with encrypted tables.
  • If you disable PITR and later reenable it, you reset the start time from which you can recover.
  • Just like on-demand backups, there are no performance or availability impacts to enabling this feature.
  • Stream settings, Time To Live settings, PITR settings, tags, Amazon CloudWatch alarms, and auto scaling policies are not copied to the restored table.
  • Jeff, it turns out, knew I restored the table all along because every PITR API call is recorded in AWS CloudTrail.

Let us know how you’re going to use continuous backups and PITR on Twitter and in the comments.
Randall

Conundrum

Post Syndicated from Eevee original https://eev.ee/blog/2018/03/20/conundrum/

Here’s a problem I’m having. Or, rather, a problem I’m solving, but so slowly that I wonder if I’m going about it very inefficiently.

I intended to just make a huge image out of this and tweet it, but it takes so much text to explain that I might as well put it on my internet website.

The setup

I want to do pathfinding through a Doom map. The ultimate goal is to be able to automatically determine the path the player needs to take to reach the exit — what switches to hit in what order, what keys to get, etc.

Doom maps are 2D planes cut into arbitrary shapes. Everything outside a shape is the void, which we don’t care about. Here are some shapes.

The shapes are defined implicitly by their edges. All of the edges touching the red area, for example, say that they’re red on one side.

That’s very nice, because it means I don’t have to do any geometry to detect which areas touch each other. I can tell at a glance that the red and blue areas touch, because the line between them says it’s red on one side and blue on the other.

Unfortunately, this doesn’t seem to be all that useful. The player can’t necessarily move from the red area to the blue area, because there’s a skinny bottleneck. If the yellow area were a raised platform, the player couldn’t fit through the gap. Worse, if there’s a switch somewhere that lowers that platform, then the gap is conditionally passable.

I thought this would be uncommon enough that I could get started only looking at neighbors and do actual geometry later, but that “conditionally passable” pattern shows up all the time in the form of locked “bars” that let you peek between or around them. So I might as well just do the dang geometry.


The player is a 32×32 square and always axis-aligned (i.e., the hitbox doesn’t actually rotate). That’s very convenient, because it means I can “dilate the world” — expand all the walls by 16 units in both directions, while shrinking the player to a single point. That expansion eliminates narrow gaps and leaves a map of everywhere the player’s center is allowed to be. Allegedly this is how Quake did collision detection — but in 3D! How hard can it be in 2D?
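
For axis-aligned walls the trick is easy to demonstrate: growing a box by 16 units on each side gives exactly the region where the 32×32 player’s center would collide with it. A toy sketch under that axis-aligned assumption (real Doom walls sit at arbitrary angles, so the dilated outlines are more complicated than boxes):

```python
HALF = 16  # half of the player's 32x32 hitbox

def dilate(box, r=HALF):
    """Expand an axis-aligned box (x1, y1, x2, y2) by r on every side."""
    x1, y1, x2, y2 = box
    return (x1 - r, y1 - r, x2 + r, y2 + r)

def contains(box, p):
    """Point-in-box test: is the (shrunk-to-a-point) player inside?"""
    x1, y1, x2, y2 = box
    return x1 <= p[0] <= x2 and y1 <= p[1] <= y2

def player_overlaps(box, center, r=HALF):
    """The equivalent direct test: does the full 32x32 hitbox touch the box?"""
    x1, y1, x2, y2 = box
    cx, cy = center
    return cx + r >= x1 and cx - r <= x2 and cy + r >= y1 and cy - r <= y2

# The two tests agree everywhere: dilating the world = shrinking the player.
wall = (0, 0, 64, 64)
for center in [(70, 32), (90, 32), (-10, -10), (32, 32)]:
    assert player_overlaps(wall, center) == contains(dilate(wall), center)
```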

The plan, then, is to do this:

This creates a bit of an unholy mess. (I could avoid some of the overlap by being clever at points where exactly two lines touch, but I have to deal with a ton of overlap anyway so I’m not sure if that buys anything.)

The gray outlines are dilations of inner walls, where both sides touch a shape. The black outlines are dilations of outer walls, touching the void on one side. This map tells me that the player’s center can never go within 16 units of an outer wall, which checks out — their hitbox would get in the way! So I can delete all that stuff completely.

Consider that bottom-left outline, where red and yellow touch horizontally. If the player is in the red area, they can only enter that outlined part if they’re also allowed to be in the yellow area. Once they’re inside it, though, they can move around freely. I’ll color that piece orange, and similarly blend colors for the other outlines. (A small sliver at the top requires access to all three areas, so I colored it gray, because I can’t be bothered to figure out how to do a stripe pattern in Inkscape.)

This is the final map, and it’s easy to traverse because it works like a graph! Each contiguous region is a node, and each border is an edge. Some of the edges are one-way (falling off a ledge) or conditional (walking through a door), but the player can move freely within a region, so I don’t need to care about world geometry any more.
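
The resulting graph can be walked with an ordinary BFS that skips edges whose condition isn’t met; a one-way edge is simply an edge with no reverse counterpart. A toy sketch with a hypothetical region graph (note the inventory is fixed up front; real pathfinding where keys are picked up en route is harder than this):

```python
from collections import deque

# Hypothetical region graph: edges are (destination, requirement), where
# requirement is None (free passage) or an access the player must hold.
# "ledge" -> "red" has no reverse edge: falling off is one-way.
graph = {
    "red":    [("orange", "yellow-access")],
    "orange": [("red", None), ("yellow", None)],
    "yellow": [("orange", None)],
    "ledge":  [("red", None)],
}

def reachable(start, items):
    """BFS over regions, taking only edges whose requirement is satisfied."""
    seen, queue = {start}, deque([start])
    while queue:
        here = queue.popleft()
        for dest, need in graph.get(here, []):
            if (need is None or need in items) and dest not in seen:
                seen.add(dest)
                queue.append(dest)
    return seen

print(reachable("red", set()))              # stuck in red without yellow access
print(reachable("red", {"yellow-access"}))  # everything except the ledge
```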

The problem

I’m having a hell of a time doing this mass-intersection of a big pile of shapes.

I’m writing this in Rust, and I would very very very strongly prefer not to wrap a C library (or, god forbid, a C++ library), because that will considerably complicate actually releasing this dang software. Unfortunately, that also limits my options rather a lot.

I was referred to a paper (A simple algorithm for Boolean operations on polygons, Martínez et al, 2013) that describes doing a Boolean operation (union, intersection, difference, xor) on two shapes, and works even with self-intersections and holes and whatnot.

I spent an inordinate amount of time porting its reference implementation from very bad C++ to moderately bad Rust, and I extended it to work with an arbitrary number of polygons and to spit out all resulting shapes. It has been a very bumpy ride, and I keep hitting walls — the latest is that it panics when intersecting everything results in two distinct but exactly coincident edges, which obviously happens a lot with this approach.

So the question is: is there some better way to do this that I’m overlooking, or should I just keep fiddling with this algorithm and hope I come out the other side with something that works?


Bear in mind, the input shapes are not necessarily convex, and quite frequently aren’t. Also, they can have holes, and quite frequently do. That rules out most common algorithms. It’s probably possible to triangulate everything, but I’m a little wary of cutting the map into even more microscopic shards; feel free to convince me otherwise.

Also, the map format technically allows absolutely any arbitrary combination of lines, so all of these are possible:

It would be nice to handle these gracefully somehow, or at least not crash on them. They’re usually total nonsense as far as the game is concerned, though that middle one does show up in the original stock maps a couple of times.

Another common trick is that lines might be part of the same shape on both sides:

The left example suggests that such a line is redundant and can simply be ignored without changing anything. The right example shows why this is a problem.

A common trick in vanilla Doom is the so-called self-referencing sector. Here, the edges of the inner yellow square all claim to be yellow — on both sides. The outer edges all claim to be blue only on the inside, as normal. The yellow square therefore doesn’t neighbor the blue square at all, because there are no edges that are yellow on one side and blue on the other. The effect in-game is that the yellow area is invisible, but still solid, so it can be used as an invisible bridge or invisible pit for various effects.
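
The neighbor-detection-from-labels idea, and why the self-referencing sector defeats it, fits in a few lines. A sketch with hypothetical side labels, where `None` marks the void:

```python
# Each line records which area (if any) claims each of its two sides.
# Adjacency falls out of the labels alone; no geometry needed.
def neighbors(lines):
    """Set of area pairs that share at least one two-sided line."""
    return {frozenset((a, b)) for a, b in lines if a and b and a != b}

normal = [("blue", None), ("red", "blue")]       # red and blue touch
trick  = [("blue", None), ("yellow", "yellow")]  # inner square: yellow on BOTH sides

print(neighbors(normal))  # {frozenset({'red', 'blue'})}
print(neighbors(trick))   # empty: yellow never registers as bordering blue
```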

This does raise the question of exactly how Doom itself handles all these edge cases. Vanilla maps are preprocessed by a node builder and split into subsectors, which are all convex polygons. So for any given weird trick or broken geometry, the answer to “how does this behave” is: however the node builder deals with it.

Subsectors are built right into vanilla maps, so I could use those. The drawback is that they’re optional for maps targeting ZDoom (and maybe other ports as well?), because ZDoom has its own internal node builder. Also, relying on built nodes in general would make this code less useful for map editing, or generating, or whatever.

ZDoom’s node builder is open source, so I could bake it in? Or port it to Rust? (It’s only, ah, ten times bigger than the shape algorithm I ported.) It’d be interesting to have a fairly-correct reflection of how the game sees broken geometry, which is something no map editor really tries to do. Is it fast enough? Running it on the largest map I know to exist (MAP14 of Sunder) takes 1.4 seconds, which seems like a long time, but also that’s from scratch, and maybe it could be adapted to work incrementally…? Christ.

I’m not sure I have the time to dedicate to flesh this out beyond a proof of concept anyway, so maybe this is all moot. But all the more reason to avoid spending a lot of time on dead ends.

Founder of Fan-Made Subtitle Site Loses Copyright Infringement Appeal

Post Syndicated from Andy original https://torrentfreak.com/founder-of-fan-made-subtitle-site-lose-copyright-infringement-appeal-180318/

For millions of people around the world, subtitles are the only way to enjoy media in languages other than that of the original production. For the deaf and hard of hearing, they are absolutely essential.

Movie and TV show companies tend to be quite good at providing subtitles eventually but, in line with other restrictive practices associated with their industry, that can often mean a long wait for the consumer, particularly in overseas territories.

For this reason, fan-made subtitles have become something of a cottage industry in recent years. Where companies fail to provide subtitles quickly enough, fans step in and create them by hand. This has led to the rise of a number of subtitling platforms, including the now widely recognized Undertexter.se in Sweden.

The platform had its roots back in 2003 but first hit the headlines in 2013 when Swedish police caused an uproar by raiding the site and seizing its servers.

“The people who work on the site don’t consider their own interpretation of dialog to be something illegal, especially when we’re handing out these interpretations for free,” site founder Eugen Archy said at the time.

Vowing to never give up in the face of pressure from the authorities, anti-piracy outfit Rättighetsalliansen (Rights Alliance), and companies including Nordisk Film, Paramount, Universal, Sony and Warner, Archy said that the battle over what began as a high school project would continue.

“No Hollywood, you played the wrong card here. We will never give up, we live in a free country and Swedish people have every right to publish their own interpretations of a movie or TV show,” he said.

It took four more years but in 2017 the Undertexter founder was prosecuted for distributing copyright-infringing subtitles while facing a potential prison sentence.

Things didn’t go well and last September the Attunda District Court found him guilty and sentenced the then 32-year-old operator to probation. In addition, he was told to pay 217,000 Swedish krona ($26,400) to be taken from advertising and donation revenues collected through the site.

Eugen Archy took the case to appeal, arguing that the Svea Hovrätt (Svea Court of Appeal) should acquit him of all the charges and dismiss or at least reduce the amount he was ordered to pay by the lower court. Needless to say, this was challenged by the prosecution.

On appeal, Archy agreed that he was the person behind Undertexter but disputed that the subtitle files uploaded to his site infringed on the plaintiffs’ copyrights, arguing they were creative works in their own right.

While to an extent that may have been the case, the Court found that the translations themselves depended on the rights connected to the original work, which were entirely held by the relevant copyright holders. While paraphrasing and parody might be allowed, pure translations are completely covered by the rights in the original and cannot be seen as new and independent works, the Court found.

The Svea Hovrätt also found that Archy acted intentionally, noting that in addition to administering the site and doing some translating work himself, it was “inconceivable” that he did not know that the subtitles made available related to copyrighted dialog found in movies.

In conclusion, the Court of Appeal upheld Archy’s copyright infringement conviction (pdf, Swedish) and sentenced him to probation, as previously determined by the Attunda District Court.

Last year, the legal status of user-created subtitles was also tested in the Netherlands. In response to local anti-piracy outfit BREIN forcing several subtitling groups into retreat, a group of fansubbers decided to fight back.

After raising their own funds, in 2016 the “Free Subtitles Foundation” (Stichting Laat Ondertitels Vrij – SLOV) took the decision to sue BREIN with the hope of obtaining a favorable legal ruling.

In 2017 it all fell apart when the Amsterdam District Court handed down its decision and sided with BREIN on each count.

The Court found that subtitles can only be created and distributed after permission has been obtained from copyright holders. Doing so outside these parameters amounts to copyright infringement.


Dolby Labs Sues Adobe For Copyright Infringement

Post Syndicated from Andy original https://torrentfreak.com/dolby-labs-sues-adobe-for-copyright-infringement-180314/

Adobe has some of the most recognized software products on the market today, including Photoshop which has become a household name.

While the company has been subjected to more than its fair share of piracy over the years, a new lawsuit accuses the software giant itself of infringement.

Dolby Laboratories is best known as a company specializing in noise reduction and audio encoding and compression technologies. Its reversed double ‘D’ logo is widely recognized after appearing on millions of home hi-fi systems and film end credits.

In a complaint filed this week at a federal court in California, Dolby Labs alleges that after supplying its products to Adobe for 15 years, the latter has failed to live up to its licensing obligations and is guilty of copyright infringement and breach of contract.

“Between 2002 and 2017, Adobe designed and sold its audio-video content creation and editing software with Dolby’s industry-leading audio processing technologies,” Dolby’s complaint reads.

“The basic terms of Adobe’s licenses for products containing Dolby technologies are clear; when Adobe granted its customer a license to any Adobe product that contained Dolby technology, Adobe was contractually obligated to report the sale to Dolby and pay the agreed-upon royalty.”

Dolby says that Adobe promised it wouldn’t sell any of its products (such as Audition, After Effects, Encore, Lightroom, and Premiere Pro) outside the scope of its licenses with Dolby. Those licenses included clauses granting Dolby the right to inspect Adobe’s records through a third-party audit, in order to verify the accuracy of Adobe’s sales reporting and the associated payment of royalties.

Over the past several years, however, things didn’t go to plan. The lawsuit claims that when Dolby tried to audit Adobe’s books, Adobe refused to “engage in even basic auditing and information sharing practices,” a rather ironic situation given the demands that Adobe places on its own licensees.

Dolby’s assessment is that Adobe spent years withholding this information in an effort to hide the full scale of its non-compliance.

“The limited information that Dolby has reviewed to-date demonstrates that Adobe included Dolby technologies in numerous Adobe software products and collections of products, but refused to report each sale or pay the agreed-upon royalties owed to Dolby,” the lawsuit claims.

Due to the lack of information in Dolby’s possession, the company says it cannot determine the full scope of Adobe’s infringement. However, Dolby accuses Adobe of multiple breaches including bundling licensed products together but only reporting one sale, selling multiple products to one customer but only paying a single license, failing to pay licenses on product upgrades, and even selling products containing Dolby technology without paying a license at all.

Dolby entered into licensing agreements with Adobe in 2003, 2012 and 2013, with each agreement detailing payment of royalties by Adobe to Dolby for each product licensed to Adobe’s customers containing Dolby technology. In the early days when the relationship between the companies first began, Adobe sold either a physical product in “shrink-wrap” form or downloads from its website, a position which made reporting very easy.

In late 2011, however, Adobe began its transition to offering its Creative Cloud (SaaS model) under which customers purchase a subscription to access Adobe software, some of which contains Dolby technology. Depending on how much the customer pays, users can select up to thirty Adobe products. At this point, things appear to have become much more complex.

On January 15, 2015, Dolby tried to inspect Adobe’s books for the period 2012-2014 via a third-party auditing firm. But, according to Dolby, over the next three years “Adobe employed various tactics to frustrate Dolby’s right to audit Adobe’s inclusion of Dolby Technologies in Adobe’s products.”

Dolby points out that under Adobe’s own licensing conditions, businesses must allow Adobe’s auditors to inspect their records on seven days’ notice to confirm they are not in breach of Adobe’s licensing terms. Any discovered shortfall in licensing must then be paid for, at a rate higher than the original license. This, Dolby says, shows that Adobe is clearly aware of why and how auditing takes place.

“After more than three years of attempting to audit Adobe’s Sales of products containing Dolby Technologies, Dolby still has not received the information required to complete an audit for the full time period,” Dolby explains.

But during this period, Adobe didn’t stand still. According to Dolby, Adobe tried to obtain new licensing from Dolby at a lower price. Dolby stood its ground and insisted on an audit first but despite an official demand, Adobe didn’t provide the complete set of books and records requested.

Eventually, Dolby concluded that Adobe had “no intention to fully comply with its audit obligations” so called in its lawyers to deal with the matter.

“Adobe’s direct and induced infringements of Dolby Licensing’s copyrights in the Asserted Dolby Works are and have been knowing, deliberate, and willful. By its unauthorized copying, use, and distribution of the Asserted Dolby Works and the Adobe Infringing Products, Adobe has violated Dolby Licensing’s exclusive rights…,” the lawsuit reads.

Noting that Adobe has profited and gained a commercial advantage as a result of its alleged infringement, Dolby demands injunctive relief restraining the company from any further breaches in violation of US copyright law.

“Dolby now brings this action to protect its intellectual property, maintain fairness across its licensing partnerships, and to fund the next generations of technology that empower the creative community which Dolby serves,” the company concludes.

Dolby’s full complaint can be found here (pdf).


Dutch Continue to Curb Illegal Downloading But What About Streaming?

Post Syndicated from Andy original https://torrentfreak.com/dutch-continue-to-curb-illegal-downloading-but-what-about-streaming-180222/

After many years of downloading content with impunity, 2014 brought a culture shock to the Dutch.

Citizens were previously allowed to obtain content for their own use due to a levy on blank media that compensated rightsholders. However, the European Court of Justice found that system to be illegal and the government quickly moved to ban downloading from unauthorized sources.

In the four years that have passed since the ban, the downloading landscape has undergone change. That’s according to a study published by the Consumer Insights panel at Telecompaper which found that while 41% of respondents downloaded movies, TV shows, music and games from unauthorized sources in 2013, the figure had plunged to 27% at the end of 2016. There was a further drop to 24% by the end of 2017.

Of the people who continue to download illegally, men are overrepresented, the study found. While 27% of men obtained media for free during the last year to October 2017, only 21% of women did likewise.

While as many as 150 million people still use P2P technologies such as BitTorrent worldwide, there is a general decline in usage and this is reflected in the report.

In 2013, 18% of Dutch respondents used torrent-like systems to download, a figure that had fallen to 8% in 2016 and 6% last year. Again, male participants were overrepresented, outnumbering women by two to one. However, people appear to be visiting P2P networks less.

“The study showed that people who reported using P2P to download content, have done so on average 37 times a year [to October 2017]. In January of 2017 it was significantly higher, 61 times,” the study notes. P2P usage in November 2015 was rated at 98 instances per year.

Perhaps surprisingly, one of the oldest methods of downloading content has maintained its userbase in more recent years. Usenet, otherwise known as the newsgroups, accounted for 9% of downloaders in 2013 but after falling to around 6% of downloaders in 2016, that figure remained unchanged in 2017. Almost five times more men used newsgroups than women.

At the same time as showing a steady trend in terms of users, instances of newsgroup downloading are reportedly up in the latest count. In November 2015, people used the system an average of 98 times per year but in January 2017 that had fallen to 66 times. The latest figures find an average use of 68 times per year.

Drilling down into more obscure systems, 2% of respondents told Telecompaper that they’d used an FTP server during the past year, a method that was entirely dominated by men.

While the Dutch downloading ban in 2014 may have played some part in changing perceptions, the increased availability of legal offers cannot be ignored. Films and TV shows are now widely available on services such as Netflix and Amazon, while music is strongly represented via Spotify, Apple, Deezer and similar platforms.

Indeed, 12% of respondents said they are now downloading less illegally because it’s easier to obtain paid content, that’s versus 11% at the start of 2017 and just 3% in 2013. Interestingly, 14% of respondents this time around said their illegal downloads are down because they have more restrictions on their time.

Another interesting reason given for downloading less is that pirate content is becoming harder to find. In 2013, just 4% cited this as a cause for reduction yet in 2017, this had jumped to 8% of respondents, with blocked sites proving a stumbling block for some users.

On the other hand, 3% of respondents said that since content had become easier to find, they are now downloading more. However, that figure is down from 13% in November 2013 and 6% in January 2017.

But with legal streaming certainly making its mark in the Netherlands, the illegal streaming phenomenon isn’t directly addressed in the report. It is likely that a considerable number of citizens are now using this method to obtain their content fix in a way that’s not as easily trackable as torrent-like systems.

Furthermore, given the plans of local film distributor Dutch FilmWorks to chase and demand cash settlements from BitTorrent users, it’s likely that traffic to streaming sites will only increase in the months to come, at least for those looking to consume TV shows and movies.


[$] Open-source trusted computing for IoT

Post Syndicated from jake original https://lwn.net/Articles/747564/rss

At this year’s FOSDEM in Brussels, Jan Tobias Mühlberg gave a talk on the latest work on Sancus, a project that was originally presented at the USENIX Security Symposium in 2013. The project is a fully open-source hardware platform to support “trusted computing” and other security functionality. It is designed to be used for internet of things (IoT) devices, automotive applications, critical infrastructure, and other embedded devices where trusted code is expected to be run.