Tag Archives: chrome

QA Mastermind Needed!

Post Syndicated from Yev original https://www.backblaze.com/blog/qa-mastermind-needed/

Backblaze’s expansion continues and we’re in need of a QA mastermind to help us make sure our GUIs are prim and proper! If you want to join a fast-paced team and help us bring cloud storage to the world, read on and apply!

Here’s what you’ll be responsible for:

  • Identify and record defects, document reproducible steps thoroughly, and track defects to resolution.
  • Perform thorough regression testing when defects are resolved.
  • Interact with developers to resolve defects, clarify functionality, and resolve usability issues.
  • Write scripts and design test scenarios.
  • Contribute to continual QA process improvement efforts.
  • Take initiative, e.g., suggesting and completing test plans without being told to do so.
  • Test on Mac and Windows desktops running multiple browsers, including Chrome, Edge, Safari, and Opera.
  • Test some native applications on Mac, Windows, Android, and iOS (iPhone).
  • Experience with performance and security testing is a plus.

Please be proficient in:

  • Enough familiarity with HTML, JSP, and the Java programming language to read engineers’ source code submissions and gauge which areas of the product are changing and need additional testing.
  • Cross-platform experience (Linux/Macintosh/Windows) — you don’t need to be an expert on all three, but you can’t be afraid of any.
  • Familiarity with test automation software (such as Selenium or LeanFT) is a plus.

Required for all Backblaze Employees:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small fast paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Occasional visits to Backblaze datacenters are necessary.
  • Rigorous adherence to best practices.
  • Relentless attention to detail.
  • Excellent interpersonal skills and good oral/written communication.
  • Excellent troubleshooting and problem solving skills.
  • OK with pets in office.

This position is located in San Mateo, California. Regular attendance in the office is expected. Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If this sounds like you — follow these steps:

  • Send an email to jobscontact@backblaze.com with the position in the subject line.
  • Include your resume.
  • Tell us a bit about your QA experience.

The post QA Mastermind Needed! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Recipe for Cookies

Post Syndicated from Илия Горанов original http://9ini.babailiica.com/cookies/

This post is not about confectionery but about information technology. The Bulgarian term „бисквитки“ is a direct loan translation of the English word cookies (even though the literal translation would be „курабийки“ rather than „бисквитки“).

So what are cookies actually for? Cookies are portions of structured information containing various parameters. They are created by the servers that serve web pages and are carried in the service part of the HTTP protocol (the so-called HTTP headers); HTTP is the transfer protocol browsers use to exchange information with servers. Cookies were added to the HTTP protocol specification in version 1.0, in the early 1990s. The Internet was far less developed then than it is now, which explains some of HTTP’s peculiarities. The protocol is based on requests (from the client) and responses (from the server), with each request/response pair carried over a separate connection (socket) between the client and the server. This scheme is extremely convenient because it does not require a permanent, stable Internet connection: the connection is actually used only for a brief moment. Because of this peculiarity, however, HTTP is often called a stateless protocol (one that does not preserve state): the server has no way of knowing that a series of consecutive requests was sent by one and the same client. Contrast this with IRC, SMTP, FTP and other protocols created in the same years, which open a single connection and send and receive data in both directions; with those protocols, communication begins with a handshake or authentication between the parties, after which both sides know that, as long as the connection stays open, they are talking to a specific peer.

To overcome this shortcoming of the protocol, the cookie mechanism was introduced in version 1.0, the successor to version 0.9 (the first version to see real-world use). The technology was named cookies after the Brothers Grimm tale of Hansel and Gretel, who mark their path through the dark forest by sprinkling crumbs. Most Bulgarian translations of the tale use bread crumbs, a term that has found a place of its own in IT and the web; in reality the story is a German folk tale with many different versions over the years. The analogy is obvious: cookies make it possible to trace a user’s path through the dark forest of the stateless HTTP protocol.

How do cookies work? Cookies are created by servers and sent to users. With every subsequent request to the same server, the user must send back a copy of the cookies received from it. This gives the server a mechanism for following the user along the way, i.e. for knowing that one and the same user has made a series of multiple, otherwise unrelated, requests.
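In header terms, the exchange is tiny. Here is a minimal sketch using Python’s standard http.cookies module (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

# Server side: create a cookie and emit it as a Set-Cookie response header.
cookie = SimpleCookie()
cookie["session"] = "abc123"   # illustrative name and value
print(cookie.output())         # -> Set-Cookie: session=abc123

# Client side: the browser stores the cookie and, on every later request to
# the same server, sends it back in a Cookie request header:
#
#   Cookie: session=abc123
```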

Imagine you send a first request to a server and it replies that it wants you to authenticate, by providing a username and password. You send them, and the server verifies that it is really you. But because of how HTTP works, once you have sent the name and password, the connection between you and the server is closed. Later you send a new request. The server has no way of knowing that it is you again, because the new request arrives over a new connection.

Now imagine the same scheme with cookie technology added. You send a request to the server; it replies that it wants a name and password. You send them, and the server returns a cookie in which it records that the owner of this cookie has already provided a name and password. The connection closes. Later you send a new request, and since it goes to the same server, the client must send back a copy of the cookies received from it. The new request therefore also carries the cookie which says that you earlier provided a valid name and password.

So, cookies are created by servers, with the data sent by the web server along with the response to a request for a resource (for example a web page, text, an image or another file). The cookie travels as part of the protocol’s service information. When a client (the client software, i.e. the browser) receives a response containing a cookie, it must process it: if the client already has such a cookie, it should update its information about it; if it does not, it should create it; if the cookie has expired, it should destroy it; and so on.

It is often said that cookies are small text files stored on the user’s computer. That is not always true. Cookies were separate text files in early versions of some browsers (for example Internet Explorer and Netscape Navigator), but most modern browsers store them differently. Modern versions of Mozilla Firefox and Google Chrome, for example, store all cookies in a single file, an SQLite database. The database approach suits cookies well: they are structured information, convenient to keep in a database, and access to it is considerably more efficient. Microsoft Edge, however, still stores cookies as text files in the AppData directory.

What parameters do the cookies sent by servers contain? Each cookie can carry a name, a value, a domain, an address (path), an expiry time, and some security parameters (whether it is valid only over https, and whether it is accessible only to the transfer protocol, i.e. not to JavaScript and other application layers on the client). The name and value are mandatory for every cookie: they give the name of the variable in which the information will be stored and its value, the stored information itself.

The domain is the main part of the address of the website that sends the cookie, the part that identifies the machine (server) on the network. According to the technical specifications the cookie’s domain must match the domain of the server that sends it, but there are exceptions. The first is that when several levels of domains are used, cookies can be created for different parts of those levels. For example, the response to a request to the domain example.com may create a cookie valid only for example.com, or a cookie valid for that domain and all of its subdomains; a cookie valid for all subdomains is written with a leading dot, as in .example.com. The second exception applies when the cookie is created not by the server but by an application layer at the client (for example by JavaScript). A js file may be loaded from one domain into an HTML page from another domain: the server delivering the script can send a cookie for the domain the js file lives on, while the script itself, while loaded in a page from the other domain, can create (and read) cookies for the domain of the HTML page.

The address (or path) is the remaining part of the URL. By default cookies are valid for the path / (that is, for the root of the website), which means the cookie is valid for every possible address on the server. It is possible, though, to explicitly specify a particular address for which the cookie is valid. Conceptually, addresses represent a path in a hierarchical file structure on the server (although they do not have to), so cookie paths express the place of their creation in a hierarchical file system. For example, if a cookie’s path is /dir/, it is valid in the directory named dir and in all of its subdirectories.

For a slightly more realistic example: if we use cookies to store authentication information for the admin panel of a website located in the directory /admin/, we can declare the cookie valid only for the address /admin/. That way the cookies created by the server for the needs of the admin panel will not be sent with requests for other resources on the same server.

The expiry time determines how long the user should keep the cookie and return it with every subsequent request to the same server and address (path). When a server wants to delete a cookie held by the user, it sends that cookie with an expiry time in the past, causing it to expire and be deleted automatically on the user’s side.

Cookies also have parameters that look after the security of the transmitted data. These are two boolean parameters: one determines whether the cookie is accessible (for both reading and writing) only to the HTTP protocol or also to the application layer at the client (for example JavaScript); the other determines whether the cookie is sent over all protocols or only over https (secure HTTP).
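Here is a hedged sketch of how all those parameters look when set with Python’s standard http.cookies module (the names, values, and lifetimes are illustrative assumptions):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["admin_session"] = "d41d8cd98f"              # illustrative value
cookie["admin_session"]["domain"] = ".example.com"  # valid for all subdomains
cookie["admin_session"]["path"] = "/admin/"         # only sent for /admin/ and below
cookie["admin_session"]["max-age"] = 1800           # expires after 30 minutes
cookie["admin_session"]["secure"] = True            # only sent over https
cookie["admin_session"]["httponly"] = True          # not readable from JavaScript
print(cookie.output())

# Deleting a cookie: resend it with an expiry date in the past.
gone = SimpleCookie()
gone["admin_session"] = ""
gone["admin_session"]["path"] = "/admin/"
gone["admin_session"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"
print(gone.output())
```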

As you might guess, a single communication can involve more than one cookie. A server can create several cookies at once, and the client can return several cookies to the server. That is why, besides domains and addresses, cookies also have names.

Cookies also come with a number of limits. Most browsers do not allow more than 20 simultaneously valid cookies for one and the same domain; in Mozilla Firefox the limit is 50, and in Opera 30. The size of each individual cookie is limited as well: no more than 4KB (4096 bytes). The cookie specification RFC 2109 from 1997 states that a client may store up to 300 cookies, up to 20 per domain, each up to 4KB. The later specification RFC 6265 from 2011 raises the limits to 3000 cookies in total and 50 per domain. Bear in mind, though, that every cookie is sent by the client with every subsequent request to the server: if we hit the ceiling with 50 cookies of 4KB each, nearly 200KB will travel with every request in cookies alone, which can be a serious traffic burden even with the capabilities of modern Internet access.

Of course, the earlier example, in which we store the user’s successful authentication in a cookie, has many security caveats. First of all, it is a bad idea to store the user’s username and password in a cookie, since the cookie is saved on their computer: if a malicious person ever gains access to that computer, they can read the username and password from the cookies stored there. On the other hand, if we store only the username without the password, there is no protection against forged cookies: any malicious person can craft a fake cookie containing an arbitrary (someone else’s) username and impersonate that user before the server.

That is why the most common mechanism works like this: on each authentication with name and password, once the server has verified them, it creates a temporarily valid identifier and sends it as a cookie. Different technologies know this identifier under different names: one-time password (OTP), token, session, and so on. In this scheme the server stores information about the user for a limited time (the lifetime of the session). Each such record (often called a session) receives an identification number, which is sent to the user as a cookie. Since the user returns this identifier with every subsequent request, the server can restore the stored information about them and have it available on every following request. At the same time the information is kept on the server, not at the client, so a malicious user cannot modify or forge it. Moreover, the identifier is valid only for a limited period (say, 30 minutes): even if the cookie with the identifier remains on the user’s computer, the identifier recorded in it will be useless after half an hour. Last but not least, pressing the logout button deletes the user data stored on the server even before the 30 minutes have elapsed. That is why it is important always to use the logout buttons when leaving online systems.
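A minimal, illustrative sketch of that server-side session scheme follows; the in-memory store, the 30-minute lifetime, and verify_credentials are all assumptions for demonstration, not any particular framework’s API:

```python
import secrets
import time

SESSION_TTL = 30 * 60   # assumed session lifetime: 30 minutes
sessions = {}           # in-memory store: token -> (username, expiry timestamp)

def verify_credentials(username: str, password: str) -> bool:
    # Hypothetical stand-in for a real credential check.
    return password == "correct horse battery staple"

def login(username: str, password: str):
    """After verifying name and password, mint a token to send as a cookie."""
    if not verify_credentials(username, password):
        return None
    token = secrets.token_hex(16)   # unguessable session identifier
    sessions[token] = (username, time.time() + SESSION_TTL)
    return token

def lookup(token: str):
    """Restore the user for a token the client sent back in its Cookie header."""
    entry = sessions.get(token)
    if entry is None or entry[1] < time.time():   # unknown or expired token
        sessions.pop(token, None)
        return None
    return entry[0]

def logout(token: str) -> None:
    """The logout button deletes the server-side record before it expires."""
    sessions.pop(token, None)
```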

What else gets stored in cookies? Practically everything! Cookies are very often used to store a user’s individual settings. When the user changes a setting, the server sends them a cookie containing that setting, with a long expiry time. On every subsequent visit by the same user to the same site, the setting saved in the cookie is sent along with the request to the server. The server thus knows about the desired setting and applies it when returning the response. An example of such a setting is the number of records shown per page: change it once, and the chosen value can be saved in a cookie, so that with every subsequent request the server knows the setting and returns the right response.

Cookies hold other information too, for example the items in an online shop’s shopping cart, or statistics such as when you last visited a site, how many times you have visited it, which pages you viewed, and how long you stayed on each of them.

Does the European Union ban cookies? The Cookie Monster from Sesame Street would be quite upset to learn that the EU wants to restrict the use of cookies. In reality the truth is rather different, but the story is widely misreported. Let’s step out of the technological sphere and into the legal one for a bit. First of all, which are the relevant legal instruments? The one most commonly cited is European Directive 2009/136/EC of 25 November 2009. The truth is that this directive does not deal with cookies directly: it amends earlier directives, among them Directive 2002/22/EC of 7 March 2002 on universal service (of which Internet access is a part) and users’ rights, and, crucially for cookies, the ePrivacy Directive 2002/58/EC. The 2009 amendment to the latter reads as follows (Article 2(5), p. 30):

5. Article 5(3) shall be replaced by the following:

“3. Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service.”

The recitals of the same directive also mention:

(66) Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. The methods of providing information and offering the right to refuse should be as user-friendly as possible. Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Where it is technically possible and effective, in accordance with the relevant provisions of Directive 95/46/EC, the user’s consent to processing may be expressed by using the appropriate settings of a browser or other application. The enforcement of these requirements should be made more effective by way of enhanced powers granted to the relevant national authorities.

Both quotations direct our attention to one more document: Directive 95/46/EC of 24 October 1995, which deals with the protection of individuals with regard to the processing of personal data. It is hard to argue, of course, that a directive from the mid-1990s directly addresses the functioning of the Internet, which was still poorly spread at the time, while cookie technology, having appeared only a few years earlier, was still brand new and rarely used.

Before continuing with the analysis, we should note that, as far as protecting users from cookies is concerned, none of the cited directives has yet been transposed into Bulgarian national law (at least not in the e-Government Act, the Electronic Communications Act, or the Personal Data Protection Act). Thankfully, the EU’s directive mechanism is designed to make directives binding on all member states, whether or not national legislation exists in the given area.

Still, why does the EU want to restrict the use of cookies? Among the technology’s many applications listed so far (storing authentication, storing settings, tracking user behaviour, gathering statistics about the user, marketing analyses, storing shopping cart data and more, some of them overlapping or complementary), cookies can also be used for serious intrusion into users’ privacy, through various forms of tracking and analysis of users’ consumption and behaviour on the Internet, whether for targeted advertising or for other, even unlawful, ends.

Let us consider a more realistic example built on the technology already described. We have a simple informational website about cars; it has no special functions and provides no services. The site, however, uses Google Analytics (a free tool for collecting visitor statistics, provided by Google), and its owner, wanting to monetise the information gathered on his site at least to some minimal degree, has also enabled Google AdWords (a service for publishing advertising banners, provided by Google). We also have a user searching Google for information about repairing a flat tyre. The user finds the site mentioned above in Google, clicks the link, and lands on the site. The same user also has email at Gmail (the free email service provided by Google). As you will have noticed, one common denominator has been following us the entire way: Google. It is far from the only big player in this market; the example is simply the most accessible for a broad audience. In effect, Google simultaneously has access to the letters the user has sent and received, to what they have searched for, to which site they entered and what they read there (exactly which pages), to how long they stayed on that site, and also to information about every other site the same user has visited in the past, regardless of whether they visited from the same computer or from another one: it is enough that all the sites use Google Analytics for their statistics. If the user has a work computer and a home computer and has logged into their Gmail from both, Google Analytics can make the connection that the sites visited on the two computers, which otherwise have no link between them whatsoever, were visited by one and the same user. The user should not be surprised, then, if they go to a third place unconnected with the previous two (the home and work computers), log into their email, later visit some random site, and see an advert there for new car tyres.

Everything described so far can be tracked precisely through the cookie mechanism. It is no accident that cookies are called a state-preserving mechanism, created for “tracking the users of the protocol”. Tracking in the positive, purely technological sense of the word, of course; but still tracking, which in the hands of ill-intentioned parties can take on an entirely different scale and serve entirely different goals.

All of this can be (and is) combined with other data collected about users: IP geolocation, information about the Internet connection, the device used, the display size, the operating system, the software used, and much else that is handed over automatically as we browse the web.

Again, the technology behind cookies has protective mechanisms of its own; for example, cookies from one website cannot be read by another site. But the whole scheme breaks down because cookies are accessible to the client-side application layer (JavaScript), while at the same time millions of sites around the world take advantage of an otherwise free visitor-statistics service from one and the same provider. That provider is the connecting link in the whole scheme. Each site and each user individually holds no particularly useful information, but the connecting link holds all of it, and given the will (and believe me, for all these services to be free, the will has long since turned into a need), this accumulated information can be analysed, processed and used for any purpose whatsoever.

And since these activities can reach quite deep into people’s private lives, the European Union has taken the necessary measures to oblige providers of services on the Internet (the owners of websites) to inform users about what data is stored about them on their terminal devices (i.e. on the clients’ own computers) and for what purposes that data is used.

Several points are very important to clarify here. First, neither the use of cookies nor tracking as a process is banned; what is required is that users be informed about what is being done and for what purpose, and that the user explicitly consent to it. The other important detail is that cookies are not the only mechanism for storing information on terminal devices, and the European legislation is not limited specifically to cookies. Local Storage is a modern alternative to cookies: although it works differently and offers quite different capabilities, it too stores information on terminal devices, can likewise be used to track users, and can affect their rights regarding the processing of their personal data. In this sense the European directives cover every form of storing information on users’ devices, not only cookies. The directives also treat the storage of data for tracking users separately from the cookies needed for the technological functioning of systems on the Internet. A distinction is further drawn between third-party cookies and cookies set by the site owners themselves; in the example with the cookies left by Google Analytics, Google is the third party.

When the storing of data (cookies or otherwise) is done by third parties (parties other than the service provider and the end user), this does not lift the service provider’s obligation to inform users and ask for their consent. In short, if I have a site that uses Google Analytics, the obligation to inform the site’s users and request their consent remains mine (the site owner’s, the person providing the service), and not the third party’s (i.e. it is not Google’s obligation).

Another interesting point is that the directive accepts that the user’s consent may also be expressed through a setting in the browser or another application. Which brings us back to the technology. Some time ago there was P3P (the Platform for Privacy Preferences), a technology that started out promising but ended up implemented only in Internet Explorer; development of the specification was eventually discontinued by the W3C, one of the cited reasons being that it was relatively complex to implement. Today most browsers support Do Not Track (DNT), which is simply an HTTP header named DNT: if it is present in the client’s request with a value of 1, it indicates that the user does not consent to being tracked. Of course, the tracking addressed by DNT and the storing of and access to information on users’ terminal devices addressed by the European directives are not the same thing. You can write and read information on the client without tracking them, which would still violate the European directives, and you can track a client without writing or reading any data locally (for example via browser fingerprinting, with the data stored entirely on the server).
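As an illustration, here is a hedged sketch of reading the DNT header with Python’s standard library HTTP server (what the server does with the flag is an assumption, purely for demonstration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # DNT: 1 means the user does not consent to being tracked.
        tracking_allowed = self.headers.get("DNT") != "1"
        body = b"tracking enabled" if tracking_allowed else b"tracking disabled"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```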

Finally, let’s sum up:

  1. The European Union does not ban cookies;
  2. The European Union provides for measures to protect personal data, imposing rules that require users’ permission when data is written to and accessed on their terminal devices;
  3. The requirement for informed consent is not limited to cookies; it covers all existing and any future technologies that allow writing and reading information on users’ terminal devices;
  4. Users must be informed, and their consent requested, regardless of whether the data is written or accessed directly by the service provider or through the services of a third party. Put simply, if we use Google Analytics, it is we who must warn the user and ask for consent, not Google. In this case the service provider (the site itself) acts as a personal data controller (not within the meaning of the Bulgarian Personal Data Protection Act, but within the meaning of the European directives), while Google is a third party authorised by the controller to manage that data on its behalf and at its expense; ergo, the responsibility lies with the controller;
  5. Informing the user and requesting consent are not required when the data being written and read serves technological needs tied to providing a service the user has explicitly requested. I would say that even in that case I would personally choose to inform the user about what is written and read and why, without asking for consent;
  6. According to the European directives, consent may also be expressed by users through specific technologies created for the purpose, but using such technologies does not remove the need to inform the user about what data is stored and for what purposes it is processed;
  7. The European rules in this area appear not to have been transposed into Bulgaria’s national legislation, which does not mean they may be ignored.

Canadian Man Behind Popular ‘Orcus RAT’

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/07/canadian-man-is-author-of-popular-orcus-rat/

Far too many otherwise intelligent and talented software developers these days apparently think they can get away with writing, selling and supporting malicious software and then couching their commerce as a purely legitimate enterprise. Here’s the story of how I learned the real-life identity of a Canadian man who’s laboring under that same illusion as the proprietor of one of the most popular and affordable tools for hacking into someone else’s computer.

Earlier this week I heard from Daniel Gallagher, a security professional who occasionally enjoys analyzing new malicious software samples found in the wild. Gallagher said he and members of @malwrhunterteam and @MalwareTechBlog recently got into a Twitter fight with the author of Orcus RAT, a tool they say was explicitly designed to help users remotely compromise and control computers that don’t belong to them.

A still frame from a Youtube video demonstrating Orcus RAT’s keylogging ability to steal passwords from Facebook and other sites.

The author of Orcus — a person going by the nickname “Ciriis Mcgraw” a.k.a. “Armada” on Twitter and other social networks — claimed that his RAT was in fact a benign “remote administration tool” designed for use by network administrators and not a “remote access Trojan” as critics charged. Gallagher and others took issue with that claim, pointing out that they were increasingly encountering computers that had been infected with Orcus unbeknownst to the legitimate owners of those machines.

The malware researchers noted another reason that Mcgraw couldn’t so easily distance himself from how his clients used the software: He and his team are providing ongoing technical support and help to customers who have purchased Orcus and are having trouble figuring out how to infect new machines or hide their activities online.

What’s more, the range of features and plugins supported by Armada, they argued, goes well beyond what a system administrator would look for in a legitimate remote administration client like TeamViewer, including the ability to launch a keylogger that records the victim’s every computer keystroke, as well as a feature that lets the user peek through a victim’s Web cam and disable the light on the camera that alerts users when the camera is switched on.

A new feature of Orcus announced July 7 lets users configure the RAT so that it evades digital forensics tools used by malware researchers, including an anti-debugger and an option that prevents the RAT from running inside of a virtual machine.

Other plugins offered directly from Orcus’s tech support page (PDF) and authored by the RAT’s support team include a “survey bot” designed to “make all of your clients do surveys for cash;” a “USB/.zip/.doc spreader,” intended to help users “spread a file of your choice to all clients via USB/.zip/.doc macros;” a “Virustotal.com checker” made to “check a file of your choice to see if it had been scanned on VirusTotal;” and an “Adsense Injector,” which will “hijack ads on pages and replace them with your Adsense ads and disable adblocker on Chrome.”

WHO IS ARMADA?

Gallagher said he was so struck by the guy’s “smugness” and sheer chutzpah that he decided to look closer at any clues that Ciriis Mcgraw might have left behind as to his real-world identity and location. Sure enough, he found that Ciriis Mcgraw also has a Youtube account under the same name, and that a video Mcgraw posted in July 2013 pointed to a 33-year-old security guard from Toronto, Canada.

Gallagher noticed that the video — a bystander recording on the scene of a police shooting of a Toronto man — included a link to the domain policereview[dot]info. A search of the registration records attached to that Web site name shows that the domain was registered to a John Revesz in Toronto and to the email address john.revesz@gmail.com.

A reverse WHOIS lookup ordered from Domaintools.com shows the same john.revesz@gmail.com address was used to register at least 20 other domains, including “thereveszfamily.com,” “johnrevesz.com,” “revesztechnologies[dot]com,” and — perhaps most tellingly — “lordarmada.info”.

Johnrevesz[dot]com is no longer online, but this cached copy of the site from the indispensable archive.org includes his personal résumé, which states that John Revesz is a network security administrator whose most recent job in that capacity was as an IT systems administrator for TD Bank. Revesz’s LinkedIn profile indicates that for the past year at least he has served as a security guard for GardaWorld International Protective Services, a private security firm based in Montreal.

Revesz’s CV also says he’s the owner of the aforementioned Revesz Technologies, but it’s unclear whether that business actually exists; the company’s Web site currently redirects visitors to a series of sites promoting spammy and scammy surveys, come-ons and giveaways.

IT’S IN THE EULA, STUPID!

Contacted by KrebsOnSecurity, Revesz seemed surprised that I’d connected the dots, but beyond that did not try to disavow ownership of the Orcus RAT.

“Profit was never the intentional goal, however with the years of professional IT networking experience I have myself, knew that proper correct development and structure to the environment is no free venture either,” Revesz wrote in reply to questions about his software. “Utilizing my 15+ years of IT experience I have helped manage Orcus through its development.”

Revesz continued:

“As for your legalities question.  Orcus Remote Administrator in no ways violates Canadian laws for software development or sale.  We neither endorse, allow or authorize any form of misuse of our software.  Our EULA [end user license agreement] and TOS [terms of service] is very clear in this matter. Further we openly and candidly work with those prudent to malware removal to remove Orcus from unwanted use, and lock out offending users which may misuse our software, just as any other company would.”

Revesz said none of the aforementioned plugins were supported by Orcus, that they were all developed by third-party developers, and that “Orcus will never allow implementation of such features, and or plugins would be outright blocked on our part.”

In an apparent contradiction to that claim, a plugin that allows Orcus users to disable the Webcam light on a computer running the software and one that enables the RAT to be used as a “stresser” to knock sites and individual users offline are available directly from Orcus Technologies’ Github page.

Revesz also offers a service to help people cover their tracks online. Using his alter ego “Armada” on the hacker forum Hackforums[dot]net, he sells a “bulletproof dynamic DNS service” that promises not to keep records of customer activity.

Dynamic DNS services allow users to have Web sites hosted on servers that frequently change their Internet addresses. This type of service is useful for people who want to host a Web site on a home-based Internet address that may change from time to time, because dynamic DNS services can be used to easily map the domain name to the user’s new Internet address whenever it happens to change.

Unfortunately, these dynamic DNS providers are extremely popular in the attacker community, because they allow bad guys to keep their malware and scam sites up even when researchers manage to track the attacking IP address and convince the ISP responsible for that address to disconnect the malefactor. In such cases, dynamic DNS allows the owner of the attacking domain to simply re-route the attack site to another Internet address that he controls.

Free dynamic DNS providers tend to report or block suspicious or outright malicious activity on their networks, and may well share evidence about the activity with law enforcement investigators. In contrast, Armada’s dynamic DNS service is managed solely by him, and he promises in his ad on Hackforums that the service — to which he sells subscriptions of various tiers for between $30-$150 per year — will not log customer usage or report anything to law enforcement.

According to writeups by Kaspersky Lab and Heimdal Security, Revesz’s dynamic DNS service has been seen used in connection with malicious botnet activity by another RAT known as Adwind.  Indeed, Revesz’s service appears to involve the domain “nullroute[dot]pw”, which is one of 21 domains registered to a “Ciriis Mcgraw,” (as well as orcus[dot]pw and orcusrat[dot]pw).

I asked Gallagher (the researcher who originally tipped me off about Revesz’s activities) whether he was persuaded at all by Revesz’s arguments that Orcus was just a tool and that Revesz wasn’t responsible for how it was used.

Gallagher said he and his malware researcher friends had private conversations with Revesz in which he seemed to acknowledge that some aspects of the RAT went too far, and promised to release software updates to remove certain objectionable functionalities. But Gallagher said those promises felt more like the actions of someone trying to cover himself.

“I constantly try to question my assumptions and make sure I’m playing devil’s advocate and not jumping the gun,” Gallagher said. “But I think he’s well aware that what he’s doing is hurting people, it’s just now he knows he’s under the microscope and trying to do and say enough to cover himself if it ever comes down to him being questioned by law enforcement.”

Some stuff about color

Post Syndicated from Eevee original https://eev.ee/blog/2016/07/16/some-stuff-about-color/

I’ve been trying to paint more lately, which means I have to actually think about color. Like an artist, I mean. I’m okay at thinking about color as a huge nerd, but I’m still figuring out how to adapt that.

While I work on that, here is some stuff about color from the huge nerd perspective, which may or may not be useful or correct.

Hue

Hues are what we usually think of as “colors”, independent of how light or dim or pale they are: general categories like purple and orange and green.

Strictly speaking, a hue is a specific wavelength of light. I think it’s really weird to think about light as coming in a bunch of wavelengths, so I try not to think about the precise physical mechanism too much. Instead, here’s a rainbow.

rainbow spectrum

These are all the hues the human eye can see. (Well, the ones this image and its colorspace and your screen can express, anyway.) They form a nice spectrum, which wraps around so the two red ends touch.

(And here is the first weird implication of the physical interpretation: purple is not a real color, in the sense that there is no single wavelength of light that we see as purple. The actual spectrum runs from red to blue; when we see red and blue simultaneously, we interpret it as purple.)

The spectrum is divided by three sharp lines: yellow, cyan, and magenta. The areas between those lines are largely dominated by red, green, and blue. These are the two sets of primary colors, those hues from which any others can be mixed.

Red, green, and blue (RGB) make up the additive primary colors, so named because they add light on top of black. LCD screens work exactly this way: each pixel is made up of three small red, green, and blue rectangles. It’s also how the human eye works, which is fascinating but again a bit too physical.

Cyan, magenta, and yellow are the subtractive primary colors, which subtract light from white. This is how ink, paint, and other materials work. When you look at an object, you’re seeing the colors it reflects, which are the colors it doesn’t absorb. A red ink reflects red light, which means it absorbs green and blue light. Cyan ink only absorbs red, and yellow ink only absorbs blue; if you mix them, you’ll get ink that absorbs both red and blue, and thus will appear green. A pure black is often included to make CMYK; mixing all three colors would technically get you black, but it might be a bit muddy and would definitely use three times as much ink.
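As a toy model of that mixing (assuming idealized inks whose per-channel transmittances simply multiply), the arithmetic looks like this:

```python
# Idealized subtractive mixing: each ink transmits some fraction of (R, G, B)
# light, and stacking inks multiplies the transmittances channel by channel.
cyan = (0.0, 1.0, 1.0)     # absorbs red, passes green and blue
yellow = (1.0, 1.0, 0.0)   # absorbs blue, passes red and green

mixed = tuple(c * y for c, y in zip(cyan, yellow))
print(mixed)               # (0.0, 1.0, 0.0): only green survives
```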

The great kindergarten lie

Okay, you probably knew all that. What confused me for the longest time was how no one ever mentioned the glaring contradiction with what every kid is taught in grade school art class: that the primary colors are red, blue, and yellow. Where did those come from, and where did they go?

I don’t have a canonical answer for that, but it does make some sense. Here’s a comparison: the first spectrum is a full rainbow, just like the one above. The second is the spectrum you get if you use red, blue, and yellow as primary colors.

a full spectrum of hues, labeled with color names that are roughly evenly distributed
a spectrum of hues made from red, blue, and yellow

The color names come from xkcd’s color survey, which asked a massive number of visitors to give freeform names to a variety of colors. One of the results was a map of names for all the fully-saturated colors, providing a rough consensus for how English speakers refer to them.

The first wheel is what you get if you start with red, green, and blue — but since we’re talking about art class here, it’s really what you get if you start with cyan, magenta, and yellow. The color names are spaced fairly evenly, save for blue and green, which almost entirely consume the bottom half.

The second wheel is what you get if you start with red, blue, and yellow. Red has replaced magenta, and blue has replaced cyan, so neither color appears on the wheel — red and blue are composites in the subtractive model, and you can’t make primary colors like cyan or magenta out of composite colors.

Look what this has done to the distribution of names. Pink and purple have shrunk considerably. Green is half its original size and somewhat duller. Red, orange, and yellow now consume a full half of the wheel.

There’s a really obvious advantage here, if you’re a painter: people are orange.

Yes, yes, we subdivide orange into a lot of more specific colors like “peach” and “brown”, but peach is just pale orange, and brown is just dark orange. Everyone, of every race, is approximately orange. Sunburn makes you redder; fear and sickness make you yellower.

People really like to paint other people, so it makes perfect sense to choose primary colors that easily mix to make people colors.

Meanwhile, cyan and magenta? When will you ever use those? Nothing in nature remotely resembles either of those colors. The true color wheel is incredibly, unnaturally bright. The reduced color wheel is much more subdued, with only one color that stands out as bright: yellow, the color of sunlight.

You may have noticed that I even cheated a little bit. The blue in the second wheel isn’t the same as the blue from the first wheel; it’s halfway between cyan and blue, a tertiary color I like to call azure. True pure blue is just as unnatural as true cyan; azure is closer to the color of the sky, which is reflected as the color of water.

People are orange. Sunlight is yellow. Dirt and rocks and wood are orange. Skies and oceans are blue. Blush and blood and sunburn are red. Sunsets are largely red and orange. Shadows are blue, the opposite of yellow. Plants are green, but in sun or shade they easily skew more blue or yellow.

All of these colors are much easier to mix if you start with red, blue, and yellow. It may not match how color actually works, but it’s a useful approximation for humans. (Anyway, where will you find dyes that are cyan or magenta? Blue is hard enough.)

I’ve actually done some painting since I first thought about this, and would you believe they sell paints in colors other than bright red, blue, and yellow? You can just pick whatever starting colors you want and the whole notion of “primary” goes a bit out the window. So maybe this is all a bit moot.

More on color names

The way we name colors fascinates me.

A “basic color term” is a single, unambiguous, very common name for a group of colors. English has eleven: red, orange, yellow, green, blue, purple, black, white, grey, pink, and brown.

Of these, orange is the only tertiary hue; brown is the only name for a specifically low-saturation color; pink and grey are the only names for specifically light shades. I can understand grey — it’s handy to have a midpoint between black and white — but the other exceptions are quite interesting.

Looking at the first color wheel again, “blue” and “green” together consume almost half of the spectrum. That seems reasonable, since they’re both primary colors, but “red” is relatively small; large chunks of it have been eaten up by its neighbors.

Orange is a tertiary color in either RGB or CMYK: it’s a mix of red and yellow, a primary and secondary color. Yet we ended up with a distinct name for it. I could understand if this were to give white folks’ skin tones their own category, similar to the reasons for the RBY art class model, but we don’t generally refer to white skin as “orange”. So where did this color come from?

Sometimes I imagine a parallel universe where we have common names for other tertiary colors. How much richer would the blue/green side of the color wheel be if “chartreuse” or “azure” were basic color terms? Can you even imagine treating those as distinct colors, not just variants of green or blue? That’s exactly how we treat orange, even though it’s just a variant of red.

I can’t speak to whether our vocabulary truly influences how we perceive or think (and that often-cited BBC report seems to have no real source). But for what it’s worth, I’ve been trying to think of “azure” as distinct for a few years now, and I’ve had a much easier time dealing with blues in art and design. Giving the cyan end of blue a distinct and common name has given me an anchor, something to arrange thoughts around.

Come to think of it, yellow is an interesting case as well. A decent chunk of the spectrum was ultimately called “yellow” in the xkcd map; here’s that chunk zoomed in a bit.

full range of xkcd yellows

How much of this range would you really call yellow, rather than green (or chartreuse!) or orange? Yellow is a remarkably specific color: mixing it even slightly with one of its neighbors loses some of its yellowness, and darkening it moves it swiftly towards brown.

I wonder why this is. When we see a yellowish-orange, are we inclined to think of it as orange because it looks like orange under yellow sunlight? Is it because yellow is between red and green, and the red and green receptors in the human eye pick up on colors that are very close together?


Most human languages develop their color terms in a similar order, with a split between blue and green often coming relatively late in a language’s development. Of particular interest to me is that orange and pink are listed as a common step towards the end — I’m really curious as to whether that happens universally and independently, or it’s just influence from Western color terms.

I’d love to see a list of the basic color terms in various languages, but such a thing is proving elusive. There’s a neat map of how many colors exist in various languages, but it doesn’t mention what the colors are. It’s easy enough to find a list of colors in various languages, like this one, but I have no idea whether they’re basic in each language. Note also that this chart only has columns for English’s eleven basic colors, even though Russian and several other languages have a twelfth basic term for azure. The page even mentions this, but doesn’t include a column for it, which seems ludicrous in an “omniglot” table.

The only language I know many color words in is Japanese, so I went delving into some of its color history. It turns out to be a fascinating example, because you can see how the color names developed right in the spelling of the words.

See, Japanese has a couple different types of words that function like adjectives. Many of the most common ones end in -i, like kawaii, and can be used like verbs — we would translate kawaii as “cute”, but it can function just as well as “to be cute”. I’m under the impression that -i adjectives trace back to Old Japanese, and new ones aren’t created any more.

That’s really interesting, because to my knowledge, only five Japanese color names are in this form: kuroi (black), shiroi (white), akai (red), aoi (blue), and kiiroi (yellow). So these are, necessarily, the first colors the language could describe. If you compare to the chart showing progression of color terms, this is the bottom cell in column IV: white, red, yellow, green/blue, and black.

A great many color names are compounds with iro, “color” — for example, chairo (brown) is cha (tea) + iro. Of the five basic terms above, kiiroi is almost of that form, but unusually still has the -i suffix. (You might think that shiroi contains iro, but shi is a single character distinct from i. kiiroi is actually written with the kanji for iro.) It’s possible, then, that yellow was the latest of these five words — and that would give Old Japanese words for white, red/yellow, green/blue, and black, matching the most common progression.

Skipping ahead some centuries, I was surprised to learn that midori, the word for green, was only promoted to a basic color fairly recently. It’s existed for a long time and originally referred to “greenery”, but it was considered to be a shade of blue (ao) until the Allied occupation after World War II, when teaching guidelines started to mention a blue/green distinction. (I would love to read more details about this, if you have any; the West’s coming in and adding a new color is a fascinating phenomenon, and I wonder what other substantial changes were made to education.)

Japanese still has a number of compound words that use ao (blue!) to mean what we would consider green: aoshingou is a green traffic light, aoao means “lush” in a natural sense, aonisai is a greenhorn (presumably from the color of unripe fruit), aojiru is a drink made from leafy vegetables, and so on.

This brings us to at least six basic colors, the fairly universal ones: black, white, red, yellow, blue, and green. What others does Japanese have?

From here, it’s a little harder to tell. I’m not exactly fluent and definitely not a native speaker, and resources aimed at native English speakers are more likely to list colors familiar to English speakers. (I mean, until this week, I never knew just how common it was for aoi to mean green, even though midori as a basic color is only about as old as my parents.)

I do know two curious standouts: pinku (pink) and orenji (orange), both English loanwords. I can’t be sure that they’re truly basic color terms, but they sure do come up a lot. The thing is, Japanese already has names for these colors: momoiro (the color of peach — flowers, not the fruit!) and daidaiiro (the color of, um, an orange). Why adopt loanwords for concepts that already exist?

I strongly suspect, but cannot remotely qualify, that pink and orange weren’t basic colors until Western culture introduced the idea that they could be — and so the language adopted the idea and the words simultaneously. (A similar thing happened with grey, natively haiiro and borrowed as guree, but in my limited experience even the loanword doesn’t seem to be very common.)

Based on the shape of the words and my own unqualified guesses of what counts as “basic”, the progression of basic colors in Japanese seems to be:

  1. black, white, red (+ yellow), blue (+ green) — Old Japanese
  2. yellow — later Old Japanese
  3. brown — sometime in the past millennium
  4. green — after WWII
  5. pink, orange — last few decades?

And in an effort to put a teeny bit more actual research into this, I searched the Leeds Japanese word frequency list (drawn from websites, so modern Japanese) for some color words. Here’s the rank of each. Word frequency is generally such that the actual frequency of a word is inversely proportional to its rank — so a word in rank 100 is twice as common as a word in rank 200. The five -i colors are split into both noun and adjective forms, so I’ve included an adjusted rank that you would see if they were counted as a single word, using ab / (a + b); a quick sanity check of that formula follows the list below.

  • white: 1010 ≈ 1959 (as a noun) + 2083 (as an adjective)
  • red: 1198 ≈ 2101 (n) + 2790 (adj)
  • black: 1253 ≈ 2017 (n) + 3313 (adj)
  • blue: 1619 ≈ 2846 (n) + 3757 (adj)
  • green: 2710
  • yellow: 3316 ≈ 6088 (n) + 7284 (adj)
  • orange: 4732 (orenji), n/a (daidaiiro)
  • pink: 4887 (pinku), n/a (momoiro)
  • purple: 6502 (murasaki)
  • grey: 8472 (guree), 10848 (haiiro)
  • brown: 10622 (chairo)
  • gold: 12818 (kin’iro)
  • silver: n/a (gin’iro)
  • navy: n/a (kon)

“n/a” doesn’t mean the word is never used, only that it wasn’t in the top 15,000.
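And the promised quick check of the adjusted-rank formula, in Python:

```python
def adjusted_rank(a: float, b: float) -> float:
    """Frequency is roughly 1/rank, so combining two forms of a word means
    adding frequencies: 1/a + 1/b = 1/r, which gives r = ab / (a + b)."""
    return a * b / (a + b)

print(round(adjusted_rank(1959, 2083)))   # 1010, matching "white" above
```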

I’m not sure where the cutoff is for “basic” color terms, but it’s interesting to see where the gaps lie. I’m especially surprised that yellow is so far down, and that purple (which I hadn’t even mentioned here) is as high as it is. Also, green is above yellow, despite having been a basic color for less than a century! Go, green.

For comparison, in American English:

  • black: 254
  • white: 302
  • red: 598
  • blue: 845
  • green: 893
  • yellow: 1675
  • brown: 1782
  • golden: 1835
  • gray: 1949
  • pink: 2512
  • orange: 3171
  • purple: 3931
  • silver: n/a
  • navy: n/a

Don’t read too much into the actual ranks; the languages and corpuses are both very different.

Color models

There are numerous ways to arrange and identify colors, much as there are numerous ways to identify points in 3D space. There are also benefits and drawbacks to each model, but I’m often most interested in how much sense the model makes to me as a squishy human.

RGB is the most familiar to anyone who does things with computers — it splits a color into its red, green, and blue channels, and measures the amount of each from “none” to “maximum”. (HTML sets this range as 0 to 255, but you could just as well call it 0 to 1, or -4 to 7600.)

RGB has a couple of interesting problems. Most notably, it’s kind of difficult to read and write by hand. You can sort of get used to how it works, though I’m still not particularly great at it. I keep in mind these rules:

  1. The largest channel is roughly how bright the color is.

    This follows pretty easily from the definition of RGB: it’s colored light added on top of black. The maximum amount of every color makes white, so less than the maximum must be darker, and of course none of any color stays black.

  2. The smallest channel is how pale (desaturated) the color is.

    Mixing equal amounts of red, green, and blue will produce grey. So if the smallest channel is green, you can imagine “splitting” the color between a grey (green, green, green), and the leftovers (red – green, 0, blue – green). Mixing grey with a color will of course make it paler — less saturated, closer to grey — so the bigger the smallest channel, the greyer the color.

  3. Whatever’s left over tells you the hue.

It might be time for an illustration. Consider the color (50%, 62.5%, 75%). The brightness is “capped” at 75%, the largest channel; the desaturation is 50%, the smallest channel. Here’s what that looks like.

illustration of the color (50%, 62.5%, 75%) split into three chunks of 50%, 25%, and 25%

Cutting out the grey and the darkness leaves a chunk in the middle of actual differences between the colors. Note that I’ve normalized it to (0%, 50%, 100%), which is the percentage of that small middle range. Removing the smallest and largest channels will always leave you with a middle chunk where at least one channel is 0% and at least one channel is 100%. (Or it’s grey, and there is no middle chunk.)

The odd one out is green at 50%, so the hue of this color is halfway between cyan (green + blue) and blue. That hue is… azure! So this color is a slightly darkened and fairly dull azure. (The actual amount of “greyness” is the smallest relative to the largest, so in this case it’s about ⅔ grey, or about ⅓ saturated.) Here’s that color.

a slightly darkened, fairly dull azure
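If you’d rather make a computer check that mental math, here’s a minimal sketch of those three rules in Python (the names are mine, purely for illustration):

    # Split an RGB color (channels 0-1) into a grey part (the smallest
    # channel), a brightness cap (the largest channel), and a normalized
    # middle chunk that carries the hue.
    def split_rgb(r, g, b):
        lo, hi = min(r, g, b), max(r, g, b)
        if lo == hi:
            return lo, hi, None  # pure grey: there is no middle chunk
        hue_chunk = tuple((c - lo) / (hi - lo) for c in (r, g, b))
        return lo, hi, hue_chunk

    # The worked example, (50%, 62.5%, 75%):
    grey, bright, hue = split_rgb(0.50, 0.625, 0.75)
    print(grey, bright, hue)  # 0.5 0.75 (0.0, 0.5, 1.0)
    print(grey / bright)      # about 0.667: roughly two-thirds grey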

This is a bit of a pain to do in your head all the time, so why not do it directly?

HSV is what you get when you directly represent colors as hue, saturation, and value. It’s often depicted as a cylinder, with hue represented as an angle around the color wheel: 0° for red, 120° for green, and 240° for blue. Saturation ranges from grey to a fully-saturated color, and value ranges from black to, er, the color. The azure above is (210°, ⅓, ¾) in HSV — 210° is halfway between 180° (cyan) and 240° (blue), ⅓ is the saturation measurement mentioned before, and ¾ is the largest channel.

It’s that hand-waved value bit that gives me trouble. I don’t really know how to intuitively explain what value is, which makes it hard to modify value to make the changes I want. I feel like I should have a better grasp of this after a year and a half of drawing, but alas.

I prefer HSL, which uses hue, saturation, and lightness. Lightness ranges from black to white, with the unperturbed color in the middle. Here’s lightness versus value for the azure color. (Its lightness is ⅝, the average of the smallest and largest channels.)

comparison of lightness and value for the azure color

The lightness just makes more sense to me. I can understand shifting a color towards white or black, and the color in the middle of that bar feels related to the azure I started with. Value looks almost arbitrary; I don’t know where the color at the far end comes from, and it just doesn’t seem to have anything to do with the original azure.
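For what it’s worth, Python’s standard colorsys module agrees with these numbers; here’s a quick check. (Mind the orders: rgb_to_hsv returns (h, s, v), while rgb_to_hls returns (h, l, s), and hue comes back as a 0–1 fraction rather than degrees.)

    import colorsys

    h, s, v = colorsys.rgb_to_hsv(0.50, 0.625, 0.75)
    print(h * 360, s, v)  # about 210.0, 0.333, 0.75: the (210°, ⅓, ¾) azure

    h, l, s = colorsys.rgb_to_hls(0.50, 0.625, 0.75)
    print(h * 360, l, s)  # about 210.0, 0.625, 0.333: lightness ⅝, as promised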

I’d hoped Wikipedia could clarify this for me. It tells me value is the same thing as brightness, but the mathematical definition on that page matches the definition of intensity from the little-used HSI model. I looked up lightness instead, and the first sentence says it’s also known as value. So lightness is value is brightness is intensity, but also they’re all completely different.

Wikipedia also says that HSV is sometimes known as HSB (where the “B” is for “brightness”), but I swear I’ve only ever seen HSB used as a synonym for HSL. I don’t know anything any more.

Oh, and in case you weren’t confused enough, the definition of “saturation” is different in HSV and HSL. Good luck!

Wikipedia does have some very nice illustrations of HSV and HSL, though, including depictions of them as a cone and double cone.

(Incidentally, you can use HSL directly in CSS now — there are hsl() and hsla() CSS3 functions which evaluate as colors. Combining these with Sass’s scale-color() function makes it fairly easy to come up with decent colors by hand, without having to go back and forth with an image editor. And I can even sort of read them later!)

An annoying problem with all of these models is that the idea of “lightness” is never quite consistent. Even in HSL, a yellow will appear much brighter than a blue with the same saturation and lightness. You may even have noticed in the RGB split diagram that I used dark red and green text, but light blue — the pure blue is so dark that a darker blue on top is hard to read! Yet all three colors have the same lightness in HSL, and the same value in HSV.

Clearly neither of these definitions of lightness or brightness or whatever is really working. There’s a thing called luminance, which is a weighted sum of the red, green, and blue channels that puts green as a whopping ten times brighter than blue. It tends to reflect how bright colors actually appear.
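As a sketch, here are the commonly-cited Rec. 709 weights. (Strictly speaking they apply to linear RGB, a wrinkle I get to below; applying them to raw sRGB values is a rough but common approximation.)

    # Green counts roughly ten times as much as blue (0.7152 vs 0.0722).
    def luminance(r, g, b):
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    print(luminance(1, 1, 0))  # yellow: about 0.93, nearly as bright as white
    print(luminance(0, 0, 1))  # blue: 0.0722, barely brighter than black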

Unfortunately, luminance and related values are only used in fairly obscure color models, like YUV and Lab. I don’t mean “obscure” in the sense that nobody uses them, but rather that they’re very specialized and not often seen outside their particular niches: YUV is very common in video encoding, and Lab is useful for serious photo editing.

Lab is pretty interesting, since it’s intended to resemble how human vision works. It’s designed around the opponent process theory, which states that humans see color in three pairs of opposites: black/white, red/green, and yellow/blue. The idea is that we perceive color as somewhere along these axes, so a redder color necessarily appears less green — put another way, while it’s possible to see “yellowish green”, there’s no such thing as a “yellowish blue”.

(I wonder if that explains our affection for orange: we effectively perceive yellow as a fourth distinct primary color.)

Lab runs with this idea, making its three channels be lightness (but not the HSL lightness!), a (green to red), and b (blue to yellow). The neutral points for a and b are at zero, with green/blue extending in the negative direction and red/yellow extending in the positive direction.

Lab can express a whole bunch of colors beyond RGB, meaning they can’t be shown on a monitor, or even represented in most image formats. And you now have four primary colors in opposing pairs. That all makes it pretty weird, and I’ve actually never used it myself, but I vaguely aspire to do so someday.

I think those are all of the major ones. There’s also XYZ, which I think is some kind of master color model. Of course there’s CMYK, which is used for printing, but it’s effectively just the inverse of RGB.

With that out of the way, now we can get to the hard part!

Colorspaces

I called RGB a color model: a way to break colors into component parts.

Unfortunately, RGB alone can’t actually describe a color. You can tell me you have a color (0%, 50%, 100%), but what does that mean? 100% of what? What is “the most blue”? More importantly, how do you build a monitor that can display “the most blue” the same way as other monitors? Without some kind of absolute reference point, this is meaningless.

A color space is a color model plus enough information to map the model to absolute real-world colors. There are a lot of these. I’m looking at Krita’s list of built-in colorspaces and there are at least a hundred, most of them RGB.

I admit I’m bad at colorspaces and have basically done my best to not ever have to think about them, because they’re a big tangled mess and hard to reason about.

For example! The effective default RGB colorspace, the one almost everything will assume you’re using, is sRGB, which was specifically designed to be this kind of global default. Okay, great.

Now, sRGB has gamma built in. Gamma correction means slapping an exponent on color values to skew them towards or away from black. The color is assumed to be in the range 0–1, so any positive power will produce output from 0–1 as well. An exponent greater than 1 will skew towards black (because you’re multiplying a number less than 1 by itself), whereas an exponent less than 1 will skew away from black.

What this means is that halfway between black and white in sRGB isn’t (50%, 50%, 50%), but around (73%, 73%, 73%). Here’s a great example, borrowed from this post (with numbers out of 255):

alternating black and white lines alongside gray squares of 128 and 187

Which one looks more like the alternating bands of black and white lines? Surely the one you pick is the color that’s actually halfway between black and white.

And yet, in most software that displays or edits images, interpolating white and black will give you a 50% gray — much darker than the original looked. A quick test is to scale that image down by half and see whether the result looks closer to the top square or the bottom square. (Firefox, Chrome, and GIMP get it wrong; Krita gets it right.)

The right thing to do here is convert an image to a linear colorspace before modifying it, then convert it back for display. In a linear colorspace, halfway between white and black is still 50%, but it looks like the 73% grey. This is great fun: it involves a piecewise function and an exponent of 2.4.
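Here’s that piecewise function in Python, to the best of my reading of the sRGB spec (per channel, with channels in 0–1):

    # sRGB decoding: a short linear "toe" near black, then that 2.4 exponent.
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    # sRGB encoding: the inverse.
    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    print(linear_to_srgb(0.5))  # about 0.735: physically-half is ~73% in sRGB
    print(srgb_to_linear(0.5))  # about 0.214: a "50%" sRGB grey is much darker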

It’s really difficult to reason about this, for much the same reason that it’s hard to grasp text encoding problems in languages with only one string type. Ultimately you still have an RGB triplet at every stage, and it’s very easy to lose track of what kind of RGB that is. Then there’s the fact that most images don’t specify a colorspace in the first place so you can’t be entirely sure whether it’s sRGB, linear sRGB, or something else entirely; monitors can have their own color profiles; you may or may not be using a program that respects an embedded color profile; and so on. How can you ever tell what you’re actually looking at and whether it’s correct? I can barely keep track of what I mean by “50% grey”.

And then… what about transparency? Should a 50% transparent white atop solid black look like 50% grey, or 73% grey? Krita seems to leave it to the colorspace: sRGB gives the former, but linear sRGB gives the latter. Does this mean I should paint in a linear colorspace? I don’t know! (Maybe I’ll give it a try and see what happens.)
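For the curious, here are those two answers side by side for 50% white over black, using the conversion helpers from the previous snippet:

    blended_srgb = 0.5 * 1.0 + 0.5 * 0.0  # naively average the encoded values
    print(blended_srgb)                    # 0.5: the darker 50% grey

    blended_linear = 0.5 * srgb_to_linear(1.0) + 0.5 * srgb_to_linear(0.0)
    print(linear_to_srgb(blended_linear))  # about 0.735: the lighter 73% grey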

Something I genuinely can’t answer is what effect this has on HSV and HSL, which are defined in terms of RGB. Is there such a thing as linear HSL? Does anyone ever talk about this? Would it make lightness more sensible?

There is a good reason for this, at least: the human eye is better at distinguishing dark colors than light ones. I was surprised to learn that, but of course, it’s been hidden from me by sRGB, which is deliberately skewed to dedicate more space to darker colors. In a linear colorspace, a gradient from white to black would have a lot of indistinguishable light colors, but appear to have severe banding among the darks.

several different black to white gradients

All three of these are regular black-to-white gradients drawn in 8-bit color (i.e., channels range from 0 to 255). The top one is the naïve result if you draw such a gradient in sRGB: the midpoint is the too-dark 50% grey. The middle one is that same gradient, but drawn in a linear colorspace. Obviously, a lot of dark colors are “missing”, in the sense that we could see them but there’s no way to express them in linear color. The bottom gradient makes this more clear: it’s a gradient of all the greys expressible in linear sRGB.

This is the first time I’ve ever delved so deeply into exactly how sRGB works, and I admit it’s kind of blowing my mind a bit. Straightforward linear color is so much lighter, and this huge bias gives us a lot more to work with. Also, 73% being the midpoint certainly explains a few things about my problems with understanding brightness of colors.

There are other RGB colorspaces, of course, and I suppose they all make for an equivalent CMYK colorspace. YUV and Lab are families of colorspaces, though I think most people talking about Lab specifically mean CIELAB (or “L*a*b*”), and there aren’t really any competitors. HSL and HSV are defined in terms of RGB, and image data is rarely stored directly as either, so there aren’t really HSL or HSV colorspaces.

I think that exhausts all the things I know.

Real world color is also a lie

Just in case you thought these problems were somehow unique to computers. Surprise! Modelling color is hard because color is hard.

I’m sure you’ve seen the checker shadow illusion, possibly one of the most effective optical illusions, where the presence of a shadow makes a gray square look radically different than a nearby square of the same color.

Our eyes are very good at stripping away ambient light effects to tell what color something “really” is. Have you ever been outside in bright summer weather for a while, then come inside and found everything starkly blue? That’s lingering compensation for the yellow sunlight, which had been shifting everything to look slightly yellow; the opposite of yellow is blue.

Or, here, I like this. I’m sure there are more drastic examples floating around, but this is the best I could come up with. Here are some Pikachu I found via GIS.

photo of Pikachu plushes on a shelf

My question for you is: what color is Pikachu?

Would you believe… orange?

photo of Pikachu plushes on a shelf, overlaid with color swatches; the Pikachu in the background are orange

In each box, the bottom color is what I color-dropped, and the top color is the same hue with 100% saturation and 50% lightness. It’s the same spot, on the same plush, right next to each other — but the one in the background is orange, not yellow. At best, it’s brown.

What we see as “yellow in shadow” and interpret to be “yellow, but darker” turns out to be another color entirely. (The grey whistles are, likewise, slightly blue.)

Did you know that mirrors are green? You can see it in a mirror tunnel: the image gets slightly greener as it goes through the mirror over and over.

Distant mountains and other objects, of course, look bluer.

This all makes painting rather complicated, since it’s not actually about painting things the color that they “are”, but painting them in such a way that a human viewer will interpret them appropriately.

I, er, don’t know enough to really get very deep here. I really should, seeing as I keep trying to paint things, but I don’t have a great handle on it yet. I’ll have to defer to Mel’s color tutorial. (warning: big)

Blending modes

You know, those things in Photoshop.

I’ve always found these remarkably unintuitive. Most of them have names that don’t remotely describe what they do, the math doesn’t necessarily translate to useful understanding, and they’re incredibly poorly-documented. So I went hunting for some precise definitions, even if I had to read GIMP’s or Krita’s source code.

In the following, A is a starting image, and B is something being drawn on top with the given blending mode. (In the case of layers, B is the layer with the mode, and A is everything underneath.) Generally, the same operation is done on each of the RGB channels independently. Everything is scaled to 0–1, and results are generally clamped to that range.

I believe all of these treat layer alpha the same way: linear interpolation between A and the combination of A and B. If B has alpha t, and the blending mode is a function f, then the result is t × f(A, B) + (1 - t) × A.

If A and B themselves have alpha, the result is a little more complicated, and probably not that interesting. It tends to work how you’d expect. (If you’re really curious, look at the definition of BLEND() in GIMP’s developer docs.)
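Here’s a minimal sketch of that machinery in Python, using two of the modes defined below, Multiply and Screen, as the blend functions (everything per channel, in 0–1):

    def multiply(a, b):
        return a * b

    def screen(a, b):
        return 1 - (1 - a) * (1 - b)

    # t × f(A, B) + (1 - t) × A, applied channel by channel.
    def blend_pixel(A, B, t, f):
        return tuple(t * f(a, b) + (1 - t) * a for a, b in zip(A, B))

    # Multiply a mid-blue base with a half-opaque orange layer:
    print(blend_pixel((0.2, 0.4, 0.8), (1.0, 0.6, 0.1), 0.5, multiply))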

  • Normal: B. No blending is done; new pixels replace old pixels.

  • Multiply: A × B. As the name suggests, the channels are multiplied together. This is very common in digital painting for slapping on a basic shadow or tinting a whole image.

    I think the name has always thrown me off just a bit because “Multiply” sounds like it should make things bigger and thus brighter — but because we’re dealing with values from 0 to 1, Multiply can only ever make colors darker.

    Multiplying with black produces black. Multiplying with white leaves the other color unchanged. Multiplying with a gray is equivalent to blending with black. Multiplying a color with itself squares the color, which is similar to applying gamma correction.

    Multiply is commutative — if you swap A and B, you get the same result.

  • Screen: 1 - (1 - A)(1 - B). This is sort of an inverse of Multiply; it multiplies darkness rather than lightness. It’s defined as inverting both colors, multiplying, and inverting the result. Accordingly, Screen can only make colors lighter, and is also commutative. All the properties of Multiply apply to Screen, just inverted.

  • Hard Light: Equivalent to Multiply if B is dark (i.e., less than 0.5), or Screen if B is light. There’s an additional factor of 2 included to compensate for how the range of B is split in half: Hard Light with B = 0.4 is equivalent to Multiply with B = 0.8, since 0.4 is 0.8 of the way to 0.5. Right.

    This seems like a possibly useful way to apply basic highlights and shadows with a single layer? I may give it a try. (There’s a quick code sketch of Hard Light after this list.)

    The math is commutative, but since B is checked and A is not, Hard Light is itself not commutative.

  • Soft Light: Like Hard Light, but softer. No, really. There are several different versions of this, and they’re all a bit of a mess, not very helpful for understanding what’s going on.

    If you graphed the effect various values of B had on a color, you’d have a straight line from 0 up to 1 (at B = 0.5), and then it would abruptly change to a straight line back down to 0. Soft Light just seeks to get rid of that crease. Here’s Hard Light compared with GIMP’s Soft Light, where A is a black to white gradient from bottom to top, and B is a black to white gradient from left to right.

    graphs of combinations of all grays with Hard Light versus Soft Light

    You can clearly see the crease in the middle of Hard Light, where B = 0.5 and it transitions from Multiply to Screen.

  • Overlay: Equivalent to either Hard Light or Soft Light, depending on who you ask. In GIMP, it’s Soft Light; in Krita, it’s Hard Light except the check is done on A rather than B. Given the ambiguity, I think I’d rather just stick with Hard Light or Soft Light explicitly.

  • Difference: abs(A - B). Does what it says on the tin. I don’t know why you would use this? Difference with black causes no change; Difference with white inverts the colors. Commutative.

  • Addition and Subtract: A + B and A - B. I didn’t think much of these until I discovered that Krita has a built-in brush that uses Addition mode. It’s essentially just a soft spraypaint brush, but because it uses Addition, painting over the same area with a dark color will gradually turn the center white, while the fainter edges remain dark. The result is a fiery glow effect, which is pretty cool. I used it manually as a layer mode for a similar effect, to make a field of sparkles. I don’t know if there are more general applications.

    Addition is commutative, of course, but Subtract is not.

  • Divide: A ÷ B. Apparently this is the same as changing the white point to B. Accordingly, the result will blow out towards white very quickly as B gets darker.

  • Dodge and Burn: A ÷ (1 - B) and 1 - (1 - A) ÷ B. Inverses in the same way as Multiply and Screen. Similar to Divide, but with B inverted — so Dodge changes the white point to 1 - B, and blows out towards white as B gets lighter. I’ve never seen either of these effects not look horrendously gaudy, but I think photographers manage to use them, somehow.

  • Darken Only and Lighten Only: min(A, B) and max(A, B). Commutative.

  • Linear Light: (A + 2 × B) - 1. I think this is the same as Sai’s “Lumi and Shade” mode, which is very popular, at least in this house. It works very well for simple lighting effects, and shares the Soft/Hard Light property that darker colors darken and lighter colors lighten, but I don’t have a great grasp of it yet and don’t know quite how to explain what it does. So I made another graph:

    graph of Linear Light, with a diagonal band of shading going from upper left to bottom right

    Super weird! Half the graph is solid black or white; you have to stay in that sweet zone in the middle to get reasonable results.

    This is actually a combination of two other modes, Linear Dodge and Linear Burn, combined in much the same way as Hard Light. I’ve never encountered them used on their own, though.

  • Hue, Saturation, Value: Work like you might expect: converts A to HSV and replaces either its hue, saturation, or value with B’s.

  • Color: Uses HSL, unlike the above three. Combines B’s hue and saturation with A’s lightness.

  • Grain Extract and Grain Merge: A - B + 0.5 and A + B - 0.5. These are clearly related to film grain, somehow, but their exact use eludes me.

    I did find this example post where someone combines a photo with a blurred copy using Grain Extract and Grain Merge. Grain Extract picked out areas of sharp contrast, and Grain Merge emphasized them, which seems relevant enough to film grain. I might give these a try sometime.
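Here’s the promised sketch of Hard Light, built from the multiply and screen helpers in the earlier snippet, along with the Krita-style Overlay, which reportedly just swaps which layer gets checked:

    # Multiply for dark B, Screen for light B, with the factor of 2
    # rescaling each half of B's range.
    def hard_light(a, b):
        if b < 0.5:
            return multiply(a, 2 * b)  # b = 0.4 acts like Multiply with 0.8
        return screen(a, 2 * b - 1)    # b = 0.75 acts like Screen with 0.5

    # Multiply and Screen are commutative, so checking A instead of B
    # amounts to swapping the arguments.
    def overlay(a, b):
        return hard_light(b, a)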

Those are all the modes in GIMP (except Dissolve, which isn’t a real blend mode; also, GIMP doesn’t have Linear Light). Photoshop has a handful more. Krita has a preposterous number of other modes, no, really, it is absolutely ridiculous, you cannot even imagine.

I may be out of things

There’s plenty more to say about color, both technically and design-wise — contrast and harmony, color blindness, relativity, dithering, etc. I don’t know if I can say any of it with any particular confidence, though, so perhaps it’s best I stop here.

I hope some of this was instructive, or at least interesting!

Adobe, Microsoft Patch Critical Security Bugs

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/07/adobe-microsoft-patch-critical-security-bugs/

Adobe has pushed out a critical update to plug at least 52 security holes in its widely-used Flash Player browser plugin, and another update to patch holes in Adobe Reader. Separately, Microsoft released 11 security updates to fix more than 40 flaws in Windows and related software.

First off, if you have Adobe Flash Player installed and haven’t yet hobbled this insecure program so that it runs only when you want it to, you are playing with fire. It’s bad enough that hackers are constantly finding and exploiting zero-day flaws in Flash Player before Adobe even knows about the bugs.

The bigger issue is that Flash is an extremely powerful program that runs inside the browser, which means users can compromise their computer just by browsing to a hacked or malicious site that targets unpatched Flash flaws.

The smartest option is probably to ditch this insecure program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach — as well as slightly less radical solutions — in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart.

Happily, Adobe has delayed plans to stop distributing direct download links to its Flash Player program. The company had said it would decommission the direct download page on June 30, 2016, but the latest, patched Flash version 22.0.0.209 for Windows and Mac systems is still available there. The wording on the site has been changed to indicate the download links will be decommissioned “soon.”

Adobe’s advisory on the Flash flaws is here. The company also released a security update that addresses at least 30 security holes in Adobe Reader. The latest version of Reader for most Windows and Mac users is v. 15.017.20050.

Six of the 11 patches Microsoft issued this month earned its most dire “critical” rating, which Microsoft assigns to software bugs that can be exploited to remotely commandeer vulnerable machines with little to no help from users, save perhaps browsing to a hacked or malicious site.

In fact, most of the vulnerabilities Microsoft fixed this Patch Tuesday are in the company’s Web browsers — i.e., Internet Explorer (15 vulnerabilities) and its newer Edge browser (13 flaws). Both patches address numerous browse-and-get-owned issues.

Another critical patch from Redmond tackles problems in Microsoft Office that could be exploited through poisoned Office documents.

For further breakdown on the patches this month from Adobe and Microsoft, check out these blog posts from security vendors Qualys and Shavlik. And as ever, if you encounter any problems downloading or installing any of the updates mentioned above please leave a note about your experience in the comments below.

Google’s Post-Quantum Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/googles_post-qu.html

News has been bubbling about an announcement by Google that it’s starting to experiment with public-key cryptography that’s resistant to cryptanalysis by a quantum computer. Specifically, it’s experimenting with the New Hope algorithm.

It’s certainly interesting that Google is thinking about this, and probably okay that it’s available in the Canary version of Chrome, but this algorithm is by no means ready for operational use. Secure public-key algorithms are very hard to create, and this one has not had nearly enough analysis to be trusted. Lattice-based public-key cryptosystems such as New Hope are particularly subtle — and we cryptographers are still learning a lot about how they can be broken.

Targets are important in cryptography, and Google has turned New Hope into a good one. Consider this an opportunity to advance our cryptographic knowledge, not an offer of a more-secure encryption option. And this is the right time for this area of research, before quantum computers make discrete-logarithm and factoring algorithms obsolete.

Adobe Update Plugs Flash Player Zero-Day

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/06/adobe-update-plugs-flash-player-zero-day/

Adobe on Thursday issued a critical update for its ubiquitous Flash Player software that fixes three dozen security holes in the widely-used browser plugin, including at least one vulnerability that is already being exploited for use in targeted attacks.

The latest update brings Flash to v. 22.0.0.192 for Windows and Mac users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually check for updates in Chrome and restart the browser to get the latest Flash version).

For some reason that probably has nothing to do with security, Adobe has decided to stop distributing direct links to its Flash Player software. According to the company’s Flash distribution page, on June 30, 2016 Adobe will decommission direct links to various Flash Player downloads. This will essentially force Flash users to update the program using its built-in automatic updates feature (which sometimes takes days to notice a new security update is available), or to install the program from the company’s Flash Home page — a download that currently bundles McAfee Security Scan Plus and a product called True Key by Intel Security.

Anything that makes it less likely users will update Flash seems like a bad idea, especially when we’re talking about a program that often needs security fixes more than once a month.

YouTube Threatens Legal Action Against Video Downloader

Post Syndicated from Ernesto original https://torrentfreak.com/youtube-threatens-legal-action-video-downloader-160530/

With over a billion users, YouTube is the largest video portal on the Internet.

Every day users watch hundreds of millions of hours of video on the site, and for many it’s a prime source to enjoy music as well.

While YouTube is a blessing to thousands of content creators, there are also concerns among rightsholders. Music labels in particular are not happy with the fact that music videos can be easily downloaded from the site with help from external services.

To address the problem YouTube is contacting these third party sites, urging them to shut down this functionality. Most recently, YouTube’s legal team contacted the popular download service TubeNinja.

“It appears from your website and other marketing materials that TubeNinja is designed to allow users to download content from YouTube,” the email from YouTube’s legal team reads.

According to YouTube the video downloader violates the terms of service (ToS) of both the site and the API. Among other things, YouTube’s ToS prohibits the downloading of any video that doesn’t have a download link listed on the site.

Later, Google’s video service adds that if the site owner continues to operate the service this “may result in legal consequences.”

Email from YouTube’s legal team


Despite the threatening language, TubeNinja owner Nathan doesn’t plan to take the functionality offline. He informed YouTube that his service doesn’t use YouTube’s API and says that it’s the responsibility of his users to ensure that they don’t violate the ToS of YouTube and TubeNinja.

“Our own ToS clearly states that the user is responsible for the legitimacy of the content they use our service for,” Nathan tells us.

TubeNinja doesn’t believe that YouTube has a very strong case and Nathan has asked the company for clarification. He also mentions that Google’s very own Chrome Web Store lists many extensions that offer the exact same functionality.

“They don’t even seem to enforce removal of Chrome plugins that enable users to do the exact same thing,” Nathan says.

“Also the fact that services like Savefrom, Keepvid, clipconverter etc have been around since 2008, we find it hard to believe that there is any legal case at all. Kind of like suing a maker of VHS-recorders for users taping the television broadcast,” he adds.

This isn’t the first time that YouTube has taken action against download services. The site has gone after similar sites in the past.

In 2012 Google went after Youtube-mp3.org with a message similar to the one TubeNinja received, but despite these efforts the site remains one of the most used conversion tools with millions of visitors per day.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Under Construction, our PICO-8 game

Post Syndicated from Eevee original https://eev.ee/blog/2016/05/25/under-construction-our-pico-8-game/

Mel and I made a game!

We’d wanted to make a small game together for a while. Last month’s post about embedding Lua reminded me of the existence of the PICO-8, a “fantasy console” with 8-bit-ish limitations and built-in editing tools. Both of us have a bad habit of letting ambitions spiral way out of control, so “built-in limitations” sounded pretty good to me. I bought the console ($15, or free with the $20 Voxatron alpha) on a whim and started tinkering with it.

The result: Under Construction!

pico-8 cartridge

You can play in your very own web browser, assuming you have a keyboard. Also, that image is the actual cartridge, which you can save and play directly if you happen to have PICO-8. It’s also in the PICO-8 BBS.

(A couple people using Chrome on OS X have reported a very early crash, which seems to be a bug outside of my control. Safari works, and merely restarting Chrome has fixed it for at least one person.)

I don’t have too much to say about the game itself; hopefully, it speaks for itself. If not, there’s a little more on its Floraverse post.

I do have some things to say about making it. Also I am really, really tired, so apologies if this is even more meandering than usual.

The PICO-8

You can get a quick idea of what the console is all about from the homepage. I say “console”, but of course, there’s no physical hardware — the PICO-8 is a game engine with some severe artificial restrictions. It’s kind of like an emulator for a console that never existed.

Feel free to pore over the manual, but the most obvious limitations are:

  • Screen resolution of 128×128, usually displayed scaled up, since that’s microscopic on modern monitors.

  • 16-color fixed palette. The colors you see in the screenshots are the only colors you can use, period.

  • A spritesheet with enough room for 256 8×8 sprites. (You can make and use larger sprites, or use the space for something else entirely if you want, but the built-in tools assume you generally want that sprite size.)

  • A 128×64 map, as measured in tiles. The screen is 16×16 tiles, so that’s enough space for 32 screenfuls.

  • Alas! Half of the sprite space and half of the map space are shared, so you can’t actually have both 256 sprites and 32 map screens. You can have 256 sprites and 16 map screens, or 128 sprites and 32 map screens (which is what we did), or split the shared space some other way.

  • 64 chiptuney sound effects, complete with a tiny tracker for creating them.

  • 64 music tracks, built out of loops of up to four sound effects. There are only four sound channels total, so having four-channel background music means you can’t play any sound effects on top of it.

The programming language is a modified Lua, which comes with its own restrictions, but I’ll get to those later.

The restrictions are “carefully chosen to be fun to work with, [and] encourage small but expressive designs”. For the most part, they succeeded.

Our process

I bought PICO-8 at the beginning of the month. I spent a few hours a day over the next several days messing around, gradually producing a tiny non-game I called “eeveequest” and tweeting out successive iterations. I added the basics as they came to mind, without any real goal: movement, collision, jumping, music, sound effects, scrolling, camera movement.

The PICO-8 did let me create something with all the usual parts of a game in a matter of hours, and that’s pretty cool. I’ve never made music before, save for an hour or two trying to get audio working with LMMS, but I even managed a simple tune and some chimes here.

Meanwhile, Mel was independently trying it out, drawing sprites and populating a map. I copied my movement code over to their cartridge so we could walk around in this little world.

Then, uh, they gave me an avalanche of text describing what they wanted, and I vanished from the mortal realm for ten days while I set about making it happen.

Look, I didn’t say this was a good process. Our next endeavor should be a little more balanced, since we won’t be eight hours apart, and a good chunk of engine code is already written.

One particularly nice thing: PICO-8 cartridges are very simple text files with the code at the top. I eventually migrated to writing code in vim rather than the (extremely simple) built-in code editor, and if Mel had been working on the map at the same time, I could just copy-paste everything except the code into my own cart. It would play decently well with source control, but for a two-person project where I’m the only one editing the code, I couldn’t really justify inflicting git on a non-programmer. We did have one minor incident where a few map changes were lost, but for the most part, it worked surprisingly well for collaborating between a programmer and an artist.

There was a lot more back-and-forth towards the end, once the bulk of the code was written (and Mel was no longer busy selling at three conventions), when the remaining work was subtler design issues. That was probably the most fun I had.

Programming the PICO-8

I picked on Lua a bit in that post last month. Having now worked with it for two solid weeks, I regret what I said. Because now I really want to say it all again, more forcefully. Silently ignoring errors (including division by zero!) and having no easy way to print values for debugging are my top two least favorite programming language “features”.

Plenty of Lua is just plain weird, but forgiveable. Those two things make it pretty aggravating. It’s not even for any good reason — Lua clearly has an error-reporting mechanism, so it’s entirely capable of having division by zero be an error. And half the point of a dynamic runtime is that you can easily examine values at runtime, yet Lua makes doing that as difficult as possible. Argh.

PICO-8’s Lua

The PICO-8 uses a slightly modified Lua 5.2. The documentation isn’t precise about what parts of Lua are still available, so I had some minor misadventures, like thinking varargs weren’t supported because the arg global was missing. (Turns out varargs work fine, but Lua 5.2 changed the syntax and got rid of the clumsy global.)

If it weren’t yet obvious, I’m no Lua expert. The most obvious differences to me are:

  • Numbers are signed 16.16 fixed-point, rather than Lua’s double-precision floats. (This has the curious side effect that dividing by zero produces 32768.) I have a soft spot for fixed-point! It’s nice sometimes to have a Planck length that doesn’t depend on where you are in the game world.

  • Most of the standard library is missing. There’s no collectgarbage, error, ipairs, pcall, tostring, unpack, or other more obscure stuff. The bit32, debug, math, os, string, and table libraries are gone. The _G superglobal is gone.

  • Several built-ins have been changed or replaced with alternatives: all iterates over a sequence-like table; pairs iterates over all keys and values in a table; print now prints to a particular screen coordinate. A handful of substitute math and string functions are built in. There are functions for bit operations, which I guess were in the bit32 library.

  • There are some mutating assignment operators: +=, -=, *=, /=, %=. Lua doesn’t have these.

The PICO-8 does still have metatables and coroutines. Metatables were a blessing. Coroutines are nice, but I never came up with a good excuse to use them.

Writing the actual game stuff

Each of the PICO-8’s sprites has a byte’s worth of flags — eight color-coded toggles that can be turned on or off. They don’t mean anything to the console, and you’re free to use them however you want in your game.

I decided early on, even for EeveeQuest™, to use one of the flags to indicate a block was solid. It’s the most obvious approach, and it turned out to have the happy side effect that Mel could change the physics of the game world without touching the code at all. I ended up using five more flags to indicate which acts a tile should appear in.

I love rigging game engines so that people can modify as much stuff as possible without touching code. I already prefer writing code in a more declarative style, and this is even better: the declarations are out of code entirely and in a graphical tool instead.


Game code is generally awful, so I did my best to keep things tidy. First, some brief background.

A game has two major concerns: it has to simulate the world, and it has to draw that world to the screen. The easiest and most obvious approach, then, is to alternate those steps: do a single update pass (where everyone takes one step, say), then draw, then do another update, and so on. One round of updating and drawing is a frame, and the length of time it takes is a tic. If you want your game to run at 60 fps, then a tic is 1/60 seconds, and you have to be sure you can both update once and draw once in that amount of time.

You get into trouble when drawing to the screen starts to take long. Say it takes two tics. Now not only is the display laggy, but the actual game world is running at half speed, because you only run an update pass every two tics instead of every one.

The PICO-8 is not the kind of platform you might expect to have this problem, but nonetheless it offers a very simple solution. It asks your code to update, then it asks your code to draw, separately. If the console detects that the game is running too slowly to hit its usual 30 fps, it’ll automatically slow down its framerate to 15fps, but it’ll ask you to update twice between draws. (Put differently, it skips every other draw.)

This turned out to be handy in act 4, where the fog effect is sometimes a little too intensive. The framerate drops, but the world still moves along at the right speed, so it just looks like the animation isn’t quite as smooth.
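In pseudocode, the general shape of that trick looks something like this rough Python sketch (my own illustration, not PICO-8’s actual internals):

    import time

    def run(update, draw, fps=30):
        # Time "owed" to the simulation accumulates; the world always
        # advances in whole tics, and a slow frame causes extra updates
        # (i.e., skipped draws) rather than a slow-motion world.
        tic = 1 / fps
        acc = 0.0
        last = time.monotonic()
        while True:
            now = time.monotonic()
            acc += now - last
            last = now
            while acc >= tic:
                update()
                acc -= tic
            draw()
            time.sleep(max(0.0, tic - (time.monotonic() - last)))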

I copied this approach all the way down. I have a simple object called a “scene”, which is what you might also call a “screen”: the title screen, playable area, text between acts, and ending credits are all different scenes. The playable scene has a handful of layers, each of which updates and then draws like a stack.

The map is kind of interesting. The PICO-8 has a function for drawing an entire block of the map to the screen, which works pretty well for static stuff like background tiles and solid walls. There’s no built-in support for animation, though, and certainly nothing for handling player movement or tiles that only sometimes exist.

So I made an “actor” type, which can be anything that wants to participate in the usual update/draw cycle. There are three major kinds of actor:

  • Decor shouldn’t move and doesn’t draw itself; instead, it edits its own position on the map. This is used for all the animated sprites, as well as a few static objects like radios.

  • Mobs aren’t on the map at all and can move around freely, possibly colliding with parts of the map and each other. Only the player and the rocks are mobs; each additional one makes collision detection significantly more expensive. (I never got around to adding a blockmap.)

  • Transient actors don’t actually update or draw, and in fact they don’t permanently exist at all. They’re only created temporarily, when the player or another mob bumps into a map tile. When you walk into a wall, for example, the game creates a temporary wall actor at that position that checks its sprite’s flags to see if it blocks you. This is how the spikes work, too; they don’t need to update every tic, because they don’t actually do anything on their own, but they can still have custom behavior when you bump into them. They also have a custom collision box.

You can declare that a specific sprite on the map will always be turned into a particular kind of actor, and give it its own behavior. Virtually everything works this way, even the player, which means you can move the mole sprite anywhere on the map and it’ll automatically become the player’s start point.

You can also define an animation, and it’ll automatically be turned into a type of decor that doesn’t do anything except edit the map every few frames to animate itself.


By far, the worst thing to deal with was collision. I must’ve rewritten it four or five times.

I’ve written collision code before, but apparently I’d never done it from complete scratch for a world with free movement and polished it to a mirror sheen.

Maybe I’m missing something here, but the usual simple approach to collision is terrible. It tends to go like this.

  1. Move the player some amount in the direction they’re moving.
  2. See if they’re now overlapping anything.
  3. If they are, move them back to where they started.

That’s some hot garbage. What if you’re falling, and you’re a pixel above the ground, but you’re moving at two pixels per tic? The game will continually try to move you two pixels, see you’re stuck in the ground, and then put you back where you started. You’ll hover a pixel above the ground forever.

I tried doing some research, by which I mean I googled for three minutes and got sick of finding “tutorials” that didn’t go any further than this useless algorithm, so I just beat on it until I got what I wanted. My approach:

  1. Get the player’s bounding box. Extend that box in the direction they’re moving, so it covers both their old and new position.
  2. Get a list of all the actors that overlap that box, including transient actors for every tile. (This is the main place they’re used, so I can handle them the same way as any other actor.)
  3. Sort all those actors in the order the player will hit them, starting with the closest.
  4. Look at the first actor. Move the player towards it until they touch it, then stop. If the actor has any collision behavior, activate it now.
  5. If the actor is solid, you’re done, at least in this direction; drop the actor’s velocity to zero. Otherwise, repeat with the next actor.
  6. If there’s any movement left at the end, tack that on too, since nothing else can be in the way.
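For illustration, here’s a rough Python sketch of those steps along a single axis. (Real code also needs the y overlap test, the other axis, and the velocity-zeroing; Actor and its fields are hypothetical names, not anything from PICO-8.)

    # Returns how far the player may actually move along x; the caller
    # applies the movement and zeroes velocity if the result came up short.
    def sweep_x(player, actors, dx):
        if dx == 0:
            return 0
        # 1. Extend the player's box to cover old and new positions.
        lo = min(player.left + dx, player.left)
        hi = max(player.right + dx, player.right)
        # 2-3. Gather actors in the swept region ahead of us, closest first.
        if dx > 0:
            hits = sorted((a for a in actors if player.right <= a.left < hi),
                          key=lambda a: a.left)
        else:
            hits = sorted((a for a in actors if lo < a.right <= player.left),
                          key=lambda a: -a.right)
        for actor in hits:
            actor.on_collide(player)  # 4. bump behavior fires on contact
            if actor.solid:           # 5. solid: move up to it and stop
                return (actor.left - player.right if dx > 0
                        else actor.right - player.left)
        # 6. Nothing solid in the way; take the whole step.
        return dx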

I had some fun bugs along the way, like the one where the collision detection was almost perfect, unless you hit a solid block exactly on its corner, in which case you would pass right through it. Or the multiple times you couldn’t walk along the ground because you were colliding with the corner of the next tile of ground.

This seems to be pretty rock solid now, though it’s a bit slow when there are more than half a dozen or so mobs wandering around. It works well enough for this game.


The runner-up for awkward things to deal with: time. Even a simple animation is mildly annoying. It’s not that the math is difficult; it’s more that you can’t look at the math and immediately understand what it’s doing.

I wished numerous times that I had a more straightforward way to say “fade this in, wait 10 tics, then fade it back out”, but I never had the time to sit down and come up with something nice. I considered using coroutines for this, since they’re naturally sleep-able, but I don’t think they’d work so well for anything beyond the most trivial operations. cocos2d has a concept of “actions”, where you can animate a change over time and even schedule several changes to happen in sequence; maybe I’ll give something like that a try next time.

The PICO-8’s limits

Working within limitations has a unique way of inspiring creative solutions. It can even help you get started in a medium you’ve never tried before. I know. It’s the whole point of the console. I only got around to making some music because I was handed a very limited tracker.

Sixteen colors? Well, that’s okay; now I don’t have to worry about picking my own colors, which is something I’m not terribly good at.

Small screen and sprite size? That puts a cap on how good the art can possibly be expected to look anyway, so I can make simple sprites fairly easily and have a decent-looking game.

Limited map space? Well… now we’re getting into fuzzier territory. A limit on map space is less of a creative constraint and more of a hard cap on how much game you can fit in your game. At least you can make up for it in a few ways with some creative programming.

Hmm. About that.

There are three limits on the amount of code you can have:

  • No more than 8192 tokens. A token is a single identifier, string, number, or operator. A pair of brackets counts as one token. Commas, periods, colons, and the local and end keywords are free.
  • No more than 65536 bytes. That’s a pretty high limit, and I think it really only exists to prevent you from cramming ludicrous amounts of data into a cartridge via strings.
  • No more than 15360 bytes, after compression. More on that in a moment.

I thought 8192 sounded like plenty. Then I went and wrote a game. Our final tally, for my 2779 lines of Lua:

  • 8183/8192 tokens
  • 56668/65536 bytes
  • 22574/15360 compressed bytes

That’s not a typo; I was well over the compressed limit. I only even found out about the compressed limit a few days ago — the documentation never actually mentions it, and the built-in editor only tracks code size and token count! It came as a horrible surprise. For the released version of the game, I had to delete every single comment, remove every trace of debugging code, and rename several local variables. I was resorting to deleting extra blank lines before I finally got it to fit.

Trying to squeeze code into a compressed limit is maddening. Most of the obvious ways to significantly shorten code involve removing duplication, but duplication is exactly what compression algorithms are good at dealing with! Several times I tried an approach that made the code shorter in both absolute size and token count, only to find that the compressed size had grown slightly larger.

As for tokens, wow. I’ve never even had to think in tokens before. Here are some token costs from the finished game.

  • 65 — updating the camera to follow the player
  • 95 — sort function
  • 148 — simple vector type
  • 198 — function to print text on the screen aligned to a point
  • 201 — debugging function that prints values to stdout
  • 256 — perlin noise implementation
  • 261 — screen fade
  • 280 — fog effect from act 4
  • 370 — title screen
  • 690 — credits in their entirety
  • 703 — physics and collision detection

It adds up pretty quickly. We essentially sliced off two entire endings, because I just didn’t have any space left.

Incredibly, the token limitation used to be worse. I went searching for people talking about this, and I mostly found posts from several releases ago, when the byte limit was 32K and there weren’t any freebies in the token count — parentheses counted as two, dots counted as one.

This kind of sucks, for several reasons.

  • The most obvious way to compensate for the graphical limitations is with code: procedural generation, effects, and the like. Both of those eat up tokens fast. I spent a total of 536 tokens, 6.5% of my total space, just on adding a fog overlay.

  • I’m effectively punished for organizing code. Some 13% of my token count goes towards dotted names, i.e., foo.bar instead of foo_bar.

  • It’s just not the kind of limitation that really inspires creativity. If you limit the number of sprites I have, okay, I can at least visually see I only have so much space and adjust accordingly. I don’t have any sense for how many tokens I’d need to write some code. If I hit the limit and the game isn’t done, that’s just a wall. I don’t have a whole lot of creative options here: I can waste time playing code golf (which here is more of an annoying puzzle than a source of inspiration), or I can delete a chunk of the game.

I looked at the most popular cartridges, and a couple depressing themes emerged. Several games are merely demos of interesting ideas that could be expanded into a game, except that the ideas alone already take half of the available tokens. Quite a few have resorted to a strong arcade vibe, with little distinction between “levels” except that there are more or different enemies. There’s a little survival crafting game which seems like it could be interesting, but it only has four objects you can build and it’s already out of tokens. Very few games have any sort of instructions (which would eat precious tokens!), and several of them have left me frustrated from the outset, until I read through the forum thread in search of someone explaining what I’m supposed to be doing.

And keep in mind, you’re starting pretty much from scratch. The most convenience the PICO-8 affords is copying a whole block of the map to the screen at once. Everything else is your problem, and it all has to fit under the cap.

The PICO-8 is really appealing overall: it has little built-in tools for creating all the resources, it’s pretty easy to do stuff with, its plaintext cartridge format makes collaboration simple, its resource limits are inspiring, it can compile to a ready-to-go browser page with no effort at all, it can even spit out short gameplay GIFs with the press of a button. And yet I’m a little apprehensive about trying anything very ambitious, now that I know how little code I’m allowed to have. The relatively small map is a bit of a shame, but the code limit is really killer.

I’ll certainly be making a few more things with this. At the same time, I can’t help but feel that some of the potential has been squeezed out of it.

Adobe, Microsoft Push Critical Updates

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/05/adobe-microsoft-push-critical-updates-2/

Adobe has issued security updates to fix weaknesses in its PDF Reader and Cold Fusion products, while pointing to an update to be released later this week for its ubiquitous Flash Player browser plugin. Microsoft meanwhile today released 16 update bundles to address dozens of security flaws in Windows, Internet Explorer and related software.

Microsoft’s patch batch includes updates for “zero-day” vulnerabilities (flaws that attackers figure out how to exploit before the software maker does) in Internet Explorer (IE) and in Windows. Half of the 16 patches that Redmond issued today earned its “critical” rating, meaning the vulnerabilities could be exploited remotely with no help from the user, save perhaps clicking a link, opening a file or visiting a hacked or malicious Web site.

According to security firm Shavlik, two of the Microsoft patches tackle issues that were publicly disclosed prior to today’s updates, including bugs in IE and the Microsoft .NET Framework.

Anytime there’s a .NET Framework update available, I always uncheck those updates, install and reboot with the rest of the patches, and then install the .NET updates separately; I’ve had too many .NET update failures muddy the process of figuring out which update borked a Windows machine after a batch of patches to do otherwise, but your mileage may vary.

On the Adobe side, the pending Flash update fixes a single vulnerability that apparently is already being exploited in active attacks online. However, Shavlik says there appears to be some confusion about how many bugs are fixed in the Flash update.

“If information gleaned from [Microsoft’s account of the Flash Player update] MS16-064 is accurate, this Zero Day will be accompanied by 23 additional CVEs, with the release expected on May 12th,” Shavlik wrote. “With this in mind, the recommendation is to roll this update out immediately.”


Adobe says the vulnerability is included in Adobe Flash Player 21.0.0.226 and earlier versions for Windows, Macintosh, Linux, and Chrome OS, and that the flaw will be fixed in a version of Flash to be released May 12.

As far as Flash is concerned, the smartest option is probably to hobble or ditch the program once and for all — and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you use Adobe Reader to display PDF documents, you’ll need to update that, too. Alternatively, consider switching to another reader that is perhaps less targeted. Adobe Reader comes bundled with a number of third-party software products, but many Windows users may not realize there are alternatives, including some good free ones. For a time I used Foxit Reader, but that program seems to have grown more bloated with each release. My current preference is Sumatra PDF; it is lightweight (about 40 times smaller than Adobe Reader) and quite fast.

Finally, if you run a Web site that in any way relies on Adobe’s Cold Fusion technology, please update your software soon. Cold Fusion vulnerabilities have traditionally been targeted by cyber thieves to compromise countless online shops.

Chrome, Firefox and Safari Block Pirate Bay as “Phishing” Site

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-block-tpb-as-phishing-site-160507/

There’s a slight panic breaking out among Pirate Bay users, who are having a hard time accessing the site.

Over the past few hours Chrome, Firefox and Safari have started to block access to Thepiratebay.se due to reported security issues.

Instead of a page displaying the iconic pirate ship, visitors are presented with an ominous red warning banner.

“Deceptive site ahead: Attackers on Thepiratebay.se may trick you into doing something dangerous like installing software or revealing your personal information,” the Chrome warning reads.


Firefox users may encounter a similar banner, branding The Pirate Bay as a “web forgery” which may trick users into sharing personal information. This may lead to identity theft or other fraud.

“Web forgeries are designed to trick you into revealing personal or financial information by imitating sources you may trust. Entering any information on this web page may result in identity theft or other fraud,” the browser warns.


Google’s safebrowsing page for TPB currently lists the site as dangerous. It is likely that the current problems are related to issues caused by third-party advertisers. Just last week Malwarebytes reported that some ads were dropping ransomware through the site.

The issue appears to be limited to the desktop versions of most browsers, and not everyone is seeing the error pages yet.

This is not the first time browsers have flagged The Pirate Bay. The same issue has come up before, supposedly due to malicious advertisers.

The Pirate Bay team is aware of the issue and will probably resolve it sooner rather than later. Impatient or adventurous users who want to bypass the warning can do so by disabling their browser’s security warnings altogether in the settings, at their own risk of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Vulns are sparse, code is dense

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/05/vulns-are-sparse-code-is-dense.html

The question posed by Bruce Schneier is whether vulnerabilities are “sparse” or “dense”. If they are sparse, then finding and fixing them will improve things. If they are “dense”, then all this work put into finding/disclosing/fixing them is really doing nothing to improve things.

I propose a third option: vulns are sparse, but code is dense.

In other words, we can secure specific things, like OpenSSL and Chrome, by researching the heck out of them, finding vulns, and patching them. The vulns in those projects are sparse.

But, the amount of code out there is enormous, considering all software in the world. And it changes fast — adding new vulns faster than our feeble efforts at disclosing/fixing them.

So measured across all software, no, the security community hasn’t found any significant number of bugs. But when looking at critical software, like OpenSSL and Chrome, I think we’ve made great strides forward.

More importantly, let’s ignore the actual benefits/costs of fixing bugs for the moment. What all this effort has done is teach us about the nature of vulns. Critical software is written today in a vastly more secure manner than it was in the 1980s, 1990s, or even the 2000s. Windows, for example, is vastly more secure. Sure, others are still lagging (car makers, medical device makers), but they are quickly learning the lessons and catching up. Finding a vuln in an iPhone is hard — so hard that hackers can earn $1 million from the NSA for doing it, rather than stealing your credit card info. 15 years ago, the opposite was true. The NSA didn’t pay for Windows vulns because they fell out of trees, and hackers made more money from hacking your computer.

My point is this: the “are vulns sparse or dense?” framing puts a straitjacket around the debate. I see it from an orthogonal point of view.

‘Badlock’ Bug Tops Microsoft Patch Batch

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/04/badlock-bug-tops-microsoft-patch-batch/

Microsoft released fixes on Tuesday to plug critical security holes in Windows and other software. The company issued 13 patches to tackle dozens of vulnerabilities, including a much-hyped “Badlock” file-sharing bug that appears ripe for exploitation. Also, Adobe updated its Flash Player release to address at least two-dozen flaws — in addition to the zero-day vulnerability Adobe patched last week.

Source: badlock.org

The Windows patch that seems to be getting the most attention this month remedies seven vulnerabilities in Samba, a service used to manage file and print services across networks and multiple operating systems. This may sound innocuous enough, but attackers who gain access to a private or corporate network could use these flaws to intercept traffic, view or modify user passwords, or shut down critical services.

According to badlock.org, a Web site set up to disseminate information about the widespread nature of the threat that this vulnerability poses, we are likely to see active exploitation of the Samba vulnerabilities soon.

Two of the Microsoft patches address flaws that were disclosed prior to Patch Tuesday. One of them is included in a bundle of fixes for Internet Explorer. A critical update for the Microsoft Graphics Component targets four vulnerabilities, two of which have been detected already in exploits in the wild, according to Chris Goettl at security vendor Shavlik.

Just a reminder: If you use Windows and haven’t yet taken advantage of the Enhanced Mitigation Experience Toolkit, a.k.a. “EMET,” you should definitely consider it. I describe the basic features and benefits of running EMET in this blog post from 2014 (yes, it’s time to revisit EMET in a future post), but the gist of it is that EMET helps block or blunt exploits against known and unknown Windows vulnerabilities and flaws in third-party applications that run on top of Windows. The latest version, v. 5.5, is available here.

On Friday, Adobe released an emergency update for Flash Player to fix a vulnerability that is being actively exploited in the wild and used to foist malware (such as ransomware). Adobe updated its advisory for that release to include fixes for 23 additional flaws.

As I noted in last week’s piece on the emergency Flash patch, most users are better off hobbling or removing Flash altogether. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent version for Mac and Windows users is 21.0.0.213, and should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually restart Chrome to get the latest version).

Apple did not invent emoji

Post Syndicated from Eevee original https://eev.ee/blog/2016/04/12/apple-did-not-invent-emoji/

I love emoji. I love Unicode in general. I love seeing plain text become more expressive and more universal.

But, Internet, I’ve noticed a worrying trend. Both popular media and a lot of tech circles tend to assume that “emoji” de facto means Apple’s particular font.

I have some objections.

A not so brief history of emoji

The Unicode Technical Report on emoji also goes over some of this.

Emoji are generally traced back to the Japanese mobile carrier NTT DoCoMo, which in February 1999 released a service called i-mode that powered a line of wildly popular early smartphones. Its messenger included some 180 small pixel-art images you could type as though they were text, because they were text, encoded using unused space in Shift JIS.

(Quick background, because I’d like this to be understandable by a general audience: computers only understand numbers, not text, so we need a “character set” that lists all the characters you can type and what numbers represent them. So in ASCII, for example, a capital “A” is passed around as the number 65. Computers always deal with bytes, which can go up to 255, but ASCII only lists characters up to 127 — so everything from 128 to 255 is just unused space. Shift JIS is Japan’s equivalent to ASCII, and had a lot more unused space, and that’s where early emoji were put.)

Naturally, other carriers added their own variations. Naturally, they used different sets of images, but often in a different order, so the same character might be an apple on one phone and a banana on another. They came up with tables for translating between carriers, but that wouldn’t help if your friend tried to send you an image that your phone just didn’t have. And when these characters started to leak outside of Japan, they had no hope whatsoever of displaying as anything other than garbage.

This is kind of like how Shift JIS is mostly compatible with ASCII, except that for some reason it has the yen sign ¥ in place of the ASCII backslash \, producing hilarious results. Also, this is precisely the problem that Unicode was invented to solve.
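
To make the numbers concrete, here is a minimal sketch in JavaScript (runnable in any browser console); the specific characters are just examples:

    // ASCII assigns "A" the number 65; nothing above 127 is ASCII,
    // and that upper range is the space Shift JIS reused for its own
    // characters (including, eventually, emoji).
    'A'.codePointAt(0);   // 65
    '\\'.codePointAt(0);  // 92, the backslash Shift JIS displaced
    '¥'.codePointAt(0);   // 165, Unicode gives the yen sign its own number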

I’ll get back to all this in a minute, but something that’s left out of emoji discussions is that the English-speaking world was developing a similar idea. As far as I can tell, we got our first major exposure to graphical emoticons with the release of AIM 4.0 circa May 2000 and these infamous “smileys”:

Pixellated, 4-bit smileys from 2000

Even though AIM was a closed network where there was little risk of having private characters escape, these were all encoded as ASCII emoticons. That simple smiley on the very left would be sent as :-) and turned into an image on your friend’s computer, which meant that if you literally typed :-) in a message, it would still render graphically. Rather than being an extension to regular text, these images were an enhancement of regular text, showing a graphical version of something the text already spelled out. A very fancy ligature.
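
That substitution mechanism is worth spelling out, since it comes up again below. A rough sketch of the idea, with hypothetical image names:

    // Old-school IM smileys: the wire format stays plain ASCII and the
    // client swaps in a graphic at render time.
    const SMILEYS = { ':-)': 'smile.gif', ':-(': 'frown.gif' };
    function renderMessage(text) {
      // The alt attribute preserves the original text, so nothing is lost
      return text.replace(/:-[)(]/g, m => `<img src="${SMILEYS[m]}" alt="${m}">`);
    }
    renderMessage('hi :-)'); // 'hi <img src="smile.gif" alt=":-)">'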

Little ink has been spilled over this, but those humble 4-bit graphics became a staple of instant messaging, by which I mean everyone immediately ripped them off. ICQ, MSN Messenger, Yahoo! Messenger, Pidgin (then Gaim), Miranda, Trillian… I can’t name a messenger since 2003 that didn’t have smileys included. All of them still relied on the same approach of substituting graphics for regular ASCII sequences. That had made sense for AIM’s limited palette of faces, but during its heyday MSN Messenger included 67 graphics, most of them not faces. If you sent a smiling crescent moon to someone who had the graphics disabled (or used an alternative client), all they’d see was a mysterious (S).

So while Japan is generally credited as the source of emoji, the US was quite busy making its own mess of things.

Anyway, Japan had this mess of several different sets of emoji in common use, being encoded in several different incompatible ways. That’s exactly the sort of mess Unicode exists to sort out, so in mid-2007, several Google employees (one of whom was the co-founder of the Unicode Consortium, which surely helped) put together a draft proposal for adding the emoji to Unicode. The idea was to combine all the sets, drop any duplicates, and add to Unicode whatever wasn’t already there.

(Unicode is intended as a unification of all character sets. ASCII has \, Shift JIS has ¥, but Unicode has both — so an English speaker and a Japanese speaker can both use both characters without getting confused, as long as they’re using Unicode. And so on, for thousands of characters in dozens of character sets. Part of the problem with sending the carriers’ emoji to American computers was that the US was pretty far along in shifting everything to use Unicode, but the emoji simply didn’t exist in Unicode. Obvious solution: add them!)

Meanwhile, the iPhone launched in Japan in 2008. iOS 2.2, released in November, added the first implementation of emoji — but using SoftBank’s invented encoding, since they were only on one carrier and the characters weren’t yet in Unicode. A couple Apple employees jumped on the bandwagon around that time and coauthored the first official proposal, published in January 2009. Unicode 6.0, the first version to include emoji, was released in October 2010.

iPhones worldwide gained the ability to use its emoji (now mapped to Unicode) with the release of iOS 5.0 in October 2011.

Android didn’t get an emoji font at all until version 4.3, in July 2013. I’m at a loss for why, given that Google had proposed emoji in the first place, and Android had been in Japan since the HTC Magic in May 2009. It was even on NTT DoCoMo, the carrier that first introduced emoji! What the heck, Google.

The state of things

Consider this travesty of an article from last week. This Genius Theory Will Change the Way You Use the “Pink Lady” Emoji:

Unicode, creators of the emoji app, call her the “Information Desk Person.”

Oh, dear. Emoji aren’t an “app”, Unicode didn’t create them, and the person isn’t necessarily female. But the character is named “Information Desk Person”, so at least that part is correct.

It’s non-technical clickbait, sure. But notice that neither “Apple” nor the names of any of its platforms appear in the text. As far as this article and author are concerned, emoji are Apple’s presentation of them.

I see also that fileformat.info is now previewing emoji using Apple’s font. Again, there’s no mention of Apple that I can find here; even the page that credits the data and name sources doesn’t mention Apple. The font is even called “Apple Color Emoji”, so you’d think that might show up somewhere.

Telegram and WhatsApp both use Apple’s font for emoji on every platform; you cannot use your system font. Slack lets you choose, but defaults to Apple’s font. (I objected to Android Telegram’s jarring use of a non-native font; the sole developer explained simply that they like Apple’s font more, and eventually shut down the issue tracker to stop people from discussing it further.)

The latest revision of Google’s emoji font even made some questionable changes, seemingly just for the sake of more closely resembling Apple’s font. I’ll get into that a bit later, but suffice to say, even Google is quietly treating Apple’s images as a de facto standard.

The Unicode Consortium will now let you “adopt” a character. If you adopt an emoji, the certificate they print out for you uses Apple’s font.

It’s a little unusual that this would happen when Android has been more popular than the iPhone almost everywhere, even since iOS first exposed its emoji keyboard worldwide. Also given that Apple’s font is not freely-licensed (so you’re not actually allowed to use it in your project), whereas Google’s whole font family is. And — full disclosure here — quite a few of them look to me like they came from a disquieting uncanny valley populated by plastic people.

Terrifying.

Granted, the iPhone did have a 20-month head start at exposing the English-speaking world to emoji. Plus there’s that whole thing where Apple features are mysteriously assumed to be the first of their kind. I’m not entirely surprised that Apple’s font is treated as canonical; I just have some objections.

Some objections

I’m writing this in a terminal that uses Source Code Pro. You’re (probably) reading it on the web in Merriweather. Miraculously, you still understand what all the letters mean, even though they appear fairly differently.

Emoji are text, just like the text you’re reading now, not too different from those goofy :-) smileys in AIM. They’re often displayed with colorful graphics, but they’re just ideograms, similar to Egyptian hieroglyphs (which are also in Unicode). It’s totally okay to write them a little differently sometimes.

This is the only reason emoji are in Unicode at all — the only reason we have a universal set of little pictures. If they’d been true embedded images, there never would have been any reason to turn them into characters.

Having them as text means we can use them anywhere we can use text — there’s no need to hunt down a graphic and figure out how to embed it. You want to put emoji in filenames, in source code, in the titlebar of a window? Sure thing — they’re just text.

Treating emoji as though they are a particular set of graphics rather defeats the point. At best, it confuses people’s understanding of what the heck is going on here, and I don’t much like that.

I’ve encountered people who genuinely believed that Apple’s emoji were some kind of official standard, and anyone deviating from them was somehow wrong. I wouldn’t be surprised if a lot of lay people believed Apple invented emoji. I can hardly blame them, when we have things like World Emoji Day, based on the date on Apple’s calendar glyph. This is not a good state of affairs.


Along the same lines, nothing defines an emoji, as I’ve mentioned before. Whether a particular character appears as a colored graphic is purely a property of the fonts you have installed. You could have a font that rendered all English text in sparkly purple letters, if you really wanted to. Or you could have a font that rendered emoji as simple black-and-white outlines like other characters — which is in fact what I have.

Well… that was true, but mere weeks before that post was published, the Unicode Consortium published a list of characters with a genuine “Emoji” property.

But, hang on. That list isn’t part of the actual Unicode database; it’s part of a “technical report”, which is informative only. In fact, if you look over the Unicode Technical Report on emoji, you may notice that the bulk of it is merely summarizing what’s being done in the wild. It’s not saying what you must do, only what’s already been done. The very first sentence even says that it’s about interoperability.

If that doesn’t convince you, consider that the list of “emoji” characters includes # and *. Yes, the ASCII characters on a regular qwerty keyboard. I don’t think this is a particularly good authoritative reference.

Speaking of which, the same list also contains ©, ®, and ™ — and Twitter’s font has glyphs for all three of them. They aren’t used on web Twitter, but if you naïvely dropped twemoji into your own project, you’d see these little superscript characters suddenly grow to fit large full-width squares. (Worse, all three of them are a single solid color, so they’ll be unreadable on a dark background.) There’s an excellent reason for this, believe it or not: Shift JIS doesn’t contain any of these characters, so the Japanese carriers faked it by including them as emoji.

Anyway, the technical report proper is a little more nuanced, breaking emoji into a few coarse groups based on who implements them. (Observe that it uses Apple’s font for all 1282 example emoji.)

I care about all this because I see an awful lot of tech people link this document as though it were a formal specification, which leads to a curious cycle.

  1. Apple does a thing with emoji.
  2. Because Apple is a major vendor, the thing it did is added to the technical report.
  3. Other people look at the report, believe it to be normative, and also do Apple’s thing because it’s “part of Unicode”.
  4. (Wow, Apple did this first again! They’re so ahead of the curve!)

After I wrote the above list, I accidentally bumbled upon this page from emojipedia, which states:

In addition to emojis approved in Unicode 8.0 (mid-2015), iOS 9.1 also includes emoji versions of characters all the way back to Unicode 1.1 (1993) that have retroactively been deemed worthy of emoji presentation by the Unicode Consortium.

That’s flat-out wrong. The Unicode Consortium has never deemed characters worthy of “emoji presentation” — it’s written reports about the characters that vendors like Apple have given colored glyphs. This paragraph congratulates Apple for having an emoji font that covers every single character Apple decided to put in their emoji font!

This is a great segue into what happened with Google’s recent update to its own emoji font.

Google’s emoji font changes

Android 6.0.1 was released in December 2015, and contained a long-overdue update to its emoji font, Noto Color Emoji. It added newly-defined emoji like 🌭 U+1F32D HOT DOG and 🦄 U+1F984 UNICORN FACE, so, that was pretty good.

ZWJ sequences

How is this a segue, you ask? Well, see, there are these curious chimeras called ZWJ sequences — effectively new emoji created by mashing multiple emoji together with a special “glue” character in the middle. Apple used (possibly invented?) this mechanism to create “diverse” versions of several emoji like 💏 U+1F48F KISS. The emoji for two women kissing looks like a single image, but it’s actually written as seven characters: woman + heart + kiss + woman with some glue between them. It’s a lot like those AIM smileys, only not ASCII under the hood.
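
Here is a sketch of that sequence in JavaScript; the code-point count matches the “seven characters” above (some implementations also insert a variation selector, making it eight):

    // U+200D ZERO WIDTH JOINER is the "glue" character.
    const kiss = '\u{1F469}\u200D\u2764\u200D\u{1F48B}\u200D\u{1F469}';
    // WOMAN + ZWJ + HEAVY BLACK HEART + ZWJ + KISS MARK + ZWJ + WOMAN
    [...kiss].length;  // 7 code points
    console.log(kiss); // one glyph of two women kissing, on supporting fonts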

So, that’s fine, it makes sense, I guess. But then Apple added a new chimera emoji: a speech bubble with an eyeball in it, written as eye + speech bubble. It turned out to be some kind of symbol related to an anti-bullying campaign, dreamed up in conjunction with the Ad Council (?!). I’ve never seen it used and never heard about this campaign outside of being a huge Unicode nerd.

Lo and behold, it appeared in the updated font. And Twitter’s font. And Emoji One.

Is this how we want it to work? Apple is free to invent whatever it wants by mashing emoji together, and everyone else treats it as canonical, with no resistance whatsoever? Apple gets to deliberately circumvent the Unicode character process?

Apple appreciated the symbol, too. “When we first asked about bringing this emoji to the official Apple keyboard, they told us it would take at least a year or two to get it through and approved under Unicode,” says Wittmark. The company found a way to fast-track it, she says, by combining two existing emoji.

Maybe this is truly a worthy cause. I don’t know. All I know is that Apple added a character (designed by an ad agency) basically on a whim, and now it’s enshrined forever in Unicode documents. There doesn’t seem to be any real incentive for them to not do this again. I can’t wait for apple + laptop to become the MacBook Pro™ emoji.

(On the other hand, I can absolutely get behind ninja cat.)

Gender diversity

I take issue with using this mechanism for some of the “diverse” emoji as well. I didn’t even realize the problem until Google copied Apple’s implementation.

The basic emoji in question are 💏 U+1F48F KISS and 💑 U+1F491 COUPLE WITH HEART. The emoji technical report contains the following advice, emphasis mine:

Some multi-person groupings explicitly indicate gender: MAN AND WOMAN HOLDING HANDS, TWO MEN HOLDING HANDS, TWO WOMEN HOLDING HANDS. Others do not: KISS, COUPLE WITH HEART, FAMILY (the latter is also non-specific as to the number of adult and child members). While the default representation for the characters in the latter group should be gender-neutral, implementations may desire to provide (and users may desire to have available) multiple representations of each of these with a variety of more-specific gender combinations.

This reinforces the document’s general advice about gender which comes down to: if the name doesn’t explicitly reference gender, the image should be gender-neutral. Makes sense.

Here’s how 💏 U+1F48F KISS and 💑 U+1F491 COUPLE WITH HEART look, before and after the font update.

Pictured: straight people, ruining everything

Before, both images were gender-agnostic blobs. Now, with the increased “diversity”, you can choose from various combinations of genders… but the genderless version is gone. The default — what you get from the single characters on their own, without any chimera gluing stuff — is heteromance.

In fact, almost every major font does this for both KISS and COUPLE WITH HEART, save for Microsoft’s. (HTC’s KISS doesn’t, but only because it doesn’t show people at all.)

Google’s font has changed from “here are two people” to “heterosexuals are the default, but you can use some other particular combinations too”. This isn’t a step towards diversity; this is a step backwards. It also violates the advice in the very document that’s largely based on “whatever Apple and Google are doing”, which is confounding.

Sometimes, Apple is wrong

It also highlights another problem with treating Apple’s font as canonical, which is that Apple is occasionally wrong. I concede that “wrong” is a fuzzy concept here, but I think “surprising, given the name of the character” is a reasonable definition.

In that sense, everyone but Microsoft is wrong about 💏 U+1F48F KISS and 💑 U+1F491 COUPLE WITH HEART, since neither character mentions gender.

You might expect 🙌 U+1F64C PERSON RAISING BOTH HANDS IN CELEBRATION and 🙏 U+1F64F PERSON WITH FOLDED HANDS to depict people, but Apple only shows a pair of hands for both of them. This is particularly bad with PERSON WITH FOLDED HANDS, which just looks like a high five. Almost every other font has followed suit (CELEBRATION, FOLDED HANDS). Google used to get this right, but changed it with the update.

Celebration changed to pat-a-cake, for some reason

👿 U+1F47F IMP suggests, er, an imp, especially since it’s right next to other “monster” characters like 👾 U+1F47E ALIEN MONSTER and 👹 U+1F479 JAPANESE OGRE. Apple appears to have copied its own 😈 U+1F608 SMILING FACE WITH HORNS from the emoticons block and changed the smile to a frown, producing something I would never guess is meant to be an imp. Google followed suit, just like most other fonts, resulting in the tragic loss of one of my favorite Noto glyphs and the only generic representation of a demon.

This is going to wreak havoc on all my tweets about Doom

👯 U+1F46F WOMAN WITH BUNNY EARS suggests a woman. Apple has two, for some reason, though that hasn’t been copied quite as much.

⬜ U+2B1C WHITE LARGE SQUARE needs a little explanation. Before Unicode contained any emoji (several of which are named with explicit colors), quite a few character names used “black” to mean “filled” and “white” to mean “empty”, referring to how the character would look when printed in black ink on white paper. “White large square” really means the outline of a square, in contrast to ⬛ U+2B1B BLACK LARGE SQUARE, which is solid. Unfortunately, both of these characters somehow ended up in virtually every emoji font, despite not being in the original lists of Japanese carriers’ emoji… and everyone gets it wrong, save for Microsoft. Every single font shows a solid square colored white. Except Google, who colors it blue. And Facebook, who has some kind of window frame, which it colors black for the BLACK glyph.

When Apple screws up and doesn’t fix it, everyone else copies their screw-up for the sake of compatibility — and as far as I can tell, the only time Apple has ever changed emoji is for the addition of skin tones and when updating images of their own products. We’re letting Apple set a de facto standard for the appearance of text, even when they’re incorrect, because… well, I’m not even sure why.

Hand gestures

Returning briefly to the idea of diversity, Google also updated the glyphs for its dozen or so “hand gesture” emoji:

Hmm I wonder where they got the inspiration for these

They used to be pink outlines with a flat white fill, but now are a more realistic flat style with the same yellow as the blob faces and shading. This is almost certainly for the sake of supporting the skin tone modifiers later, though Noto doesn’t actually support them yet.

The problem is, the new ones are much harder to tell apart at a glance! The shadows are very subtle, especially at small sizes, so they might as well all be yellow splats.

I always saw the old glyphs as abstract symbols, rather than a crop of a person, even a cartoony person. That might be because I’m white as hell, though. I don’t know. If people of color generally saw them the same way, it seems a shame to have made them all less distinct.

It’s not like the pink and white style would’ve prevented Noto from supporting skin tones in the future, either. Nothing says an emoji with a skin tone has to look exactly like the same emoji without one. The font could easily use the more abstract symbols by default, and switch to this more realistic style when combined with a skin tone.

💩

And finally, some kind of tragic accident has made 💩 U+1F4A9 PILE OF POO turn super goofy and grow a face.

What even IS that now?

Why? Well, you see, Apple’s has a face. And so does almost everyone else’s, now.

I looked at the original draft proposal for this one, and SoftBank (the network the iPhone first launched on in Japan) also had a face for this character, whereas KDDI did not. So the true origin is probably just that one particular carrier happened to strike a deal to carry the iPhone first.

Interop and confusion

I’m sure the rationale for many of these changes was to reduce confusion when Android and iOS devices communicate. I’m sure plenty of people celebrated the changes on those grounds.

I was subscribed to several Android Telegram issues about emoji before the issue tracker was shut down, so I got a glimpse into how people feel about this. One person was particularly adamant that in general, the recipient should always see exactly the same image that the sender chose. Which sounds… like it’s asking for embedded images. Which Telegram supports. So maybe use those instead?

I grew up on the Internet, in a time when ^_^ looked terrible in mIRC’s default font of Fixedsys but just fine in PIRCH98. Some people used MS Comic Chat, which would try to encode actions in a way that looked like annoying noise to everyone else. Abbreviations were still a novelty, so you might not know what “ttfn” means.

Somehow, we all survived. We caught on, we asked for clarification, we learned the rules, and life went on. All human communication is ambiguous, so it baffles me when people bring up “there’s more than one emoji font” as though it spelled the end of civilization. Someone might read what you wrote and interpret it differently than you intended? Damn, that is definitely a new and serious problem that we have no idea how to handle.

It sounds to me how this would’ve sounded in 1998:

A: ^_^
B: Wow, that looks totally goofy over here. I’m using mIRC.
A: Oh, I see the problem. Every IRC client should use Arial, like PIRCH does.

That is, after all, the usual subtext: every font should just copy whatever Apple does. Let’s not.

Look, science!

Conveniently for me, someone just did a study on this. Here’s what I found most interesting:

Overall, we found that if you send an emoji across platform boundaries (e.g., an iPhone to a Nexus), the sender and the receiver will differ by about 2.04 points on average on our -5 to 5 sentiment scale. However, even within platforms, the average difference is 1.88 points.

In other words, people still interpret the same exact glyph differently — just like people sometimes interpret the same words differently.

The gap between same-glyph and different-glyph is a mere 0.16 points on a 10-point scale, or 1.6%. The paper still concludes that the designs should move closer together, and sure, they totally should — towards what the characters describe.

To underscore that idea, note the summary page discusses U+1F601 😁 GRINNING FACE WITH SMILING EYES across five different fonts. Surely this should express something positive, right? Grins are positive, smiling eyes are positive; this might be the most positive face in Unicode. Indeed, every font was measured as expressing a very positive emotion, except Apple’s, which was apparently controversial but averaged out to slightly negative. Looking at the various renderings, I can totally see how Apple’s might be construed as a grimace.

So in the name of interoperability, what should font vendors do here? Push Apple (and Twitter and Facebook, by the look of it) to change their glyph? Or should everyone else change, so we end up in a world where two-thirds of people think “grinning face with smiling eyes” is expressing negativity?

A diversion: fonts

Perhaps the real problem here is font support itself.

You can’t install fonts or change default fonts on either iOS or Android (sans root). That Telegram developer who loves Apple’s emoji should absolutely be able to switch their Android devices to use Apple’s font… but that’s impossible.

It’s doubly impossible because of a teensy technical snag. You see,

  • Apple added support for embedding PNG images in an OpenType font to OS X and iOS.

  • Google added support for embedding PNG images in an OpenType font to FreeType, the font rendering library used on Linux and Android. But they did it differently from Apple.

  • Microsoft added support for color layers in OpenType, so all of its emoji are basically several different monochrome vector images colored and stacked together. It’s actually an interesting approach — it makes the font smaller, it allows pieces to be reused between characters, and it allows the same emoji to be rendered in different palettes on different background colors almost for free.

  • Mozilla went way out into the weeds and added support for embedding SVG in OpenType. If you’re using Firefox, please enjoy these animated emoji. Those are just the letter “o” in plain text — try highlighting or copy/pasting it. The animation is part of the font. (I don’t know whether this mechanism can adapt to the current font color, but these particular soccer balls do not.)

We have four separate ways to create an emoji font, all of them incompatible, none of them standard (yet? I think?). You can’t even make one set of images and save it as four separate fonts, because they’re all designed very differently: Apple and Google only support regular PNG images, Microsoft only supports stacked layers of solid colors, and Mozilla is ridiculously flexible but still prefers vectors. Apple and Google control the mobile market, so they’re likely to win in the end, which seems a shame since their approaches are the least flexible in terms of size and color and other text properties.

I don’t think most people have noticed this, partly because even desktop operating systems don’t have an obvious way to change the emoji font (so who would think to try?), and partly because emoji mostly crop up on desktops via web sites which can quietly substitute images (like Twitter and Slack do). It’s not a situation I’d like to see become permanent, though.

Consider, if you will, that making an emoji font is really hard — there are over 1200 high-resolution images to create, if you want to match Apple’s font. If you used any web forums or IM clients ten years ago, you’re probably also aware that most smiley packs are pretty bad. If you’re stuck on a platform where the default emoji font just horrifies you (for example), surely you’d like to be able to change the font system-wide.

Disconnecting the fonts from the platforms would actually make it easier to create a new emoji font, because the ability to install more than one side-by-side means that no one font would need to cover everything. You could make a font that provides all the facial expressions, and let someone else worry about the animals. Or you could make a font that provides ZWJ sequences for every combination of an animal face and a facial expression. (Yes, please.) Or you could make a font that turns names of Pokémon into ligatures, so e-e-v-e-e displays as (eevee icon), similar to how Sans Bullshit Sans works.

But no one can do any of this, so long as there’s no single extension that works everywhere.

(Also, for some reason, I’ve yet to get Google’s font to work anywhere in Linux. I’m sure there are some fascinating technical reasons, but the upshot is that Google’s browser doesn’t support Google’s emoji font using Google’s FreeType patch that implements Google’s own font extension. It’s been like this for years, and there’s been barely any movement on it, leaving Linux as the only remotely-major platform that can’t seem to natively render color emoji glyphs — even though Android can.)

Appendix

Some miscellaneous thoughts:

  • I’m really glad that emoji have forced more developers to actually handle Unicode correctly. Having to deal with commonly-used characters outside of ASCII is a pretty big kick in the pants already, but most emoji are also in Plane 1, which means they don’t fit in a single JavaScript “character” — an issue that would otherwise be really easy to overlook. 💩 is one code point but two UTF-16 code units, so its JavaScript .length is 2 (see the sketch at the end of this list).

  • On the other hand, it’s a shame that the rise of emoji keyboards hasn’t necessarily made the rest of Unicode accessible. There are still plenty of common symbols, like ♫, that I can only type on my phone using the Japanese keyboard. I do finally have an input method on my desktop that lets me enter characters by name, which is nice. We’ve certainly improved since the olden days, when you just had to memorize that Alt0233 produced an é… or, wait, maybe English Windows users still have to do that.

  • Breadth of font support is still a problem outside of emoji, and in a plaintext environment there’s just no way to provide any fallback. Google’s Noto font family aspires to have full coverage — it’s named for “no tofu”, referring to the small boxes that often appear for undisplayable characters — but there are still quite a few gaps. Also, on Android, a character that you don’t have a font for just doesn’t appear at all, with no indication you’re missing anything. That’s one way to get no tofu, I guess.

  • Brands™ running ad campaigns revolving around emoji are probably the worst thing. Hey, if we had a standard way to make colored fonts, then Guinness could’ve just released a font with a darker 🍺 U+1F37A BEER MUG and 🍻 U+1F37B CLINKING BEER MUGS, rather than running a ridiculous ad campaign asking Unicode to add a stout emoji.

  • If you’re on a platform that doesn’t ship with an emoji font, you should really really get Symbola. It covers a vast swath of Unicode with regular old black-and-white vector glyphs, usually using the example glyphs from Unicode’s own documents.

  • The plural is “emoji”, dangit. ∎
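
A minimal sketch of the Plane 1 pitfall from the first bullet above:

    // U+1F4A9 PILE OF POO sits above U+FFFF, so UTF-16 stores it as a
    // surrogate pair, and JavaScript's .length counts both halves.
    const poo = '💩';
    poo.length;                      // 2: .length counts UTF-16 code units
    poo.charCodeAt(0).toString(16);  // 'd83d', half of a surrogate pair
    [...poo].length;                 // 1: iteration walks whole code points
    poo.codePointAt(0).toString(16); // '1f4a9'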

Chrome and Firefox Block KickassTorrents as “Phishing” Site

Post Syndicated from Ernesto original http://feedproxy.google.com/~r/Torrentfreak/~3/iFkiK1ay2Qw/

There’s a slight panic breaking out among KickassTorrents users, who are having a hard time accessing the site.

Over the past few hours Chrome and Firefox have both started to block access to Kat.cr due to reported security issues.

Instead of a page filled with the latest torrents, visitors are presented with an ominous red warning banner.

“Deceptive site ahead: Attackers on kat.cr may trick you into doing something dangerous like installing software or revealing your personal information,” the Chrome warning reads.

Chrome warns against using Kat.cr


Firefox users encounter a similar banner, branding Kat.cr as a “web forgery” which may trick users into sharing personal information. This may lead to identity theft or other fraud.

“Web forgeries are designed to trick you into revealing personal or financial information by imitating sources you may trust. Entering any information on this web page may result in identity theft or other fraud,” the browser warns.

Firefox’s alert


Interestingly, Google’s safebrowsing page for Kat.cr currently lists no issues with the site.

The KAT team informs TorrentFreak that they are looking into the issue and hope to have the blocks removed soon.

This is not the first time the two browsers have flagged KickassTorrents. Last year the site was flagged on several occasions after it was linked to a malicious advertiser.

At the time the KAT team said they addressed the issue in a matter of hours, but it nonetheless took more than two days before the site was unblocked on both Firefox and Chrome.

Impatient or adventurous users who want to bypass the warning can do so by disabling their browser’s security warnings altogether in the settings, at their own risk of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Adobe Patches Flash Player Zero-Day Threat

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/04/adobe-patches-flash-player-zero-day-threat/

Adobe Systems this week rushed out an emergency patch to plug a security hole in its widely-installed Flash Player software, warning that the vulnerability is already being exploited in active attacks.

Adobe said a “critical” bug exists in all versions of Flash up to and including 21.0.0.197, across a broad range of systems, including Windows, Mac, Linux and Chrome OS. Find out if you have Flash and if so what version by visiting this link.

In a security advisory, the software maker said it is aware of reports that the vulnerability is being actively exploited on systems running Windows 7 and Windows XP with Flash Player version 20.0.0.306 and earlier. 

Adobe said additional security protections built into Flash versions 21.0.0.182 and newer should block this flaw from being exploited. But even if you’re running one of the newer versions of Flash with the additional protections, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually restart Chrome to get it).

By the way, I’m not the only one trying to make it easier for people to put a lasso on Flash: In a blog post today, Microsoft said Microsoft Edge users on Windows 10 will auto-pause Flash content that is not central to the Web page. The new feature will be available in Windows 10 build 14316.

“Peripheral content like animations or advertisements built with Flash will be displayed in a paused state unless the user explicitly clicks to play that content,” wrote the Microsoft Edge team. “This significantly reduces power consumption and improves performance while preserving the full fidelity of the page. Flash content that is central to the page, like video and games, will not be paused. We are planning for and look forward to a future where Flash is no longer necessary as a default experience in Microsoft Edge.”

Additional reading on this vulnerability:

Kafeine‘s Malware Don’t Need Coffee Blog on active exploitation of the bug.

Trend Micro’s take on evidence that thieves have been using this flaw in automated attacks since at least March 31, 2016.

WebTorrent Desktop Streams Torrents Beautifully

Post Syndicated from Andy original http://feedproxy.google.com/~r/Torrentfreak/~3/j6CRSSYju38/

Every day millions of Internet users fire up a desktop-based BitTorrent client to download and share everything from movies, TV shows and music, to the latest Linux distros.

Sharing of multimedia content is mostly achieved by use of a desktop client such as uTorrent, Vuze, qBitTorrent or Transmission, but thanks to Stanford University graduate Feross Aboukhadijeh, there is another way.

WebTorrent is a BitTorrent client for the web. Instead of using standalone applications like those listed above, it allows people to share files directly from their browser, without having to install any additional software.

“WebTorrent is the first torrent client built for the web. It’s written completely in JavaScript – the language of the web – and uses WebRTC for true peer-to-peer transport. No browser plugin, extension, or installation is required,” Feross previously told TF.
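
For the curious, browser-side usage looks roughly like this; the magnet link is a placeholder, and this is only a sketch of the documented API, not code from the project:

    // WebTorrent in the browser: add a torrent and stream a video file.
    const client = new WebTorrent();
    client.add('magnet:?xt=urn:btih:...', torrent => {
      // Pick the first .mp4 in the torrent and stream it into the page
      const file = torrent.files.find(f => f.name.endsWith('.mp4'));
      file.appendTo('body'); // creates a <video> element, fed over WebRTC
    });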

Such has been WebTorrent’s impact, even Netflix contacted Feross to discuss his technology, which could greatly benefit the streaming video service by reducing its bandwidth consumption. But while WebTorrent “for the web” continues its development, Feross has just unveiled his latest creation.

At first glance WebTorrent Desktop (WDT) seems like a step back, in that it appears to move WebTorrent away from webpages and places it back in the desktop environment like a regular torrent client. However, what WDT does is provide a super smooth video streaming experience with a few really neat tricks up its sleeve.

Firstly, WDT looks nothing like any other torrent client. Its clean and straightforward interface is easily navigated and offers little in the way of configuration options or indeed clutter, which in this case is a good thing. The aim is to get the in-built player streaming video as quickly as possible and it achieves that with style.


Torrents are added by using a .torrent file or by copy/pasting a magnet link and in a matter of seconds on a reasonably well-seeded torrent, videos can be played almost immediately. The in-built player is a basic affair but has all the necessary controls to navigate within a video. This is where WebTorrent Desktop excels.

Even without the whole video file being downloaded it’s possible to skip around on the timeline, with WDT fetching the appropriate pieces of the file on demand for almost instant playback. This makes skipping to the last few minutes of a movie or sporting event a breeze, for example.


Also of interest to those who enjoy watching from the comfort of their armchair is the inclusion of AirPlay, Chromecast and DLNA support. WebTorrent Desktop found my network-connected TV as soon as it was switched on and playback was instantaneous.

But while streaming in various torrent clients has been possible for some time, WebTorrent Desktop has another neat feature up its sleeve. In addition to gathering peers via trackers, DHT and PEX, it also supports the WebTorrent protocol for connecting to WebRTC peers.

This enables the client to tap into a pool of additional peers such as those running WebTorrent in their browsers on Instant.io. As the diagram below shows, WebTorrent Desktop acts as a bridge to bring these two sets of peers together.

(Diagram: WebTorrent Desktop bridging WebRTC peers and traditional BitTorrent peers)

Available for Windows, Mac and Linux, WebTorrent Desktop is both free and open source, with its source available on Github. It also has zero advertising and none of the bloatware associated with other clients available today.

The WebTorrent Desktop Beta can be downloaded here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Drugstores in Bulgaria

Post Syndicated from Боян Юруков original http://yurukov.net/blog/2016/apteki-bg/

Thanks to Rumyana Bachvarova’s team, Bulgaria is among the countries with an official open data portal. It already hosts many datasets that had until now been gathering dust on the computers of clerks and agencies. There is a plan to open up many more, but a lot of them still remain inaccessible.

Besides writing quite a bit about open data, in recent years I have opened up a number of publicly accessible datasets. My goal was to illustrate the value of open data through visualizations. Some of them, like industrial pollution and the casualties of the Balkan Wars, were relatively easy. Others, like the acts of the judicial system, far less so. Sometimes, though, a register can be opened up with a dozen lines of code and a few public tools.

(Screenshot: the drugstore register)

There is such a thing as a register of drugstores in Bulgaria. Above you can see what it looks like. Every entry has the address of the owner’s seat and of the premises, the owner, and a number. From the registration number you can estimate when the entry was registered. There is no way to download the list, and the register has not yet been uploaded to the open data portal.

As far as I know, it is slated to be opened this year, if the Ministry of Health cooperates. I decided not to wait for that one, though. At the end of the article you will find the steps and the data. Here are three quick maps.

Where are the drugstores in the country?

This map shows registrations over time. The first date from August 2000, the most recent from just days ago. The map does not account for closed drugstores, because the register only contains ones currently in operation.

The next two maps show the concentration of drugstores in Sofia and Plovdiv. You can use the search box at the top right to move the map quickly to other cities.

In Trakia, Plovdiv, there are surprisingly few drugstores. I don’t know whether that is due to an error in my geocoding or in the addresses. Do you know of one you can’t see here? It may turn out it isn’t registered.

Opening up the data

I used a variation of the JavaScript code with which I downloaded the register of pet dogs in Plovdiv. You can grab it from the Gist; it runs straight in Chrome’s console once you open the page. It spits out a TSV file, which I imported into Google Calc. With a few formulas I extracted the date from the registration number, and with the Geocode add-on I got the coordinates. There were 5-6 I had to do by hand, but most of the 900+ records were handled automatically.
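
For illustration, a minimal sketch of that console-scraping approach; the selector and column layout here are hypothetical, while the real Gist targets the register’s actual markup:

    // Run in Chrome's console on the register page.
    const rows = document.querySelectorAll('#register table tr');
    const tsv = [...rows].map(tr =>
      [...tr.querySelectorAll('td')]
        .map(td => td.textContent.trim())
        .join('\t')
    ).join('\n');
    console.log(tsv); // copy the output and paste it into Google Calc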

Once I had the dates and the coordinates, I uploaded the dataset to CartoDB and, with a bit of tinkering, made the two types of maps. Of course, I’m not sure all the coordinates are accurate; addresses in Bulgaria are extremely hard to place on a map. A few agencies and private companies have nearly all addresses in their databases, but those are not publicly accessible. In fact, this is one of the key datasets the state should release for good analysis of a large share of the information to be possible.

An important clarification

A good example of why data should be open and why these maps are useful is that people can easily spot mistakes in your analysis. Shortly after this article went out, people on Facebook pointed out that the data I had downloaded covers drugstores, not pharmacies, which is a different thing. I have since downloaded the pharmacy data and it will be added to the map tonight. I will update the information then.

Ideas for analysis

You can download the data freely and use it. I have included the coordinates. Here are a few ideas for reports you could put together:

  • Which villages in the country have no registered drugstores?
  • A map of the country’s population and how far people have to travel to reach a drugstore
  • Concentration of drugstores per settlement relative to its population
  • Where are there more drugstore chains, and where more standalone shops?

If anyone makes one of these, I’d be glad if you shared it in the comments.

Some other comments on the ISIS dead-drop system

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/03/some-other-comments-on-isis-dead-drop.html

So, by the time I finished this, this New York Times article has more details. Apparently, it really is just TrueCrypt. What’s still missing is how the messages are created. Presumably, it’s just Notepad. Also missing is the protocol used. Is it HTTP/FTP file upload? Do they log on via SMB? Or is it a service like DropBox? Anyway, I think the way of sending messages that I describe below is better.

Old post:

CNN is reporting on how the Euro-ISIS terrorists are using encryption. The details are garbled, because neither the terrorists, the police, nor the reporters understand what’s going on. @thegrugq tries to untangle this nonsense in his post, but I have a different theory. It’s pure guesswork, trying to create something that is plausibly useful and somehow fits the garbled story.

I assume what’s really going on is this.

The terrorist is given a USB drive with the TrueCrypt software and an encrypted partition/file. The first thing the terrorist does is put the USB drive into a computer, run the TrueCrypt program, then mount the file/partition, entering a password. In other words, all you see on the USB drive is the directory “TrueCrypt” and a large file that is the encrypted “container”, as you see in the picture of the “F:” drive.

Once the terrorist mounts the container, she then opens that new folder. It’ll contain a copy of a PGP program, like gpg4win. The terrorist runs that GUI. You see that on the right with the “G:” drive, with a ‘portable’ version of GPG installed in the encrypted container.

The terrorist types in the message, then encrypts it, choosing one of the many public keys that have been stored inside this encrypted container (G:) within the USB flash drive (F:).

Then the terrorist runs a ‘portable’ web browser from the G: drive. These are browsers based on Chrome or Firefox that run completely self-contained from a directory, leaving behind no other trace on the system. In this example, I’m using “Iron Portable”, which is based on Chrome. All the settings, like which website to log into, and possibly saved passwords, are stored in this directory. Likewise, any logs will be stored here.

The terrorist then logs onto a forum, such as a typical phpBB one using SSL, creates a new message, and copies/pastes the encrypted text from the clipboard. In this example, I’m showing the “Gentoo” forums, which are well known to be visited by various ne’er-do-wells and sympathizers.

This system works because it’s completely contained on the USB drive. The terrorist can walk up to any Windows PC at a cyber-cafe and make this work. All the evidence is on the USB drive, so there’s nothing left on the Windows computer that law enforcement can track down. Likewise, the forum is likely to be something the NSA is less likely to be monitoring; but even if they are, they’ll get some metadata and still won’t be able to break the PGP encoding.

This is all guesswork. I built this USB drive in the last hour and installed all the portable versions of the software (TrueCrypt, gpg4win, and Iron Portable) on it to create these screenshots. It’s a plausibly useful way of doing things, such that stupid terrorists can’t mess things up (leave unencrypted messages or metadata around). And it matches (kinda) the garbled news account.

The moral of the story is that news stories ought to talk to experts. From accounts this inaccurate, we can’t figure out what’s actually going on; we can only make guesses like I have here.