Tag Archives: chrome

Malicious Torrent Network Tool Revealed By Security Company

Post Syndicated from Andy original https://torrentfreak.com/malicious-torrent-network-tool-revealed-by-security-company-160921/

Nearly 35 years after 15-year-old high school student Rich Skrenta created the first publicly spread virus, millions of pieces of malware are being spread around the world.

Attackers’ motives are varied but these days they’re often working for financial gain. As a result, popular websites and their users are regularly targeted. Security company InfoArmor has just published a report detailing a particularly interesting threat which homes in on torrent site users.

“InfoArmor has identified a special tool used by cybercriminals to distribute malware by packaging it with the most popular torrent files on the Internet,” the company reports.

InfoArmor says the so-called “RAUM” tool is being offered via “underground affiliate networks” with attackers being financially incentivized to spread the malicious software through infected torrent files.

“Members of these networks are invited by special invitation only, with strict verification of each new member,” the company reports.

InfoArmor says that the attackers’ infrastructure has a monitoring system in place which allows them to track the latest trends in downloading, presumably so that attacks can reach the greatest numbers of victims.

“The bad actors have analyzed trends on video, audio, software and other digital content downloads from around the globe and have created seeds on famous torrent trackers using weaponized torrents packaged with malicious code,” they explain.

RAUM instances were associated with a range of malware including CryptXXX, CTB-Locker and Cerber, online-banking Trojan Dridex and password stealing spyware Pony.

“We have identified in excess of 1,639,000 records collected in the past few months from the infected victims with various credentials to online-services, gaming, social media, corporate resources and exfiltrated data from the uncovered network,” InfoArmor reveals.

What is perhaps most interesting about InfoArmor’s research is how it shines light on the operation of RAUM behind the scenes. The company has published a screenshot which it claims shows the system’s dashboard, featuring infected torrents on several sites, including a ‘fake’ Pirate Bay site in particular.


“Threat actors were systematically monitoring the status of the created malicious seeds on famous torrent trackers such as The Pirate Bay, ExtraTorrent and many others,” the researchers write.

“In some cases, they were specifically looking for compromised accounts of other users on these online communities that were extracted from botnet logs in order to use them for new seeds on behalf of the affected victims without their knowledge, thus increasing the reputation of the uploaded files.”


According to InfoArmor the malware was initially spread using uTorrent, although any client could have done the job. More recently, however, new seeds have been served through online servers and some hacked devices.

In some cases the malicious files continued to be seeded for more than 1.5 months. Tests by TF on the sample provided showed that most of the files listed have now been removed by the sites in question.

Completely unsurprisingly, people who use torrent sites to obtain software and games (as opposed to video and music files) are those most likely to come into contact with RAUM and associated malware. As the image below shows, Windows 7 and 10 packs and their activators feature prominently.


“All of the created malicious seeds were monitored by cybercriminals in order to prevent early detection by [anti-virus software] and had different statuses such as ‘closed,’ ‘alive,’ and ‘detected by antivirus.’ Some of the identified elements of their infrastructure were hosted in the TOR network,” InfoArmor explains.

The researchers say that RAUM is a tool used by an Eastern European organized crime group known as Black Team. They also report several URLs and IP addresses from where the team operates. We won’t publish them here but it’s of some comfort to know that between Chrome, Firefox and Malwarebytes protection, all were successfully blocked on our test machine.

InfoArmor concludes by warning users to exercise extreme caution when downloading pirated digital content. We’d go a step further and advise people to be wary of installing all software from any untrusted sources, no matter where they’re found online.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Chrome and Firefox Block Pirate Bay Over “Harmful Programs”

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-block-pirate-bay-over-harmful-programs-160915/

Starting a few hours ago, Chrome and Firefox users have been unable to access The Pirate Bay’s torrent download pages without running into a roadblock.

Instead of a page filled with the latest torrents, visitors now see an ominous red warning banner when they try to grab a torrent.

“The site ahead contains harmful programs,” Google Chrome informs its users.

“Attackers on thepiratebay.org might attempt to trick you into installing programs that harm your browsing experience (for example, by changing your homepage or showing extra ads on sites you visit),” the warning adds.

Mozilla’s Firefox browser displays a similar message.

While Pirate Bay’s homepage and search are still freely available, torrent detail pages now show the following banner.

Chrome’s Pirate Bay block


Both Chrome and Firefox rely on Google’s Safe Browsing report which currently lists TPB as a partially dangerous site.

In addition to the two browsers, people who use Comodo’s Secure DNS also experienced problems reaching the site.

Comodo’s Secure DNS has a built-in malware domain filtering feature and earlier today it flagged The Pirate Bay as a “hacking” site, as the banner below shows. Shortly before publishing, the warning disappeared.

Pirate Bay hacking?


Comodo DNS still blocks access to ExtraTorrent, the second largest torrent site trailing just behind The Pirate Bay.

The secure DNS provider accuses ExtraTorrent of spreading “malicious” content. Interestingly, Google’s Safe Browsing doesn’t report any issues with ExtraTorrent’s domain name, so another source may play a role here.

This isn’t the first time that Comodo has blocked torrent sites and usually the warnings disappear again after a few hours or days. Until then, users can add the domains to a whitelist to regain access. Of course, they should do so at their own risk.

Chrome and Firefox users should be familiar with these intermittent warning notices as well, and can take steps to bypass the blocks if they are in a gutsy mood.


Adobe, Microsoft Push Critical Updates

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/09/adobe-microsoft-push-critical-updates-3/

Adobe and Microsoft on Tuesday each issued updates to fix multiple critical security vulnerabilities in their software. Adobe pushed a patch that addresses 29 security holes in its widely-used Flash Player browser plug-in. Microsoft released some 14 patch bundles to correct at least 50 flaws in Windows and associated software, including a zero-day bug in Internet Explorer.

Half of the updates Microsoft released Tuesday earned the company’s most dire “critical” rating, meaning they could be exploited by malware or miscreants to install malicious software with no help from the user, save for maybe just visiting a hacked or booby-trapped Web site. Security firms Qualys and Shavlik have more granular writeups on the Microsoft patches.

Adobe’s advisory for this Flash Update is here. It brings Flash to v. for Windows and Mac users. If you have Flash installed, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox or Opera, for example).

Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually check for updates in Chrome and restart the browser to get the latest Flash version).

As always, if you run into any issues installing any of these updates, please feel free to leave a comment about your experience below.

Stream Ripping Problem Worse Than Pirate Sites, IFPI Says

Post Syndicated from Andy original https://torrentfreak.com/stream-ripping-problem-worse-than-pirate-sites-ifpi-says-160913/

One of the recurring themes of recent years has been entertainment industry criticism of Google, alongside claims that the search giant doesn’t do enough to tackle piracy.

In more recent months, the focus has fallen on YouTube in particular, with the music industry painting the video hosting site as a haven for unlicensed tracks. This, the labels say, allows YouTube to undermine competitors and run a ‘DMCA protection racket’.

While complaints surrounding the so-called “value gap” continue, the labels are now revisiting another problem that has existed for years.

For the unfamiliar, stream ripping is a mechanism for obtaining music from an online source and storing it on a local storage device in MP3 or similar format. Ripping can be achieved by using dedicated software or via various sites designed for the purpose.

With the largest library online, YouTube is the most popular destination for ‘rippers’. Broadly speaking, the site carries two kinds of music – that for which the site has a license and that uploaded without permission by its users. The labels consider the latter as straightforward piracy but the former is also problematic in a stream-ripping environment. Once a track is downloaded by a user from YouTube, labels aren’t getting paid per play anymore.

According to IFPI, the stream-ripping problem has become huge. A new study by Ipsos commissioned by IFPI has found that 49% of Internet users aged 16 to 24 admitted to stream ripping in the six months ending April. That’s a 41% increase over the same period a year earlier.

When considering all age groups the situation eases somewhat, but not by enough to calm IFPI’s nerves. Ipsos found that 30% of all Internet users had engaged in stream ripping this year, up 10% on a year earlier.

In fact, according to comments made to FT (subscription) by IFPI, the problem has become so large that it is now the most popular form of online piracy, surpassing downloading from all of the world’s ‘pirate’ sites.

Precisely why there has been such a large increase isn’t clear, but it’s likely that the simplicity of sites such as YouTube-MP3 has played a big role. The site is huge by any measurement and has been extremely popular for many years. However, this year has seen a dramatic increase in visits, as shown below.


Equally, with pirate site blockades springing up all over the world, users in affected regions will find YouTube and ripping sites much easier to access. Also, rippers tend to work well on mobile phones, giving young people the portability they desire for their music.

But while YouTube and Google will now find themselves under yet more pressure, the company hasn’t been silent on the issue of stream-ripping. On several occasions, YouTube lawyers have made legal threats against such sites, including YouTube-MP3 in 2012 and more recently against TubeNinja.

“We strive to keep YouTube a safe, responsible community, and to encourage respect for the rights of millions of YouTube creators,” an email from YouTube’s legal team to TubeNinja read.

“This requires compliance with the Terms of Service and API Terms of Service. We hope that you will cooperate with us by ceasing to offer TubeNinja with functionality that is designed to allow users to download content from YouTube within seven days of this letter.”

While it is indeed the biggest platform, the problem isn’t only limited to YouTube. Stream rippers are available for most streaming sites including Vimeo, Soundcloud, Bandcamp, Mixcloud, and many dozens of others, with Google itself providing convenient addons for its Chrome browser.

With the major labels now describing stream-ripping as the biggest piracy threat, expect to hear much more on this topic as the year unfolds.


WebTorrent: 250K Downloads & Strong With Zero Revenue

Post Syndicated from Andy original https://torrentfreak.com/webtorrent-250k-downloads-strong-with-zero-revenue-160827/

Stanford University graduate Feross Aboukhadijeh is passionate about P2P technology. The founder of P2P-assisted content delivery network PeerCDN (sold to Yahoo in 2013), Feross is also the inventor of WebTorrent.

In its classic form, WebTorrent is a BitTorrent client for the web. No external clients are needed for people to share files since everything is done in the user’s web browser with JavaScript. No browser plugins or extensions need to be installed, and nothing needs to be configured.

In the beginning, some doubted that it could ever work, but Feross never gave up on his dream.

“People thought WebTorrent was crazy. One of the Firefox developers literally said it wouldn’t be possible. I was like, ‘challenge accepted’,” Feross told TF this week.



A few months after WebTorrent’s debut, Feross announced the arrival of WebTorrent Desktop (WD), a standalone torrent client with a few tricks up its sleeve.

After posting a torrent or magnet link into its somewhat unusual client interface, content can be played almost immediately via an inbuilt player. And with AirPlay, Chromecast and DLNA support, WD is at home at the heart of any multi-display household.


But WebTorrent Desktop’s most interesting feature is its ability to find peers not only via trackers, DHT and PEX, but also using the WebTorrent protocol. This means that WD can share content with people using the web-based version of WebTorrent too.

WebTorrent Desktop

Since our April report, WebTorrent has been under constant development. It is now more responsive and uses fewer resources, casting has been improved, and subtitles are auto-detected, to name just a few improvements. As a result, the client has been growing its userbase too.

“The WebTorrent project is going full steam ahead and there has been lots of progress in the past few months,” Feross informs TF.

“We just passed a quarter million total downloads of the app – 254,431 downloads as of right now.”

For a young and totally non-commercial project, that’s an impressive number, but the accolades don’t stop there. The project currently has more than 2,000 stars on GitHub (2,083 at the time of writing) and it recently added its 26th contributor.

In all, WebTorrent has nine people working on the core team, but since the client is open source and totally non-commercial, no one is earning anything from the project. According to Feross, this only makes WebTorrent stronger.

“People usually think that having revenue, investors, and employees gives you an advantage over your competition. That’s definitely true for certain things: you can hire designers, programmers, marketing experts, product managers, etc. to build out the product, add lots of features,” the developer says.

“But you have to pay your employees and investors, and these pressures usually cause companies to resort to adding advertising (or worse) to their products. When you have no desire to make a profit, you can act purely in the interests of the people using your product. In short, you can build a better product.”

So if not money, what drives people like Feross and his team to give up their time to create something and give it away?

“The real reason I care so much about WebTorrent is that I want decentralized apps to win. Right now, it’s so much easier to build a centralized app: it’s faster to build, uses tried-and-true technology, and it’s easier to monetize because the app creator has all the control. They can use that control to show you ads, sell your data, or make unilateral product changes for their own benefit,” he says.

“On the other hand, decentralized apps are censorship resistant, put users in control of their data, and are safe against user-hostile changes.

“That last point is really important. It’s because of the foresight of Bram Cohen that WebTorrent is even possible today: the BitTorrent protocol is an open standard. If you don’t like your current torrent app, you can easily switch! No one person or company has total control.”

WebTorrent Desktop developer DC Posch says that several things motivate him to work on the project, particularly when there’s no one to order him around.

“There’s satisfaction in craftsmanship, shipping something that feels really solid. Second, it’s awesome having 250,000 users and no boss,” he says.

“Third, it’s something that I want to exist. There are places like the Internet Archive that have lots of great material and no money for bandwidth. BitTorrent is a technologically elegant way to do zero cost distribution. Finally, I want to prove that non-commercial can be a competitive advantage. Freed from the need to monetize or produce a return, you can produce a superior product.”

To close, last year TF reported that WebTorrent had caught the eye of Netflix. Feross says that was a great moment for the project.

“It was pretty cool to show off WebTorrent at Netflix HQ. They were really interested in the possibility of WebTorrent to help during peak hours when everyone is watching Netflix and the uplink to ISPs like Comcast gets completely saturated. WebTorrent could help by letting Comcast subscribers share data amongst themselves without needing to traverse the congested Comcast-Netflix internet exchange,” he explains.

For now, WebTorrent is still a relative minnow when compared to giants such as uTorrent but there are an awful lot of people out there who share the ethos of Feross and his team. Only time will tell whether this non-commercial project will fulfill its dreams, but those involved will certainly have fun trying.


Research on the Timing of Security Warnings

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/research_on_the_2.html

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.

A new study from BYU, in collaboration with Google Chrome engineers, finds that the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these times are less effective because of “dual task interference,” a neural limitation where even simple tasks can’t be simultaneously performed without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”


For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Research paper. News article.

MQTT Toolbox – MQTTBox

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/mqtt-toolbox-mqttbox/

Guest blog post by: Sathya Vikram

Short Profile

Type: Web app and Chrome packaged app (Chrome Web Store)
License: Free
Operating Systems: Linux, Windows, Mac OS, Chrome OS
Website: Chrome Web Store or Official Website


MQTTBox is a helper program for developing and load testing MQTT clients, brokers, devices, cloud services and apps. Every aspect of MQTTBox is specifically designed to maximize development and testing productivity. With MQTT clients and load testing tools integrated, it’s powerful enough to supercharge your MQTT workflow.

MQTTBox is available as a Chrome app and a web app, and the code is open on GitHub for your custom changes.

Feature Overview

  • MQTT 3.1: yes
  • MQTT 3.1.1: yes
  • LWT: yes
  • Automatic Reconnect: yes
  • MQTT over TCP: yes
  • QoS 0: yes
  • QoS 1: yes
  • QoS 2: yes
  • Authentication: yes
  • Scripting: no
  • MQTT over Websockets: yes

Usage / Features

MQTT Clients

  • Add multiple MQTT clients with TCP and Websockets connection support
  • Connect with a wide range of MQTT client connection settings
  • Publish/subscribe to multiple topics
  • Supports single-level (+) and multi-level (#) topic subscriptions
  • Copy/republish payloads
  • History of published/subscribed messages for each topic
  • Auto-connect clients to brokers
  • Supports QoS 0, 1, 2
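As a side note, the single-level (+) and multi-level (#) subscriptions listed above follow the MQTT specification’s topic-matching rules. A minimal Python sketch of that matching logic (an illustration of the rules only, not MQTTBox’s actual code):

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Check whether an MQTT topic matches a subscription filter.

    '+' matches exactly one topic level; '#' must be the last level
    of the filter and matches all remaining levels, including the
    parent level itself (so "sport/#" also matches "sport").
    """
    filter_levels = topic_filter.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(filter_levels):
        if level == "#":                       # multi-level wildcard
            return True
        if i >= len(topic_levels):             # filter deeper than topic
            return False
        if level != "+" and level != topic_levels[i]:
            return False
    return len(filter_levels) == len(topic_levels)

# Single-level wildcard matches exactly one level
print(topic_matches("sensors/+/temperature", "sensors/kitchen/temperature"))  # True
# Multi-level wildcard matches the parent and everything below it
print(topic_matches("sensors/#", "sensors"))                                  # True
print(topic_matches("sensors/+", "sensors/kitchen/temperature"))              # False
```

A broker or test tool applies this check once per subscription to decide which clients receive a published message.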


MQTT Load Testing

  • Load test MQTT based clients, brokers, devices, cloud services and apps
  • Supports starting multiple instances to increase load
  • Load test by publishing messages with QoS 0,1,2
  • Load test by publishing different payload in same test case
  • Load test by subscribing to messages with QoS 0,1,2
  • View live status of load testing on dashboard
  • View load test results for all/each instances on graphs plotted vs time
  • View load data for all/each instances in tables with smart search
  • Track time to millisecond units
  • Load test result history


The screenshot below shows navigation between MQTT clients and MQTT load test cases. You can view broker connection status right from the navigation menu.

MQTT Client

You can add multiple MQTT clients each with multiple publishers and subscribers.

MQTT Load Dashboard

View live data on the MQTT Load dashboard while a test is in progress. You can start, stop or restart a load test, along with viewing graphs and data for each or all instances.

MQTT Load Graph

MQTT load results can be viewed in graph format, plotted as number of messages vs. time. Change the sampling rate of the graphs to see in-depth details. Drag a rectangle around an area of interest to zoom in. View graph results for each instance or for all instances together.

MQTT Load Data

View load test data published/subscribed by each instance or by multiple instances, and sort or search the data in the table.

Author Information

Sathya Vikram | workswithweb.com

CSS mix-blend-mode is bad for your browsing history

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2016/08/css-mix-blend-mode-is-bad-for-keeping.html

Up until mid-2010, any rogue website could get a good sense of your browsing habits by specifying a distinctive :visited CSS pseudo-class for any links on the page, rendering thousands of interesting URLs off-screen, and then calling the getComputedStyle API to figure out which pages appear in your browser’s history.

After some deliberation, browser vendors closed this loophole by disallowing almost all attributes in :visited selectors, save for the fairly indispensable ability to alter foreground and background colors for such links. The APIs have also been redesigned to prevent the disclosure of this color information via getComputedStyle.

This workaround did not fully eliminate the ability to probe your browsing history, but limited it to scenarios where the user can be tricked into unwittingly feeding the style information back to the website one URL at a time. Several fairly convincing attacks have been demonstrated against patched browsers – my own 2013 entry can be found here – but they generally depended on the ability to solicit one click or one keypress per every URL tested. In other words, the whole thing did not scale particularly well.

Or at least, it wasn’t supposed to. In 2014, I described a neat trick that exploited normally imperceptible color quantization errors within the browser, amplified by stacking elements hundreds of times, to implement an n-to-2n decoder circuit using just the background-color and opacity properties on overlaid <a href=…> elements to easily probe the browsing history of multiple URLs with a single click. To explain the basic principle, imagine wanting to test two links, and dividing the screen into four regions, like so:

  • Region #1 is lit only when both links are not visited (¬ link_a ∧ ¬ link_b),
  • Region #2 is lit only when link A is not visited but link B is visited (¬ link_a ∧ link_b),
  • Region #3 is lit only when link A is visited but link B is not (link_a ∧ ¬ link_b),
  • Region #4 is lit only when both links are visited (link_a ∧ link_b).

While the page couldn’t directly query the visibility of the segments, we just had to convince the user to click the visible segment once to get the browsing history for both links, for example under the guise of dismissing a pop-up ad. (Of course, the attack could be scaled to far more than just 2 URLs.)
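The four regions implement a 2-to-2² decoder: exactly one region is lit for each combination of visited states, so a single click reveals both bits. A small Python sketch of that selection logic (an illustration of the principle only; the actual attack built the decoder out of stacked CSS elements, not script):

```python
def lit_region(link_a_visited: bool, link_b_visited: bool) -> int:
    """Return which of the four regions (1-4) is lit.

    Exactly one region is visible for each combination of visited
    states, matching the truth table in the text above.
    """
    return 2 * int(link_a_visited) + int(link_b_visited) + 1

def decode_click(region: int) -> tuple:
    """Invert the decoder: recover both visited bits from one click."""
    bits = region - 1
    return (bool(bits & 2), bool(bits & 1))

# A click landing on region 3 reveals: link A visited, link B not
print(decode_click(3))  # (True, False)
```

With n links the same construction yields 2ⁿ regions, which is why a single click can probe many URLs at once.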

This problem was eventually addressed by browser vendors by simply improving the accuracy of color quantization when overlaying HTML elements; while this did not eliminate the risk, it made the attack far more computationally intensive, requiring the evil page to stack millions of elements to get practical results. Game over? Well, not entirely. In the footnote of my 2014 article, I mentioned this:

“There is an upcoming CSS feature called mix-blend-mode, which permits non-linear mixing with operators such as multiply, lighten, darken, and a couple more. These operators make Boolean algebra much simpler and if they ship in their current shape, they will remove the need for all the fun with quantization errors, successive overlays, and such. That said, mix-blend-mode is not available in any browser today.”

As you might have guessed, patience is a virtue! As of mid-2016, mix-blend-mode – a feature to allow advanced compositing of bitmaps, very similar to the layer blending modes available in photo-editing tools such as Photoshop and GIMP – is shipping in Chrome and Firefox. And as it happens, in addition to their intended purpose, these non-linear blending operators permit us to implement arbitrary Boolean algebra. For example, to implement AND, all we need to do is use multiply:

  • black (0) x black (0) = black (0)
  • black (0) x white (1) = black (0)
  • white (1) x black (0) = black (0)
  • white (1) x white (1) = white (1)
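Treating black as 0 and white as 1, these blend operators really are Boolean gates, which is what makes arbitrary logic possible. A numeric sketch of the correspondence (my own illustration of the principle, not code from the demo; screen and difference are likewise standard mix-blend-mode values):

```python
# Pixel values: 0 = black, 1 = white
def multiply(a, b):    # mix-blend-mode: multiply   -> Boolean AND
    return a * b

def screen(a, b):      # mix-blend-mode: screen     -> Boolean OR
    return 1 - (1 - a) * (1 - b)

def difference(a, b):  # mix-blend-mode: difference -> Boolean XOR
    return abs(a - b)

# Verify each operator against its Boolean counterpart
for a in (0, 1):
    for b in (0, 1):
        assert multiply(a, b) == (a and b)
        assert screen(a, b) == (a or b)
        assert difference(a, b) == (a ^ b)
print("all gates check out")
```

Since AND, OR and XOR (with constant white, XOR also gives NOT) form a complete basis, any Boolean function of the visited bits can be composed from overlaid elements.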

For a practical demo, click here. A single click in that whack-a-mole game will reveal the state of 9 visited links to the JavaScript executing on the page. If this was an actual game and if it continued for a bit longer, probing the state of hundreds or thousands of URLs would not be particularly hard to pull off.

QA Mastermind Needed!

Post Syndicated from Yev original https://www.backblaze.com/blog/qa-mastermind-needed/


Backblaze’s expansion continues and we’re in need of a QA mastermind to help us make sure our GUIs are prim and proper! If you want to join a fast-paced team and help us bring cloud storage to the world, read on and join up!

Here’s what you’ll be responsible for:

  • Identify and record defects, document reproducible steps thoroughly, and track defects to resolution.
  • Perform thorough regression testing when defects are resolved.
  • Interact with developers to resolve defects, clarify functionality and resolve usability issues.
  • Ability to write scripts and design test scenarios.
  • Contribute to continual QA process improvement efforts.
  • Ability to take initiative like suggesting and completing test plans without being told to do so.
  • Will be testing on Mac and Windows desktops running multiple browsers like Chrome, Edge, Safari, Opera.
  • Will test some native applications on Mac, Windows, Android, and iOS (iPhone).
  • Experience with performance and security testing is a plus.

Please be proficient in:

  • Enough familiarity with HTML, JSP, and the Java programming language to read engineers’ source code submissions and judge which areas of the product are changing and need additional testing.
  • Cross platform (Linux/Macintosh/Windows) — don’t need to be an expert on all three, but cannot be afraid of any.
  • Familiarity with test automation software (such as Selenium or LeanFT) is a plus.

Required for all Backblaze Employees:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small fast paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Occasional visits to Backblaze datacenters are necessary.
  • Rigorous adherence to best practices.
  • Relentless attention to detail.
  • Excellent interpersonal skills and good oral/written communication.
  • Excellent troubleshooting and problem solving skills.
  • OK with pets in office.

This position is located in San Mateo, California. Regular attendance in the office is expected. Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If this sounds like you — follow these steps:

  • Send an email to jobscontact@backblaze.com with the position in the subject line.
  • Include your resume.
  • Tell us a bit about your QA experience.

The post QA Mastermind Needed! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Recipe for Cookies

Post Syndicated from Илия Горанов original http://9ini.babailiica.com/cookies/

Не става дума за захарни изделия, а за информационни технологии. Терминът е заимстван с директен превод от английския език, където се използва думата cookies (въпреки, че буквалният превод е курабийки, а не бисквитки).

И все пак, каква е целта на бисквитките? Бисквитките представляват порции структурирана информация, която съдържа различни параметри. Те се създават от сървърите, които предоставят достъп до уеб страници. Предава се чрез служебните части на HTTP протокола (т. нар. HTTP headers) – това е трансферен протокол, който се използва от браузърите за обмен на информация със сървърите. Бисквитките са добавени в спецификацията на HTTP протокола във версия 1.0 в началото на 90те години. По това време Интернет не беше толкова развит, колкото е в момента и затова HTTP протоколът има някои специфични особености. Протоколът е базиран на заявки (от клиента) и отговори (от сървъра), като всяка двойка заявка и отговор се правят в отделна връзка (socket) към между клиента и сървъра. Тази схема на работа е изключително удобна, тъй като не изисква постоянна и стабилна връзка с Интернет, тъй като всъщност връзката се използва само за кратък момент. За съжаление, заради тази особеност често HTTP протоколът е наричан state less протокол (протокол без запазване на състоянието). А именно – сървърът няма как да знае, че редица от последователни заявки са изпратени от един и същи клиент. За разлика от IRC, SMTP, FTP и други протоколи създадени през същите години, които отварят една връзка и предават и приемат данни двупосочно. При такива протоколи комуникацията започва с hand shake или аутентикация между участниците в комуникацията, след което и за двете страни е ясно, че докато връзката остава отворена, комуникацията тече с конкретен участник.

To overcome this shortcoming of the protocol, after version 0.9 (the first version to enter real-world use), version 1.0 introduced the cookie mechanism. The technology was named cookies after the Brothers Grimm fairy tale of Hansel and Gretel, who mark the path they take through the dark forest by scattering crumbs. Most Bulgarian translations of the tale use bread crumbs, a term that has found another home in IT and the web, but the story is in fact a German folk tale with many different versions over the years. The comparison is obvious: cookies make it possible to trace the user's path through the dark forest of the stateless HTTP protocol.

How do cookies work? Cookies are created by servers and sent to users. With every subsequent request to the same server, the user must send back a copy of the cookies received from it. This gives the server a mechanism for following the user along the way, i.e. for knowing when one and the same user has made a series of multiple, otherwise unrelated, requests.
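
The round trip described above can be sketched with Python's standard http.cookies module; the cookie name and value here are purely illustrative:

```python
from http.cookies import SimpleCookie

# 1. The server attaches a Set-Cookie header to its response.
server_cookie = SimpleCookie()
server_cookie["session_id"] = "abc123"
set_cookie_header = server_cookie["session_id"].OutputString()

# 2. The client stores the cookie and, on every subsequent request
#    to the same server, sends a copy back as a Cookie header.
client_jar = SimpleCookie()
client_jar.load(set_cookie_header)
cookie_header = "; ".join(f"{name}={m.value}" for name, m in client_jar.items())
print(cookie_header)  # session_id=abc123
```

The server can now recognize the client across otherwise unrelated requests by reading this header back.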

Imagine that you send a first request to the server and it replies that you must authenticate, i.e. provide a username and password. You send them and the server verifies that it really is you. But because of the peculiarities of HTTP, once you have sent the name and password, the connection between you and the server is closed. Later you send a new request. The server has no way of knowing that it is you again, since the new request arrives over a new connection.

Now imagine the same scheme with cookie technology added. You send a request to the server and it replies that it wants a username and password. You send them, and the server returns a cookie in which it records that the owner of this cookie has already provided a valid username and password. The connection closes. Later you send a new request, and since it goes to the same server, the client must return a copy of the cookies received from it. The new request therefore carries the cookie which says that you provided valid credentials earlier.

So cookies are created by servers, with the data sent by the web server along with the response to a request for a resource (for example a web page, a piece of text, an image or another file). The cookie travels as part of the protocol's header information. When a client (the client software, i.e. the browser) receives a response containing a cookie, it must process it: if the client already has such a cookie, it must update its information about it; if it does not, it must create it; if the cookie has expired, it must destroy it; and so on.

It is often said that cookies are small text files stored on the user's computer. That is not always true. Cookies were indeed separate text files in early versions of some browsers (Internet Explorer and Netscape Navigator, for example), but most modern browsers store cookies differently. Current versions of Mozilla Firefox and Google Chrome, for instance, keep all cookies in a single file that is an sqlite database. The database approach is quite suitable, since cookies are structured information that is convenient to keep in a database, and access to it is considerably more efficient. Microsoft Edge, however, still stores cookies as text files in the AppData directory.

What parameters do the cookies sent by servers contain? Each cookie can carry: a name, a value, a domain, an address (path), an expiry period and some security parameters (whether it is valid only over https, and whether it is accessible only to the transfer protocol, i.e. hidden from javascript and other application layers on the client). The name and the value are mandatory for every cookie: they specify the name of the variable in which the information will be stored and the corresponding value, the stored information itself.
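
All of these parameters map onto attributes of the Set-Cookie header. A minimal sketch, again with the standard http.cookies module (the cookie name, value and domain are made up for illustration):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["prefs"] = "dark-theme"
cookie["prefs"]["domain"] = ".example.com"  # valid for the domain and all subdomains
cookie["prefs"]["path"] = "/"               # valid for the whole site
cookie["prefs"]["max-age"] = 3600           # expires after one hour
cookie["prefs"]["secure"] = True            # sent only over https
cookie["prefs"]["httponly"] = True          # hidden from javascript on the client

set_cookie_line = cookie["prefs"].OutputString()
print(set_cookie_line)
```

Only the name and value are mandatory; every attribute after them is optional and defaults as described above.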

The domain is the main part of the address of the website sending the cookie, the part that identifies the machine (server) on the network. According to the technical specifications, the cookie's domain must match the domain of the server that sends it, but there are some exceptions. The first exception is that when several levels of domains are used, cookies can be created for different parts of those levels. For example, the response to a request to the domain example.com can create a cookie valid only for example.com, or a cookie valid for that domain and all of its subdomains. If the cookie is valid for all subdomains, this is written with a dot before the domain name, i.e. .example.com. The second exception applies when the cookie is created not by the server but by an application layer on the client (javascript, for example). The js file may be loaded from one domain into an html page from another domain; the server sending a cookie can send it on behalf of the domain hosting the js file, but the js itself, while loaded in a page from the other domain, can create (and read) a cookie for the domain of the html page.

The address (or path) is the remaining part of the URL. By default cookies are valid for the path / (i.e. the root of the website), which means the cookie is valid for every possible address on the server. It is also possible, however, to explicitly specify a particular path for which the cookie is valid. Addresses were conceived as a representation of a path in a hierarchical file structure on the server (even though that is not required), so cookie paths describe the place of their creation in a hierarchical file system. If a cookie's path is /dir/, for example, the cookie is valid in the directory named dir, including all of its subdirectories.

To give a somewhat more realistic example: if we use cookies to store authentication information for the admin panel of a website located in the /admin/ directory, we can specify that those cookies are valid only for the /admin/ path. That way, the cookies created by the server for the needs of the admin panel will not be sent along with requests for other resources on the same server.

The expiry period determines how long the user must keep the cookie and return it with every subsequent request to the same server and address (path). When the server wants to delete one of the user's cookies, it must send that cookie with an expiry date in the past, which causes the cookie to expire and be deleted automatically on the user's side.
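
The deletion trick looks like this in a short sketch: the server re-sends the cookie with an empty value and a date in the past (the cookie name is illustrative), and the browser discards it on receipt.

```python
from http.cookies import SimpleCookie

expired = SimpleCookie()
expired["session_id"] = ""  # the value no longer matters
expired["session_id"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"  # a date in the past
expired["session_id"]["max-age"] = 0  # belt and braces: expire immediately

delete_header = expired["session_id"].OutputString()
print(delete_header)
```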

Cookies also have parameters whose job is to secure the transmitted data. These are two boolean parameters: one determines whether the cookie is accessible (for both reading and writing) only to the http protocol, or also to the application layer on the client (javascript, for example); the other determines whether the cookie is transmitted over all protocols or only over https (secure http).

As you may have guessed, a single communication can involve more than one cookie. Just as the server can create more than one cookie at a time, the client can return more than one cookie to the server. That is precisely why cookies have names in addition to domains and addresses.

Cookies also come with a number of limits. Most browsers do not allow more than 20 simultaneously valid cookies for one and the same domain; in Mozilla Firefox the limit is 50, and in Opera 30. The size of each individual cookie is also capped at 4KB (4096 bytes). The cookie specification RFC 2109 from 1997 states that a client may store up to 300 cookies, up to 20 per domain, each up to 4KB in size. The later specification RFC 6265 from 2011 raises the limits to 3000 in total and 50 per domain. Keep in mind, though, that every cookie is sent by the client with every subsequent request to the server. If we hit the ceiling of those limits with 50 cookies of 4KB each, nearly 200KB will travel with every request in cookies alone, which can become a serious traffic burden even with the technical capabilities of modern Internet access.
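
The 200KB figure follows directly from the RFC 6265 per-domain limits:

```python
per_cookie_bytes = 4096  # 4KB cap per cookie
per_domain_limit = 50    # RFC 6265 per-domain cookie count

total_bytes = per_cookie_bytes * per_domain_limit
print(total_bytes, "bytes =", total_bytes / 1024, "KB")  # 204800 bytes = 200.0 KB
```

That overhead rides along on every single request to the domain, uploads included, which is why hitting the limits is a bad idea in practice.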

Of course, the earlier example of saving the user's successful authentication in a cookie has many security implications. First of all, it is a bad idea to save the user's username and password in a cookie, because the cookie is stored on their computer. This means that at any time, a malicious actor who gains access to the user's computer could read the username and password from the cookies saved there. On the other hand, if we store only the username in the cookie, without the password, we have no protection against forged cookies: any malicious actor could craft a fake cookie containing an arbitrary (someone else's) username and impersonate that user before the server.

That is why the most commonly used mechanism is this: on each authentication with a username and password, once the server has verified them, it creates a temporarily valid identifier and sends it as a cookie. Different technologies use different names for this identifier: one-time password (OTP), token, session, and so on. Under this scheme the server stores information about the user for a limited time (the lifetime of the session). Each such record (often called a session) receives an identification number, which is sent to the user as a cookie. Since the user returns this identifier with every subsequent request, the server can restore the stored user information and make it available on each request. At the same time, the information is stored on the server, not on the client, so a malicious user cannot modify or forge it. Moreover, the identifier is valid for a limited period (for example, 30 minutes): even if the cookie with the identifier remains on the user's computer, the identifier recorded in it will be useless after half an hour. Last but not least, pressing the logout button deletes the user data stored on the server even if the 30 minutes have not elapsed. This is why it is important to always use the logout buttons when leaving online systems.
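
The scheme above can be sketched as follows (a simplified illustration, not production code; the credential check is stubbed out and the user names are invented):

```python
import secrets
import time

SESSION_TTL = 30 * 60  # 30 minutes, as in the example above
sessions = {}          # server-side store: token -> (username, expiry timestamp)

def log_in(username, password):
    # Credential verification is assumed to have succeeded here.
    token = secrets.token_hex(16)  # random, unguessable session identifier
    sessions[token] = (username, time.time() + SESSION_TTL)
    return token  # sent back to the client as a cookie value

def current_user(token):
    record = sessions.get(token)
    if record is None:
        return None  # unknown or already deleted session
    username, expiry = record
    if time.time() > expiry:
        del sessions[token]  # the identifier has outlived its 30 minutes
        return None
    return username

def log_out(token):
    # Pressing "log out" destroys the server-side record immediately,
    # even if the TTL has not run out yet.
    sessions.pop(token, None)

token = log_in("alice", "s3cret")
print(current_user(token))  # alice
log_out(token)
print(current_user(token))  # None
```

Note that everything of value lives in the `sessions` dictionary on the server; the cookie carries only an opaque random token, so there is nothing in it for an attacker to forge.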

What else is stored in cookies? Practically everything! Cookies are very often used to store a user's individual settings. When the user changes a setting, the server sends them a cookie with that setting and a long expiry period. On every subsequent visit by the same user to the same site, the saved setting travels in the cookie along with the request to the server. The server knows about the desired setting and applies it when returning the response to the request. An example of such a setting is the number of records shown per page: once you change that number, the chosen value can be saved in a cookie, and with every subsequent request the server will always know about the setting and return the correct response.

Other information is stored in cookies as well, for example the items in an online shop's shopping cart, or statistical information about when you last visited a site, how many times you have visited it, which pages you viewed and how long you stayed on each of them.

Does the European Union ban cookies? The Cookie Monster from Sesame Street would be quite upset to learn that the EU wants to restrict the use of cookies. In reality the truth is rather different, but the information is widely misinterpreted. Let us leave the technological sphere and venture a little into the legal one. First of all, which are the relevant legislative documents? The one usually cited is European Directive 2009/136/EC of 25 November 2009. The truth is that this directive does not address cookies directly. It amends another directive, Directive 2002/22/EC of 7 March 2002, on universal service (part of which is Internet access) and users' rights. The 2009 amendment reads as follows (Art. 5, p. 30):

5. Article 5(3) shall be replaced by the following text:

“3. Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service.”

The preamble of the same directive also mentions the following:

(66) Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. The methods of providing information and offering the right to refuse should be as user-friendly as possible. Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Where it is technically possible and effective, in accordance with the relevant provisions of Directive 95/46/EC, the user's consent to processing may be expressed by using the appropriate settings of a browser or other application. The enforcement of these requirements should be made more effective by way of enhanced powers granted to the relevant national authorities.

Both quotations also direct our attention to something else: they cite Directive 95/46/EC of 24 October 1995, which deals with the protection of individuals with regard to the processing of personal data. Of course, it is hard to say that a directive from the mid-1990s directly addresses the functioning of the Internet, which was still rather uncommon at the time, while cookie technology, having appeared only a few years earlier, was still extremely new and rarely used.

Before continuing with the analysis, we should note that, as far as protecting users from cookies is concerned, none of the three cited directives has yet been transposed into national legislation (at least not in the Electronic Governance Act, the Electronic Communications Act or the Personal Data Protection Act). Thankfully, the EU directive mechanism is designed in such a way that directives are binding on all member states, whether or not national legislation exists for the respective sphere.

And yet, why does the EU want to restrict the use of cookies? Among the many applications of the technology listed so far (storing authentication, storing settings, tracking user behaviour, collecting user statistics, marketing analysis, storing shopping cart data and more, some of which overlap or complement each other), cookies can also be used for serious intrusion into users' personal space, through various forms of tracking and analysis of users' consumption and behaviour on the Internet, for the purpose of serving targeted advertising or for other, even unlawful, ends.

Let us look at a more realistic example based on the technology already described. We have a simple informational website about cars which has no special functions and provides no services or anything else. The site, however, uses Google Analytics (a free tool for collecting visitor statistics, provided by Google), and its owner, hoping to monetize the information gathered on the site at least to some minimal degree, has also enabled Google AdWords (a service for publishing advertising banners, provided by Google). We also have a user searching Google for information on repairing a flat car tyre. The user finds the site cited above in Google, clicks the link and lands on the site. The same user also has an email account in Gmail (a free email service provided by Google). As you will have noticed, one common denominator has been relentlessly following us along the entire way: Google. It is far from the only big player in this market; the example with it is simply the most accessible for a general audience. In effect, Google simultaneously has access to the letters the user has sent and received, to what they searched for, to which site they entered and what they read there (exactly which pages), to how long they spent on that site, and also to information about every other site the same user has visited in the past, regardless of whether they visited it from the same computer or another one. It is enough that all the sites use Google Analytics for their statistics. If the user has a work computer and a home computer and has logged into their Gmail from both, Google Analytics can work out that the sites visited on the two computers, which otherwise have no connection to each other, were visited by one and the same user.
The user should then not be surprised if they go to a third place, connected in no way with the previous two (the home and work computers), log into their email, and later visit some arbitrary site on which they see an advertisement for new car tyres.

Everything described so far can be tracked precisely through the cookie mechanism. It is no accident that cookies are called a state-keeping mechanism, created for "tracking the users of the protocol". Tracking in the positive, purely technological sense of the term, of course, but tracking nonetheless, and in the hands of ill-intentioned parties it can take on entirely different dimensions and be carried out for entirely different purposes.

All of this can be (and is) combined with other data collected about users: IP geolocation, information about the Internet connection, the device used, display size, operating system, software used and much other data that is supplied automatically as we browse the web.

Again, the way cookies function has protective mechanisms built in. For example, cookies from one website cannot be read by another site. But here the whole scheme breaks down because cookies are accessible to the client's application layer (javascript), while at the same time millions of sites around the world take advantage of a free visitor-statistics service from one and the same provider. That provider is the connecting link in the whole scheme. Each of the sites and each of the users individually holds no particularly useful information, but the connecting link holds all of it, and given the desire (and believe me, for all those services to be free, the desire has turned into a need), this accumulated information can be analysed, processed and used for any purpose.

And since such activities can often reach quite deep into people's private lives, the European Union has taken the necessary measures to oblige providers of Internet services (website owners) to inform users about what data is stored about them on their terminal devices (that is, on the clients' own computers) and for what purposes that data is used.

Several points are very important to clarify here. First, neither the use of cookies nor tracking as a process is banned; users simply have to be informed about what is being done and for what purpose, and the user must give explicit consent to it. The other important detail is that cookies are not the only mechanism for storing information on terminal devices, and European legislation is not limited to the use of cookies specifically. Local Storage is a modern alternative to cookies; although it functions differently and offers quite different capabilities, it also stores information on terminal devices, can likewise be used to track users, and can affect their rights regarding the processing of their personal data. In this sense the European directives cover every form of storing information on users' devices, not just cookies. The directives also treat the storing of data for tracking users separately from the cookies needed for the technological functioning of Internet systems. A distinction is likewise made between third-party cookies and cookies set by the site owners themselves; in the example with the cookies left by Google Analytics, Google is a third party.

When the storing of the data (cookies or otherwise) is performed by third parties (other than the service provider and the end user), this does not lift the service provider's obligations to inform users and to ask for their consent. In short, if I have a site that uses Google Analytics, the obligation to inform the site's users and to ask for their consent remains mine (that is, the site owner's, as the party providing the service), not the third party's (that is, it is not Google's obligation).

Another interesting fact is that the directive accepts as admissible that the user's consent may also be obtained through a setting of the browser or another application. Here we can turn back to technology. Some time ago there was P3P (Platform for Privacy Preferences), a technology that started out promising but was subsequently implemented only in Internet Explorer, and development of the specification was eventually discontinued by the W3C. One of the reasons cited is that the planned technology was relatively complex to implement. Today most browsers support Do Not Track (DNT), which is simply an HTTP header named DNT: if it is present in the client's request with the value 1, it indicates that the user does not consent to being tracked. Of course, the tracking addressed by DNT and the storing of and access to information on users' terminal devices addressed by the European directives are not the same thing. You can write and read information on the client without tracking them, which would still violate the European directives, and you can track the client without writing or reading data locally (for example via a browser fingerprint, with data stored entirely on the server).
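
Honouring the DNT header is straightforward on the server side. A minimal sketch, with a made-up handler that inspects the request headers (real frameworks expose them under their own APIs):

```python
def handle_request(headers):
    """Serve a response, skipping analytics when the client sends 'DNT: 1'."""
    do_not_track = headers.get("DNT") == "1"
    if do_not_track:
        return "served without tracking"
    return "served with analytics"

print(handle_request({"DNT": "1"}))  # served without tracking
print(handle_request({}))            # served with analytics
```

As noted above, respecting DNT addresses tracking but not storage as such, so it does not by itself satisfy the directives' consent requirement.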

Finally, let us sum up:

  1. The European Union does not ban cookies;
  2. The European Union provides measures for the protection of personal data, imposing rules that require users' permission when data is written to and read from their terminal devices;
  3. The informed-consent requirement is not limited to cookies alone; it covers all existing and any future technologies that allow writing and reading information on users' terminal devices;
  4. Users must be informed, and their consent must be requested, regardless of whether the data is written or read directly by the service provider or through the services of a third party. Put more simply, if we use Google Analytics, it is we who must warn the user and ask for their consent, not Google. In this case the service provider (the site itself) acts as a personal data controller (not within the meaning of the Bulgarian Personal Data Protection Act, but within the meaning of the European directives), while Google is a third party authorized by the controller to manage that data on its behalf and at its expense; ergo, the responsibility lies with the controller;
  5. Informing the user and asking for consent is not necessary when the data written and read is used for technological needs connected with providing the service that the user has explicitly requested to use. I would say that when this is the case, I would personally still choose to inform the user about what is written and read and why, without, however, asking for their consent;
  6. According to the European directives, consent may also be expressed by users through specific technologies created for the purpose, but using those technologies does not remove the need to inform the user about what data is stored and for what purposes it is processed;
  7. The European rules in this respect appear not to have been transposed into Bulgaria's national legislation, which does not mean they may be ignored.

Canadian Man Behind Popular ‘Orcus RAT’

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/07/canadian-man-is-author-of-popular-orcus-rat/

Far too many otherwise intelligent and talented software developers these days apparently think they can get away with writing, selling and supporting malicious software and then couching their commerce as a purely legitimate enterprise. Here's the story of how I learned the real-life identity of the Canadian man who's laboring under that same illusion as proprietor of one of the most popular and affordable tools for hacking into someone else's computer.

Earlier this week I heard from Daniel Gallagher, a security professional who occasionally enjoys analyzing new malicious software samples found in the wild. Gallagher said he and members of @malwrhunterteam and @MalwareTechBlog recently got into a Twitter fight with the author of Orcus RAT, a tool they say was explicitly designed to help users remotely compromise and control computers that don’t belong to them.

A still frame from a Youtube video showing Orcus RAT's keylogging ability to steal passwords from Facebook users and other credentials.


The author of Orcus — a person going by the nickname “Ciriis Mcgraw” a.k.a. “Armada” on Twitter and other social networks — claimed that his RAT was in fact a benign “remote administration tool” designed for use by network administrators and not a “remote access Trojan” as critics charged. Gallagher and others took issue with that claim, pointing out that they were increasingly encountering computers that had been infected with Orcus unbeknownst to the legitimate owners of those machines.

The malware researchers noted another reason that Mcgraw couldn’t so easily distance himself from how his clients used the software: He and his team are providing ongoing technical support and help to customers who have purchased Orcus and are having trouble figuring out how to infect new machines or hide their activities online.

What’s more, the range of features and plugins supported by Armada, they argued, goes well beyond what a system administrator would look for in a legitimate remote administration client like Teamviewer, including the ability to launch a keylogger that records the victim’s every computer keystroke, as well as a feature that lets the user peek through a victim’s Web cam and disable the light on the camera that alerts users when the camera is switched on.

A new feature of Orcus announced July 7 lets users configure the RAT so that it evades digital forensics tools used by malware researchers, including an anti-debugger and an option that prevents the RAT from running inside of a virtual machine.

Other plugins offered directly from Orcus’s tech support page (PDF) and authored by the RAT’s support team include a “survey bot” designed to “make all of your clients do surveys for cash;” a “USB/.zip/.doc spreader,” intended to help users “spread a file of your choice to all clients via USB/.zip/.doc macros;” a “Virustotal.com checker” made to “check a file of your choice to see if it had been scanned on VirusTotal;” and an “Adsense Injector,” which will “hijack ads on pages and replace them with your Adsense ads and disable adblocker on Chrome.”


Gallagher said he was so struck by the guy’s “smugness” and sheer chutzpah that he decided to look closer at any clues that Ciriis Mcgraw might have left behind as to his real-world identity and location. Sure enough, he found that Ciriis Mcgraw also has a Youtube account under the same name, and that a video Mcgraw posted in July 2013 pointed to a 33-year-old security guard from Toronto, Canada.

Gallagher noticed that the video — a bystander recording on the scene of a police shooting of a Toronto man — included a link to the domain policereview[dot]info. A search of the registration records attached to that Web site name shows that the domain was registered to a John Revesz in Toronto and to the email address john.revesz@gmail.com.

A reverse WHOIS lookup ordered from Domaintools.com shows the same john.revesz@gmail.com address was used to register at least 20 other domains, including “thereveszfamily.com,” “johnrevesz.com,” “revesztechnologies[dot]com,” and — perhaps most tellingly — “lordarmada.info”.

Johnrevesz[dot]com is no longer online, but this cached copy of the site from the indispensable archive.org includes his personal résumé, which states that John Revesz is a network security administrator whose most recent job in that capacity was as an IT systems administrator for TD Bank. Revesz’s LinkedIn profile indicates that for the past year at least he has served as a security guard for GardaWorld International Protective Services, a private security firm based in Montreal.

Revesz’s CV also says he’s the owner of the aforementioned Revesz Technologies, but it’s unclear whether that business actually exists; the company’s Web site currently redirects visitors to a series of sites promoting spammy and scammy surveys, come-ons and giveaways.


Contacted by KrebsOnSecurity, Revesz seemed surprised that I’d connected the dots, but beyond that did not try to disavow ownership of the Orcus RAT.

“Profit was never the intentional goal, however with the years of professional IT networking experience I have myself, knew that proper correct development and structure to the environment is no free venture either,” Revesz wrote in reply to questions about his software. “Utilizing my 15+ years of IT experience I have helped manage Orcus through its development.”

Revesz continued:

“As for your legalities question.  Orcus Remote Administrator in no ways violates Canadian laws for software development or sale.  We neither endorse, allow or authorize any form of misuse of our software.  Our EULA [end user license agreement] and TOS [terms of service] is very clear in this matter. Further we openly and candidly work with those prudent to malware removal to remove Orcus from unwanted use, and lock out offending users which may misuse our software, just as any other company would.”

Revesz said none of the aforementioned plugins are supported by Orcus and that all were developed by third-party developers, adding that “Orcus will never allow implementation of such features, and or plugins would be outright blocked on our part.”

In an apparent contradiction to that claim, plugins that allow Orcus users to disable the Webcam light on a computer running the software and one that enables the RAT to be used as a “stresser” to knock sites and individual users offline are available directly from Orcus Technologies’ Github page.

Revesz also offers a service to help people cover their tracks online. Using his alter ego “Armada” on the hacker forum Hackforums[dot]net, Revesz sells a “bulletproof dynamic DNS service” that promises not to keep records of customer activity.

Dynamic DNS services allow users to have Web sites hosted on servers that frequently change their Internet addresses. This type of service is useful for people who want to host a Web site on a home-based Internet address that may change from time to time, because dynamic DNS services can be used to easily map the domain name to the user’s new Internet address whenever it happens to change.


Unfortunately, these dynamic DNS providers are extremely popular in the attacker community, because they allow bad guys to keep their malware and scam sites up even when researchers manage to track the attacking IP address and convince the ISP responsible for that address to disconnect the malefactor. In such cases, dynamic DNS allows the owner of the attacking domain to simply re-route the attack site to another Internet address that he controls.

Free dynamic DNS providers tend to report or block suspicious or outright malicious activity on their networks, and may well share evidence about the activity with law enforcement investigators. In contrast, Armada’s dynamic DNS service is managed solely by him, and he promises in his ad on Hackforums that the service — to which he sells subscriptions of various tiers for between $30-$150 per year — will not log customer usage or report anything to law enforcement.

According to writeups by Kaspersky Lab and Heimdal Security, Revesz’s dynamic DNS service has been seen used in connection with malicious botnet activity by another RAT known as Adwind.  Indeed, Revesz’s service appears to involve the domain “nullroute[dot]pw”, which is one of 21 domains registered to a “Ciriis Mcgraw,” (as well as orcus[dot]pw and orcusrat[dot]pw).

I asked Gallagher (the researcher who originally tipped me off about Revesz’s activities) whether he was persuaded at all by Revesz’s arguments that Orcus was just a tool and that Revesz wasn’t responsible for how it was used.

Gallagher said he and his malware researcher friends had private conversations with Revesz in which he seemed to acknowledge that some aspects of the RAT went too far, and promised to release software updates to remove certain objectionable functionalities. But Gallagher said those promises felt more like the actions of someone trying to cover himself.

“I constantly try to question my assumptions and make sure I’m playing devil’s advocate and not jumping the gun,” Gallagher said. “But I think he’s well aware that what he’s doing is hurting people, it’s just now he knows he’s under the microscope and trying to do and say enough to cover himself if it ever comes down to him being questioned by law enforcement.”

Some stuff about color

Post Syndicated from Eevee original https://eev.ee/blog/2016/07/16/some-stuff-about-color/

I’ve been trying to paint more lately, which means I have to actually think about color. Like an artist, I mean. I’m okay at thinking about color as a huge nerd, but I’m still figuring out how to adapt that.

While I work on that, here is some stuff about color from the huge nerd perspective, which may or may not be useful or correct.


Hues are what we usually think of as “colors”, independent of how light or dim or pale they are: general categories like purple and orange and green.

Strictly speaking, a hue is a specific wavelength of light. I think it’s really weird to think about light as coming in a bunch of wavelengths, so I try not to think about the precise physical mechanism too much. Instead, here’s a rainbow.

rainbow spectrum

These are all the hues the human eye can see. (Well, the ones this image and its colorspace and your screen can express, anyway.) They form a nice spectrum, which wraps around so the two red ends touch.

(And here is the first weird implication of the physical interpretation: purple is not a real color, in the sense that there is no single wavelength of light that we see as purple. The actual spectrum runs from red to blue; when we see red and blue simultaneously, we interpret it as purple.)

The spectrum is divided by three sharp lines: yellow, cyan, and magenta. The areas between those lines are largely dominated by red, green, and blue. These are the two sets of primary colors, those hues from which any others can be mixed.

Red, green, and blue (RGB) make up the additive primary colors, so named because they add light on top of black. LCD screens work exactly this way: each pixel is made up of three small red, green, and blue rectangles. It’s also how the human eye works, which is fascinating but again a bit too physical.

Cyan, magenta, and yellow are the subtractive primary colors, which subtract light from white. This is how ink, paint, and other materials work. When you look at an object, you’re seeing the colors it reflects, which are the colors it doesn’t absorb. A red ink reflects red light, which means it absorbs green and blue light. Cyan ink only absorbs red, and yellow ink only absorbs blue; if you mix them, you’ll get ink that absorbs both red and blue, and thus will appear green. A pure black is often included to make CMYK; mixing all three colors would technically get you black, but it might be a bit muddy and would definitely use three times as much ink.
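As a rough sketch, you can model subtractive mixing by treating each ink as a per-channel filter and multiplying reflectances. This is an idealization (real pigments are messier), but it makes the cyan + yellow = green logic concrete:

```python
# Idealized subtractive mixing: each ink is a per-channel reflectance
# filter, and layering two inks multiplies their reflectances.
# (A simplification for illustration; real pigments don't behave this cleanly.)

CYAN = (0.0, 1.0, 1.0)    # absorbs red, reflects green and blue
YELLOW = (1.0, 1.0, 0.0)  # absorbs blue, reflects red and green

def mix(a, b):
    # Light reflected by the mix passes both filters, so multiply channels.
    return tuple(x * y for x, y in zip(a, b))

print(mix(CYAN, YELLOW))  # (0.0, 1.0, 0.0) -- only green survives
```

Under this model, mixing all three subtractive primaries zeroes out every channel, which is the "technically black, but muddy" case mentioned above.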

The great kindergarten lie

Okay, you probably knew all that. What confused me for the longest time was how no one ever mentioned the glaring contradiction with what every kid is taught in grade school art class: that the primary colors are red, blue, and yellow. Where did those come from, and where did they go?

I don’t have a canonical answer for that, but it does make some sense. Here’s a comparison: the first spectrum is a full rainbow, just like the one above. The second is the spectrum you get if you use red, blue, and yellow as primary colors.

a full spectrum of hues, labeled with color names that are roughly evenly distributed
a spectrum of hues made from red, blue, and yellow

The color names come from xkcd’s color survey, which asked a massive number of visitors to give freeform names to a variety of colors. One of the results was a map of names for all the fully-saturated colors, providing a rough consensus for how English speakers refer to them.

The first wheel is what you get if you start with red, green, and blue — but since we’re talking about art class here, it’s really what you get if you start with cyan, magenta, and yellow. The color names are spaced fairly evenly, save for blue and green, which almost entirely consume the bottom half.

The second wheel is what you get if you start with red, blue, and yellow. Red has replaced magenta, and blue has replaced cyan, so neither color appears on the wheel — red and blue are composites in the subtractive model, and you can’t make primary colors like cyan or magenta out of composite colors.

Look what this has done to the distribution of names. Pink and purple have shrunk considerably. Green is half its original size and somewhat duller. Red, orange, and yellow now consume a full half of the wheel.

There’s a really obvious advantage here, if you’re a painter: people are orange.

Yes, yes, we subdivide orange into a lot of more specific colors like “peach” and “brown”, but peach is just pale orange, and brown is just dark orange. Everyone, of every race, is approximately orange. Sunburn makes you redder; fear and sickness make you yellower.

People really like to paint other people, so it makes perfect sense to choose primary colors that easily mix to make people colors.

Meanwhile, cyan and magenta? When will you ever use those? Nothing in nature remotely resembles either of those colors. The true color wheel is incredibly, unnaturally bright. The reduced color wheel is much more subdued, with only one color that stands out as bright: yellow, the color of sunlight.

You may have noticed that I even cheated a little bit. The blue in the second wheel isn’t the same as the blue from the first wheel; it’s halfway between cyan and blue, a tertiary color I like to call azure. True pure blue is just as unnatural as true cyan; azure is closer to the color of the sky, which is reflected as the color of water.

People are orange. Sunlight is yellow. Dirt and rocks and wood are orange. Skies and oceans are blue. Blush and blood and sunburn are red. Sunsets are largely red and orange. Shadows are blue, the opposite of yellow. Plants are green, but in sun or shade they easily skew more blue or yellow.

All of these colors are much easier to mix if you start with red, blue, and yellow. It may not match how color actually works, but it’s a useful approximation for humans. (Anyway, where will you find dyes that are cyan or magenta? Blue is hard enough.)

I’ve actually done some painting since I first thought about this, and would you believe they sell paints in colors other than bright red, blue, and yellow? You can just pick whatever starting colors you want and the whole notion of “primary” goes a bit out the window. So maybe this is all a bit moot.

More on color names

The way we name colors fascinates me.

A “basic color term” is a single, unambiguous, very common name for a group of colors. English has eleven: red, orange, yellow, green, blue, purple, black, white, gray, pink, and brown.

Of these, orange is the only tertiary hue; brown is the only name for a specifically low-saturation color; pink and gray are the only names for specifically light shades. I can understand gray — it’s handy to have a midpoint between black and white — but the other exceptions are quite interesting.

Looking at the first color wheel again, “blue” and “green” together consume almost half of the spectrum. That seems reasonable, since they’re both primary colors, but “red” is relatively small; large chunks of it have been eaten up by its neighbors.

Orange is a tertiary color in either RGB or CMYK: it’s a mix of red and yellow, a primary and secondary color. Yet we ended up with a distinct name for it. I could understand if this were to give white folks’ skin tones their own category, similar to the reasons for the RBY art class model, but we don’t generally refer to white skin as “orange”. So where did this color come from?

Sometimes I imagine a parallel universe where we have common names for other tertiary colors. How much richer would the blue/green side of the color wheel be if “chartreuse” or “azure” were basic color terms? Can you even imagine treating those as distinct colors, not just variants of green or blue? That’s exactly how we treat orange, even though it’s just a variant of red.

I can’t speak to whether our vocabulary truly influences how we perceive or think (and that often-cited BBC report seems to have no real source). But for what it’s worth, I’ve been trying to think of “azure” as distinct for a few years now, and I’ve had a much easier time dealing with blues in art and design. Giving the cyan end of blue a distinct and common name has given me an anchor, something to arrange thoughts around.

Come to think of it, yellow is an interesting case as well. A decent chunk of the spectrum was ultimately called “yellow” in the xkcd map; here’s that chunk zoomed in a bit.

full range of xkcd yellows

How much of this range would you really call yellow, rather than green (or chartreuse!) or orange? Yellow is a remarkably specific color: mixing it even slightly with one of its neighbors loses some of its yellowness, and darkening it moves it swiftly towards brown.

I wonder why this is. When we see a yellowish-orange, are we inclined to think of it as orange because it looks like orange under yellow sunlight? Is it because yellow is between red and green, and the red and green receptors in the human eye pick up on colors that are very close together?

Most human languages develop their color terms in a similar order, with a split between blue and green often coming relatively late in a language’s development. Of particular interest to me is that orange and pink are listed as a common step towards the end — I’m really curious as to whether that happens universally and independently, or it’s just influence from Western color terms.

I’d love to see a list of the basic color terms in various languages, but such a thing is proving elusive. There’s a neat map of how many colors exist in various languages, but it doesn’t mention what the colors are. It’s easy enough to find a list of colors in various languages, like this one, but I have no idea whether they’re basic in each language. Note also that this chart only has columns for English’s eleven basic colors, even though Russian and several other languages have a twelfth basic term for azure. The page even mentions this, but doesn’t include a column for it, which seems ludicrous in an “omniglot” table.

The only language I know many color words in is Japanese, so I went delving into some of its color history. It turns out to be a fascinating example, because you can see how the color names developed right in the spelling of the words.

See, Japanese has a couple different types of words that function like adjectives. Many of the most common ones end in -i, like kawaii, and can be used like verbs — we would translate kawaii as “cute”, but it can function just as well as “to be cute”. I’m under the impression that -i adjectives trace back to Old Japanese, and new ones aren’t created any more.

That’s really interesting, because to my knowledge, only five Japanese color names are in this form: kuroi (black), shiroi (white), akai (red), aoi (blue), and kiiroi (yellow). So these are, necessarily, the first colors the language could describe. If you compare to the chart showing progression of color terms, this is the bottom cell in column IV: white, red, yellow, green/blue, and black.

A great many color names are compounds with iro, “color” — for example, chairo (brown) is cha (tea) + iro. Of the five basic terms above, kiiroi is almost of that form, but unusually still has the -i suffix. (You might think that shiroi contains iro, but shi is a single character distinct from i. kiiroi is actually written with the kanji for iro.) It’s possible, then, that yellow was the latest of these five words — and that would give Old Japanese words for white, red/yellow, green/blue, and black, matching the most common progression.

Skipping ahead some centuries, I was surprised to learn that midori, the word for green, was only promoted to a basic color fairly recently. It’s existed for a long time and originally referred to “greenery”, but it was considered to be a shade of blue (ao) until the Allied occupation after World War II, when teaching guidelines started to mention a blue/green distinction. (I would love to read more details about this, if you have any; the West’s coming in and adding a new color is a fascinating phenomenon, and I wonder what other substantial changes were made to education.)

Japanese still has a number of compound words that use ao (blue!) to mean what we would consider green: aoshingou is a green traffic light, aoao means “lush” in a natural sense, aonisai is a greenhorn (presumably from the color of unripe fruit), aojiru is a drink made from leafy vegetables, and so on.

This brings us to at least six basic colors, the fairly universal ones: black, white, red, yellow, blue, and green. What others does Japanese have?

From here, it’s a little harder to tell. I’m not exactly fluent and definitely not a native speaker, and resources aimed at native English speakers are more likely to list colors familiar to English speakers. (I mean, until this week, I never knew just how common it was for aoi to mean green, even though midori as a basic color is only about as old as my parents.)

I do know two curious standouts: pinku (pink) and orenji (orange), both English loanwords. I can’t be sure that they’re truly basic color terms, but they sure do come up a lot. The thing is, Japanese already has names for these colors: momoiro (the color of peach — flowers, not the fruit!) and daidaiiro (the color of, um, an orange). Why adopt loanwords for concepts that already exist?

I strongly suspect, but cannot remotely qualify, that pink and orange weren’t basic colors until Western culture introduced the idea that they could be — and so the language adopted the idea and the words simultaneously. (A similar thing happened with grey, natively haiiro and borrowed as guree, but in my limited experience even the loanword doesn’t seem to be very common.)

Based on the shape of the words and my own unqualified guesses of what counts as “basic”, the progression of basic colors in Japanese seems to be:

  1. black, white, red (+ yellow), blue (+ green) — Old Japanese
  2. yellow — later Old Japanese
  3. brown — sometime in the past millennium
  4. green — after WWII
  5. pink, orange — last few decades?

And in an effort to put a teeny bit more actual research into this, I searched the Leeds Japanese word frequency list (drawn from websites, so modern Japanese) for some color words. Here’s the rank of each. Word frequency is generally such that the actual frequency of a word is inversely proportional to its rank — so a word in rank 100 is twice as common as a word in rank 200. The five -i colors are split into both noun and adjective forms, so I’ve included an adjusted rank that you would see if they were counted as a single word, using ab / (a + b).

  • white: 1010 ≈ 1959 (as a noun) + 2083 (as an adjective)
  • red: 1198 ≈ 2101 (n) + 2790 (adj)
  • black: 1253 ≈ 2017 (n) + 3313 (adj)
  • blue: 1619 ≈ 2846 (n) + 3757 (adj)
  • green: 2710
  • yellow: 3316 ≈ 6088 (n) + 7284 (adj)
  • orange: 4732 (orenji), n/a (daidaiiro)
  • pink: 4887 (pinku), n/a (momoiro)
  • purple: 6502 (murasaki)
  • grey: 8472 (guree), 10848 (haiiro)
  • brown: 10622 (chairo)
  • gold: 12818 (kin’iro)
  • silver: n/a (gin’iro)
  • navy: n/a (kon)

“n/a” doesn’t mean the word is never used, only that it wasn’t in the top 15,000.

I’m not sure where the cutoff is for “basic” color terms, but it’s interesting to see where the gaps lie. I’m especially surprised that yellow is so far down, and that purple (which I hadn’t even mentioned here) is as high as it is. Also, green is above yellow, despite having been a basic color for less than a century! Go, green.

For comparison, in American English:

  • black: 254
  • white: 302
  • red: 598
  • blue: 845
  • green: 893
  • yellow: 1675
  • brown: 1782
  • golden: 1835
  • gray: 1949
  • pink: 2512
  • orange: 3171
  • purple: 3931
  • silver: n/a
  • navy: n/a

Don’t read too much into the actual ranks; the languages and corpuses are both very different.

Color models

There are numerous ways to arrange and identify colors, much as there are numerous ways to identify points in 3D space. There are also benefits and drawbacks to each model, but I’m often most interested in how much sense the model makes to me as a squishy human.

RGB is the most familiar to anyone who does things with computers — it splits a color into its red, green, and blue channels, and measures the amount of each from “none” to “maximum”. (HTML sets this range as 0 to 255, but you could just as well call it 0 to 1, or -4 to 7600.)

RGB has a couple of interesting problems. Most notably, it’s kind of difficult to read and write by hand. You can sort of get used to how it works, though I’m still not particularly great at it. I keep in mind these rules:

  1. The largest channel is roughly how bright the color is.

    This follows pretty easily from the definition of RGB: it’s colored light added on top of black. The maximum amount of every color makes white, so less than the maximum must be darker, and of course none of any color stays black.

  2. The smallest channel is how pale (desaturated) the color is.

    Mixing equal amounts of red, green, and blue will produce grey. So if the smallest channel is green, you can imagine “splitting” the color between a grey (green, green, green), and the leftovers (red – green, 0, blue – green). Mixing grey with a color will of course make it paler — less saturated, closer to grey — so the bigger the smallest channel, the greyer the color.

  3. Whatever’s left over tells you the hue.

It might be time for an illustration. Consider the color (50%, 62.5%, 75%). The brightness is “capped” at 75%, the largest channel; the desaturation is 50%, the smallest channel. Here’s what that looks like.

illustration of the color (50%, 62.5%, 75%) split into three chunks of 50%, 25%, and 25%

Cutting out the grey and the darkness leaves a chunk in the middle of actual differences between the colors. Note that I’ve normalized it to (0%, 50%, 100%), which is the percentage of that small middle range. Removing the smallest and largest channels will always leave you with a middle chunk where at least one channel is 0% and at least one channel is 100%. (Or it’s grey, and there is no middle chunk.)

The odd one out is green at 50%, so the hue of this color is halfway between cyan (green + blue) and blue. That hue is… azure! So this color is a slightly darkened and fairly dull azure. (The actual amount of “greyness” is the smallest relative to the largest, so in this case it’s about ⅔ grey, or about ⅓ saturated.) Here’s that color.

a slightly darkened, fairly dull azure
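The three rules can be sketched directly in code. This hypothetical `decompose` helper (a name of my choosing, not a standard function) reproduces the worked example:

```python
def decompose(r, g, b):
    """Split an RGB color (channels 0-1) into brightness, saturation,
    and a normalized hue chunk, following the three rules above."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return mx, 0.0, None  # pure grey: no hue chunk at all
    # Rule 1: the largest channel caps the brightness.
    # Rule 2: the smallest channel is the grey portion; the saturation
    #         is whatever fraction of the largest channel isn't grey.
    saturation = 1 - mn / mx
    # Rule 3: normalize the leftovers so at least one channel is 0
    #         and at least one is 1; that middle chunk is the hue.
    hue_chunk = tuple((c - mn) / (mx - mn) for c in (r, g, b))
    return mx, saturation, hue_chunk

value, sat, hue = decompose(0.50, 0.625, 0.75)
print(value)  # 0.75 -- capped by the largest channel
print(sat)    # ~0.333 -- about one-third saturated, two-thirds grey
print(hue)    # (0.0, 0.5, 1.0) -- halfway between cyan and blue: azure
```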

This is a bit of a pain to do in your head all the time, so why not do it directly?

HSV is what you get when you directly represent colors as hue, saturation, and value. It’s often depicted as a cylinder, with hue represented as an angle around the color wheel: 0° for red, 120° for green, and 240° for blue. Saturation ranges from grey to a fully-saturated color, and value ranges from black to, er, the color. The azure above is (210°, ⅓, ¾) in HSV — 210° is halfway between 180° (cyan) and 240° (blue), ⅓ is the saturation measurement mentioned before, and ¾ is the largest channel.

It’s that hand-waved value bit that gives me trouble. I don’t really know how to intuitively explain what value is, which makes it hard to modify value to make the changes I want. I feel like I should have a better grasp of this after a year and a half of drawing, but alas.

I prefer HSL, which uses hue, saturation, and lightness. Lightness ranges from black to white, with the unperturbed color in the middle. Here’s lightness versus value for the azure color. (Its lightness is ⅝, the average of the smallest and largest channels.)

comparison of lightness and value for the azure color

The lightness just makes more sense to me. I can understand shifting a color towards white or black, and the color in the middle of that bar feels related to the azure I started with. Value looks almost arbitrary; I don’t know where the color at the far end comes from, and it just doesn’t seem to have anything to do with the original azure.
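Python's standard `colorsys` module can double-check the azure example's coordinates in both models (it calls HSL "HLS" and orders the components differently):

```python
import colorsys

# The azure example: (50%, 62.5%, 75%) should come out as hue 210
# degrees, with value 3/4 in HSV and lightness 5/8 in HSL.
h, s, v = colorsys.rgb_to_hsv(0.50, 0.625, 0.75)
print(h * 360, s, v)  # ~210.0, ~0.333, 0.75

h, l, s = colorsys.rgb_to_hls(0.50, 0.625, 0.75)
print(h * 360, l, s)  # ~210.0, 0.625, ~0.333
```

Note that the saturation values happen to agree for this particular color, but as mentioned later, HSV and HSL define saturation differently in general.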

I’d hoped Wikipedia could clarify this for me. It tells me value is the same thing as brightness, but the mathematical definition on that page matches the definition of intensity from the little-used HSI model. I looked up lightness instead, and the first sentence says it’s also known as value. So lightness is value is brightness is intensity, but also they’re all completely different.

Wikipedia also says that HSV is sometimes known as HSB (where the “B” is for “brightness”), but I swear I’ve only ever seen HSB used as a synonym for HSL. I don’t know anything any more.

Oh, and in case you weren’t confused enough, the definition of “saturation” is different in HSV and HSL. Good luck!

Wikipedia does have some very nice illustrations of HSV and HSL, though, including depictions of them as a cone and double cone.

(Incidentally, you can use HSL directly in CSS now — there are hsl() and hsla() CSS3 functions which evaluate as colors. Combining these with Sass’s scale-color() function makes it fairly easy to come up with decent colors by hand, without having to go back and forth with an image editor. And I can even sort of read them later!)

An annoying problem with all of these models is that the idea of “lightness” is never quite consistent. Even in HSL, a yellow will appear much brighter than a blue with the same saturation and lightness. You may even have noticed in the RGB split diagram that I used dark red and green text, but light blue — the pure blue is so dark that a darker blue on top is hard to read! Yet all three colors have the same lightness in HSL, and the same value in HSV.

Clearly neither of these definitions of lightness or brightness or whatever is really working. There’s a thing called luminance, which is a weighted sum of the red, green, and blue channels that puts green as a whopping ten times brighter than blue. It tends to reflect how bright colors actually appear.
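For illustration, here's relative luminance with the Rec. 709 / sRGB weights. These assume linear (not gamma-encoded) channel values, and they put green at roughly ten times the weight of blue:

```python
def luminance(r, g, b):
    # Rec. 709 / sRGB luminance weights, applied to linear RGB channels.
    # Green dominates; blue barely contributes.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luminance(0, 0, 1))  # 0.0722 -- pure blue is quite dark
print(luminance(0, 1, 0))  # 0.7152 -- pure green is ~10x brighter
```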

Unfortunately, luminance and related values are only used in fairly obscure color models, like YUV and Lab. I don’t mean “obscure” in the sense that nobody uses them, but rather that they’re very specialized and not often seen outside their particular niches: YUV is very common in video encoding, and Lab is useful for serious photo editing.

Lab is pretty interesting, since it’s intended to resemble how human vision works. It’s designed around the opponent process theory, which states that humans see color in three pairs of opposites: black/white, red/green, and yellow/blue. The idea is that we perceive color as somewhere along these axes, so a redder color necessarily appears less green — put another way, while it’s possible to see “yellowish green”, there’s no such thing as a “yellowish blue”.

(I wonder if that explains our affection for orange: we effectively perceive yellow as a fourth distinct primary color.)

Lab runs with this idea, making its three channels be lightness (but not the HSL lightness!), a (green to red), and b (blue to yellow). The neutral points for a and b are at zero, with green/blue extending in the negative direction and red/yellow extending in the positive direction.

Lab can express a whole bunch of colors beyond RGB, meaning they can’t be shown on a monitor, or even represented in most image formats. And you now have four primary colors in opposing pairs. That all makes it pretty weird, and I’ve actually never used it myself, but I vaguely aspire to do so someday.

I think those are all of the major ones. There’s also XYZ, which I think is some kind of master color model. Of course there’s CMYK, which is used for printing, but it’s effectively just the inverse of RGB.

With that out of the way, now we can get to the hard part!


I called RGB a color model: a way to break colors into component parts.

Unfortunately, RGB alone can’t actually describe a color. You can tell me you have a color (0%, 50%, 100%), but what does that mean? 100% of what? What is “the most blue”? More importantly, how do you build a monitor that can display “the most blue” the same way as other monitors? Without some kind of absolute reference point, this is meaningless.

A color space is a color model plus enough information to map the model to absolute real-world colors. There are a lot of these. I’m looking at Krita’s list of built-in colorspaces and there are at least a hundred, most of them RGB.

I admit I’m bad at colorspaces and have basically done my best to not ever have to think about them, because they’re a big tangled mess and hard to reason about.

For example! The effective default RGB colorspace that almost everything will assume you’re using by default is sRGB, specifically designed to be this kind of global default. Okay, great.

Now, sRGB has gamma built in. Gamma correction means slapping an exponent on color values to skew them towards or away from black. The color is assumed to be in the range 0–1, so any positive power will produce output from 0–1 as well. An exponent greater than 1 will skew towards black (because you’re multiplying a number less than 1 by itself), whereas an exponent less than 1 will skew away from black.

What this means is that halfway between black and white in sRGB isn’t (50%, 50%, 50%), but around (73%, 73%, 73%). Here’s a great example, borrowed from this post (with numbers out of 255):

alternating black and white lines alongside gray squares of 128 and 187

Which one looks more like the alternating bands of black and white lines? Surely the one you pick is the color that’s actually halfway between black and white.

And yet, in most software that displays or edits images, interpolating white and black will give you a 50% gray — much darker than the original looked. A quick test is to scale that image down by half and see whether the result looks closer to the top square or the bottom square. (Firefox, Chrome, and GIMP get it wrong; Krita gets it right.)

The right thing to do here is convert an image to a linear colorspace before modifying it, then convert it back for display. In a linear colorspace, halfway between white and black is still 50%, but it looks like the 73% grey. This is great fun: it involves a piecewise function and an exponent of 2.4.
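For the curious, here are the sRGB transfer functions as usually published: a linear segment near black plus that 2.4 exponent. Encoding linear 50% grey lands at about 73.5%, matching the example image:

```python
def srgb_encode(c):
    """Linear value (0-1) to sRGB-encoded value (0-1)."""
    if c <= 0.0031308:
        return 12.92 * c          # linear segment near black
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(c):
    """sRGB-encoded value (0-1) back to linear (0-1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_encode(0.5))  # ~0.735, i.e. roughly 187/255
print(srgb_decode(0.5))  # ~0.214 -- 50% sRGB grey is much darker in linear terms
```

The "right thing" mentioned above is then: decode to linear, blend or scale there, and re-encode for display.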

It’s really difficult to reason about this, for much the same reason that it’s hard to grasp text encoding problems in languages with only one string type. Ultimately you still have an RGB triplet at every stage, and it’s very easy to lose track of what kind of RGB that is. Then there’s the fact that most images don’t specify a colorspace in the first place so you can’t be entirely sure whether it’s sRGB, linear sRGB, or something else entirely; monitors can have their own color profiles; you may or may not be using a program that respects an embedded color profile; and so on. How can you ever tell what you’re actually looking at and whether it’s correct? I can barely keep track of what I mean by “50% grey”.

And then… what about transparency? Should a 50% transparent white atop solid black look like 50% grey, or 73% grey? Krita seems to leave it to the colorspace: sRGB gives the former, but linear sRGB gives the latter. Does this mean I should paint in a linear colorspace? I don’t know! (Maybe I’ll give it a try and see what happens.)

Something I genuinely can’t answer is what effect this has on HSV and HSL, which are defined in terms of RGB. Is there such a thing as linear HSL? Does anyone ever talk about this? Would it make lightness more sensible?

There is a good reason for this, at least: the human eye is better at distinguishing dark colors than light ones. I was surprised to learn that, but of course, it’s been hidden from me by sRGB, which is deliberately skewed to dedicate more space to darker colors. In a linear colorspace, a gradient from white to black would have a lot of indistinguishable light colors, but appear to have severe banding among the darks.

several different black to white gradients

All three of these are regular black-to-white gradients drawn in 8-bit color (i.e., channels range from 0 to 255). The top one is the naïve result if you draw such a gradient in sRGB: the midpoint is the too-dark 50% grey. The middle one is that same gradient, but drawn in a linear colorspace. Obviously, a lot of dark colors are “missing”, in the sense that we could see them but there’s no way to express them in linear color. The bottom gradient makes this more clear: it’s a gradient of all the greys expressible in linear sRGB.

This is the first time I’ve ever delved so deeply into exactly how sRGB works, and I admit it’s kind of blowing my mind a bit. Straightforward linear color is so much lighter, and this huge bias gives us a lot more to work with. Also, 73% being the midpoint certainly explains a few things about my problems with understanding brightness of colors.

There are other RGB colorspaces, of course, and I suppose they all make for an equivalent CMYK colorspace. YUV and Lab are families of colorspaces, though I think most people talking about Lab specifically mean CIELAB (or “L*a*b*”), and there aren’t really any competitors. HSL and HSV are defined in terms of RGB, and image data is rarely stored directly as either, so there aren’t really HSL or HSV colorspaces.

I think that exhausts all the things I know.

Real world color is also a lie

Just in case you thought these problems were somehow unique to computers. Surprise! Modelling color is hard because color is hard.

I’m sure you’ve seen the checker shadow illusion, possibly one of the most effective optical illusions, where the presence of a shadow makes a gray square look radically different than a nearby square of the same color.

Our eyes are very good at stripping away ambient light effects to tell what color something “really” is. Have you ever been outside in bright summer weather for a while, then come inside and found everything starkly blue? That’s lingering compensation for the yellow sunlight, which shifts everything slightly yellow; the opposite of yellow is blue.

Or, here, I like this. I’m sure there are more drastic examples floating around, but this is the best I could come up with. Here are some Pikachu I found via GIS.

photo of Pikachu plushes on a shelf

My question for you is: what color is Pikachu?

Would you believe… orange?

photo of Pikachu plushes on a shelf, overlaid with color swatches; the Pikachu in the background are orange

In each box, the bottom color is what I color-dropped, and the top color is the same hue with 100% saturation and 50% lightness. It’s the same spot, on the same plush, right next to each other — but the one in the background is orange, not yellow. At best, it’s brown.

What we see as “yellow in shadow” and interpret to be “yellow, but darker” turns out to be another color entirely. (The grey whistles are, likewise, slightly blue.)

Did you know that mirrors are green? You can see it in a mirror tunnel: the image gets slightly greener as it goes through the mirror over and over.

Distant mountains and other objects, of course, look bluer.

This all makes painting rather complicated, since it’s not actually about painting things the color that they “are”, but painting them in such a way that a human viewer will interpret them appropriately.

I, er, don’t know enough to really get very deep here. I really should, seeing as I keep trying to paint things, but I don’t have a great handle on it yet. I’ll have to defer to Mel’s color tutorial. (warning: big)

Blending modes

You know, those things in Photoshop.

I’ve always found these remarkably unintuitive. Most of them have names that don’t remotely describe what they do, the math doesn’t necessarily translate to useful understanding, and they’re incredibly poorly-documented. So I went hunting for some precise definitions, even if I had to read GIMP’s or Krita’s source code.

In the following, A is a starting image, and B is something being drawn on top with the given blending mode. (In the case of layers, B is the layer with the mode, and A is everything underneath.) Generally, the same operation is done on each of the RGB channels independently. Everything is scaled to 0–1, and results are generally clamped to that range.

I believe all of these treat layer alpha the same way: linear interpolation between A and the combination of A and B. If B has alpha t, and the blending mode is a function f, then the result is t × f(A, B) + (1 - t) × A.

If A and B themselves have alpha, the result is a little more complicated, and probably not that interesting. It tends to work how you’d expect. (If you’re really curious, look at the definition of BLEND() in GIMP’s developer docs.)
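That interpolation rule is simple enough to sketch in Python (assuming, as above, that only B has alpha; `blend_with_alpha` is my name, not anything from GIMP or Krita):

```python
def blend_with_alpha(f, a, b, t):
    """Apply blend mode f to channels a and b, then fade the result
    back toward a by the layer's alpha t (all values scaled 0-1)."""
    return t * f(a, b) + (1 - t) * a

multiply = lambda a, b: a * b

# A fully opaque Multiply layer darkens normally...
print(blend_with_alpha(multiply, 0.8, 0.5, 1.0))  # 0.4
# ...while at 50% alpha the result sits halfway between 0.8 and 0.4.
print(blend_with_alpha(multiply, 0.8, 0.5, 0.5))
```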

  • Normal: B. No blending is done; new pixels replace old pixels.

  • Multiply: A × B. As the name suggests, the channels are multiplied together. This is very common in digital painting for slapping on a basic shadow or tinting a whole image.

    I think the name has always thrown me off just a bit because “Multiply” sounds like it should make things bigger and thus brighter — but because we’re dealing with values from 0 to 1, Multiply can only ever make colors darker.

    Multiplying with black produces black. Multiplying with white leaves the other color unchanged. Multiplying with a gray is equivalent to blending with black. Multiplying a color with itself squares the color, which is similar to applying gamma correction.

    Multiply is commutative — if you swap A and B, you get the same result.

  • Screen: 1 - (1 - A)(1 - B). This is sort of an inverse of Multiply; it multiplies darkness rather than lightness. It’s defined as inverting both colors, multiplying, and inverting the result. Accordingly, Screen can only make colors lighter, and is also commutative. All the properties of Multiply apply to Screen, just inverted.

  • Hard Light: Equivalent to Multiply if B is dark (i.e., less than 0.5), or Screen if B is light. There’s an additional factor of 2 included to compensate for how the range of B is split in half: Hard Light with B = 0.4 is equivalent to Multiply with B = 0.8, since 0.4 is 0.8 of the way to 0.5. Right.

    This seems like a possibly useful way to apply basic highlights and shadows with a single layer? I may give it a try.

    The math is commutative, but since B is checked and A is not, Hard Light is itself not commutative.

  • Soft Light: Like Hard Light, but softer. No, really. There are several different versions of this, and they’re all a bit of a mess, not very helpful for understanding what’s going on.

    If you graphed the effect various values of B had on a color, you’d have a straight line from 0 up to 1 (at B = 0.5), and then it would abruptly change to a straight line back down to 0. Soft Light just seeks to get rid of that crease. Here’s Hard Light compared with GIMP’s Soft Light, where A is a black to white gradient from bottom to top, and B is a black to white gradient from left to right.

    graphs of combinations of all grays with Hard Light versus Soft Light

    You can clearly see the crease in the middle of Hard Light, where B = 0.5 and it transitions from Multiply to Screen.

  • Overlay: Equivalent to either Hard Light or Soft Light, depending on who you ask. In GIMP, it’s Soft Light; in Krita, it’s Hard Light except the check is done on A rather than B. Given the ambiguity, I think I’d rather just stick with Hard Light or Soft Light explicitly.

  • Difference: abs(A - B). Does what it says on the tin. I don’t know why you would use this? Difference with black causes no change; Difference with white inverts the colors. Commutative.

  • Addition and Subtract: A + B and A - B. I didn’t think much of these until I discovered that Krita has a built-in brush that uses Addition mode. It’s essentially just a soft spraypaint brush, but because it uses Addition, painting over the same area with a dark color will gradually turn the center white, while the fainter edges remain dark. The result is a fiery glow effect, which is pretty cool. I used it manually as a layer mode for a similar effect, to make a field of sparkles. I don’t know if there are more general applications.

    Addition is commutative, of course, but Subtract is not.

  • Divide: A ÷ B. Apparently this is the same as changing the white point to B. Accordingly, the result will blow out towards white very quickly as B gets darker.

  • Dodge and Burn: A ÷ (1 - B) and 1 - (1 - A) ÷ B. Inverses in the same way as Multiply and Screen. Similar to Divide, but with B inverted — so Dodge changes the white point to 1 - B, with similar caveats as Divide. I’ve never seen either of these effects not look horrendously gaudy, but I think photographers manage to use them, somehow.

  • Darken Only and Lighten Only: min(A, B) and max(A, B). Commutative.

  • Linear Light: (2 × B + A) - 1. I think this is the same as Sai’s “Lumi and Shade” mode, which is very popular, at least in this house. It works very well for simple lighting effects, and shares the Soft/Hard Light property that darker colors darken and lighter colors lighten, but I don’t have a great grasp of it yet and don’t know quite how to explain what it does. So I made another graph:

    graph of Linear Light, with a diagonal band of shading going from upper left to bottom right

    Super weird! Half the graph is solid black or white; you have to stay in that sweet zone in the middle to get reasonable results.

    This is actually a combination of two other modes, Linear Dodge and Linear Burn, combined in much the same way as Hard Light. I’ve never encountered them used on their own, though.

  • Hue, Saturation, Value: Work like you might expect: convert A to HSV and replace either its hue, saturation, or value with B’s.

  • Color: Uses HSL, unlike the above three. Combines B’s hue and saturation with A’s lightness.

  • Grain Extract and Grain Merge: A - B + 0.5 and A + B - 0.5. These are clearly related to film grain, somehow, but their exact use eludes me.

    I did find this example post where someone combines a photo with a blurred copy using Grain Extract and Grain Merge. Grain Extract picked out areas of sharp contrast, and Grain Merge emphasized them, which seems relevant enough to film grain. I might give these a try sometime.

Those are all the modes in GIMP (except Dissolve, which isn’t a real blend mode; also, GIMP doesn’t have Linear Light). Photoshop has a handful more. Krita has a preposterous number of other modes, no, really, it is absolutely ridiculous, you cannot even imagine.
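For reference, the simpler formulas above translate almost directly into code. This Python sketch assumes channels already scaled to 0–1; the relationships noted earlier (Screen as inverted Multiply, Hard Light’s factor of 2) fall out as one-liners.

```python
def clamp(x):
    """Clamp a channel value to the 0-1 range."""
    return max(0.0, min(1.0, x))

def multiply(a, b):   return a * b
def screen(a, b):     return 1 - (1 - a) * (1 - b)
def difference(a, b): return abs(a - b)

def hard_light(a, b):
    # Multiply for dark B, Screen for light B, with the factor of 2
    # compensating for splitting B's range in half at 0.5.
    return multiply(a, 2 * b) if b < 0.5 else screen(a, 2 * b - 1)

def linear_light(a, b):
    # Conventions differ on which layer gets doubled; this doubles B,
    # the layer being drawn on top.
    return clamp(2 * b + a - 1)

# Screen really is Multiply with everything inverted:
a, b = 0.3, 0.7
assert screen(a, b) == 1 - multiply(1 - a, 1 - b)
# Hard Light with B = 0.4 matches Multiply with B = 0.8:
assert hard_light(0.6, 0.4) == multiply(0.6, 0.8)
```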

I may be out of things

There’s plenty more to say about color, both technically and design-wise — contrast and harmony, color blindness, relativity, dithering, etc. I don’t know if I can say any of it with any particular confidence, though, so perhaps it’s best I stop here.

I hope some of this was instructive, or at least interesting!

Adobe, Microsoft Patch Critical Security Bugs

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/07/adobe-microsoft-patch-critical-security-bugs/

Adobe has pushed out a critical update to plug at least 52 security holes in its widely-used Flash Player browser plugin, and another update to patch holes in Adobe Reader. Separately, Microsoft released 11 security updates to fix more than 40 flaws in Windows and related software.

First off, if you have Adobe Flash Player installed and haven’t yet hobbled this insecure program so that it runs only when you want it to, you are playing with fire. It’s bad enough that hackers are constantly finding and exploiting zero-day flaws in Flash Player before Adobe even knows about the bugs.

The bigger issue is that Flash is an extremely powerful program that runs inside the browser, which means users can compromise their computer just by browsing to a hacked or malicious site that targets unpatched Flash flaws.

The smartest option is probably to ditch this insecure program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach — as well as slightly less radical solutions — in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart.

Happily, Adobe has delayed plans to stop distributing direct download links to its Flash Player program. The company had said it would decommission the direct download page on June 30, 2016, but the latest, patched Flash version for Windows and Mac systems is still available there. The wording on the site has been changed to indicate the download links will be decommissioned “soon.”

Adobe’s advisory on the Flash flaws is here. The company also released a security update that addresses at least 30 security holes in Adobe Reader. The latest version of Reader for most Windows and Mac users is v. 15.017.20050.

Six of the 11 patches Microsoft issued this month earned its most dire “critical” rating, which Microsoft assigns to software bugs that can be exploited to remotely commandeer vulnerable machines with little to no help from users, save from perhaps browsing to a hacked or malicious site.

In fact, most of the vulnerabilities Microsoft fixed this Patch Tuesday are in the company’s Web browsers — i.e., Internet Explorer (15 vulnerabilities) and its newer Edge browser (13 flaws). Both patches address numerous browse-and-get-owned issues.

Another critical patch from Redmond tackles problems in Microsoft Office that could be exploited through poisoned Office documents.

For further breakdown on the patches this month from Adobe and Microsoft, check out these blog posts from security vendors Qualys and Shavlik. And as ever, if you encounter any problems downloading or installing any of the updates mentioned above please leave a note about your experience in the comments below.

Google’s Post-Quantum Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/googles_post-qu.html

News has been bubbling about an announcement by Google that it’s starting to experiment with public-key cryptography that’s resistant to cryptanalysis by a quantum computer. Specifically, it’s experimenting with the New Hope algorithm.

It’s certainly interesting that Google is thinking about this, and probably okay that it’s available in the Canary version of Chrome, but this algorithm is by no means ready for operational use. Secure public-key algorithms are very hard to create, and this one has not had nearly enough analysis to be trusted. Lattice-based public-key cryptosystems such as New Hope are particularly subtle — and we cryptographers are still learning a lot about how they can be broken.

Targets are important in cryptography, and Google has turned New Hope into a good one. Consider this an opportunity to advance our cryptographic knowledge, not an offer of a more-secure encryption option. And this is the right time for this area of research, before quantum computers make discrete-logarithm and factoring algorithms obsolete.

Adobe Update Plugs Flash Player Zero-Day

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/06/adobe-update-plugs-flash-player-zero-day/

Adobe on Thursday issued a critical update for its ubiquitous Flash Player software that fixes three dozen security holes in the widely-used browser plugin, including at least one vulnerability that is already being exploited for use in targeted attacks.

The latest update brings Flash to the newest version for Windows and Mac users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions ) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart (I had to manually check for updates in Chrome and restart the browser to get the latest Flash version).

For some reason that probably has nothing to do with security, Adobe has decided to stop distributing direct links to its Flash Player software. According to the company’s Flash distribution page, on June 30, 2016 Adobe will decommission direct links to various Flash Player downloads. This will essentially force Flash users to update the program using its built-in automatic updates feature (which sometimes takes days to notice a new security update is available), or to install the program from the company’s Flash Home page — a download that currently bundles McAfee Security Scan Plus and a product called True Key by Intel Security.

Anything that makes it less likely users will update Flash seems like a bad idea, especially when we’re talking about a program that often needs security fixes more than once a month.

YouTube Threatens Legal Action Against Video Downloader

Post Syndicated from Ernesto original https://torrentfreak.com/youtube-threatens-legal-action-video-downloader-160530/

With over a billion users, YouTube is the largest video portal on the Internet.

Every day users watch hundreds of millions of hours of video on the site, and for many it’s a prime source to enjoy music as well.

While YouTube is a blessing to thousands of content creators, there are also concerns among rightsholders. Music labels in particular are not happy with the fact that music videos can be easily downloaded from the site with help from external services.

To address the problem YouTube is contacting these third party sites, urging them to shut down this functionality. Most recently, YouTube’s legal team contacted the popular download service TubeNinja.

“It appears from your website and other marketing materials that TubeNinja is designed to allow users to download content from YouTube,” the email from YouTube’s legal team reads.

According to YouTube the video downloader violates the terms of service (ToS) of both the site and the API. Among other things, YouTube’s ToS prohibits the downloading of any video that doesn’t have a download link listed on the site.

Later, Google’s video service adds that if the site owner continues to operate the service this “may result in legal consequences.”

Email from YouTube’s legal team


Despite the threatening language, TubeNinja owner Nathan doesn’t plan to take the functionality offline. He informed YouTube that his service doesn’t use YouTube’s API and says that it’s the responsibility of his users to ensure that they don’t violate the ToS of YouTube and TubeNinja.

“Our own ToS clearly states that the user is responsible for the legitimacy of the content they use our service for,” Nathan tells us.

TubeNinja doesn’t believe that YouTube has a very strong case and Nathan has asked the company for clarification. He also mentions that Google’s very own Chrome Web Store lists many plugins that offer the exact same functionality.

“They don’t even seem to enforce removal of Chrome plugins that enable users to do the exact same thing,” Nathan says.

“Also the fact that services like Savefrom, Keepvid, clipconverter etc have been around since 2008, we find it hard to believe that there is any legal case at all. Kind of like suing a maker of VHS-recorders for users taping the television broadcast,” he adds.

This isn’t the first time that YouTube has taken action against download services. The site has gone after similar sites in the past.

In 2012 Google went after Youtube-mp3.org with a message similar to the one TubeNinja received, but despite these efforts the site remains one of the most used conversion tools with millions of visitors per day.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Under Construction, our PICO-8 game

Post Syndicated from Eevee original https://eev.ee/blog/2016/05/25/under-construction-our-pico-8-game/

Mel and I made a game!

We’d wanted to make a small game together for a while. Last month’s post about embedding Lua reminded me of the existence of the PICO-8, a “fantasy console” with 8-bit-ish limitations and built-in editing tools. Both of us have a bad habit of letting ambitions spiral way out of control, so “built-in limitations” sounded pretty good to me. I bought the console ($15, or free with the $20 Voxatron alpha) on a whim and started tinkering with it.

The result: Under Construction!

pico-8 cartridge

You can play in your very own web browser, assuming you have a keyboard. Also, that image is the actual cartridge, which you can save and play directly if you happen to have PICO-8. It’s also in the PICO-8 BBS.

(A couple people using Chrome on OS X have reported a very early crash, which seems to be a bug outside of my control. Safari works, and merely restarting Chrome has fixed it for at least one person.)

I don’t have too much to say about the game itself; hopefully, it speaks for itself. If not, there’s a little more on its Floraverse post.

I do have some things to say about making it. Also I am really, really tired, so apologies if this is even more meandering than usual.

The PICO-8

You can get a quick idea of what the console is all about from the homepage. I say “console”, but of course, there’s no physical hardware — the PICO-8 is a game engine with some severe artificial restrictions. It’s kind of like an emulator for a console that never existed.

Feel free to pore over the manual, but the most obvious limitations are:

  • Screen resolution of 128×128, usually displayed scaled up, since that’s microscopic on modern monitors.

  • 16-color fixed palette. The colors you see in the screenshots are the only colors you can use, period.

  • A spritesheet with enough room for 256 8×8 sprites. (You can make and use larger sprites, or use the space for something else entirely if you want, but the built-in tools assume you generally want that sprite size.)

  • A 128×64 map, as measured in tiles. The screen is 16×16 tiles, so that’s enough space for 32 screenfuls.

  • Alas! Half of the sprite space and half of the map space are shared, so you can’t actually have both 256 sprites and 32 map screens. You can have 256 sprites and 16 map screens, or 128 sprites and 32 map screens (which is what we did), or split the shared space some other way.

  • 64 chiptuney sound effects, complete with a tiny tracker for creating them.

  • 64 music tracks, built out of loops of up to four sound effects. There are only four sound channels total, so having four-channel background music means you can’t play any sound effects on top of it.

The programming language is a modified Lua, which comes with its own restrictions, but I’ll get to those later.

The restrictions are “carefully chosen to be fun to work with, [and] encourage small but expressive designs”. For the most part, they succeeded.

Our process

I bought PICO-8 at the beginning of the month. I spent a few hours a day over the next several days messing around, gradually producing a tiny non-game I called “eeveequest” and tweeting out successive iterations. I added the basics as they came to mind, without any real goal: movement, collision, jumping, music, sound effects, scrolling, camera movement.

The PICO-8 did let me create something with all the usual parts of a game in a matter of hours, and that’s pretty cool. I’ve never made music before, save for an hour or two trying to get audio working with LMMS, but I managed a simple tune and some chimes here.

Meanwhile, Mel was independently trying it out, drawing sprites and populating a map. I copied my movement code over to their cartridge so we could walk around in this little world.

Then, uh, they gave me an avalanche of text describing what they wanted, and I vanished from the mortal realm for ten days while I set about making it happen.

Look, I didn’t say this was a good process. Our next endeavor should be a little more balanced, since we won’t be eight hours apart, and a good chunk of engine code is already written.

One particularly nice thing: PICO-8 cartridges are very simple text files with the code at the top. I eventually migrated to writing code in vim rather than the (extremely simple) built-in code editor, and if Mel had been working on the map at the same time, I could just copy-paste everything except the code into my own cart. It would play decently well with source control, but for a two-person project where I’m the only one editing the code, I couldn’t really justify inflicting git on a non-programmer. We did have one minor incident where a few map changes were lost, but for the most part, it worked surprisingly well for collaborating between a programmer and an artist.

There was a lot more back-and-forth towards the end, once the bulk of the code was written (and Mel was no longer busy selling at three conventions), when the remaining work was subtler design issues. That was probably the most fun I had.

Programming the PICO-8

I picked on Lua a bit in that post last month. Having now worked with it for two solid weeks, I regret what I said. Because now I really want to say it all again, more forcefully. Silently ignoring errors (including division by zero!) and having no easy way to print values for debugging are my top two least favorite programming language “features”.

Plenty of Lua is just plain weird, but forgiveable. Those two things make it pretty aggravating. It’s not even for any good reason — Lua clearly has an error-reporting mechanism, so it’s entirely capable of having division by zero be an error. And half the point of a dynamic runtime is that you can easily examine values at runtime, yet Lua makes doing that as difficult as possible. Argh.

PICO-8’s Lua

The PICO-8 uses a slightly modified Lua 5.2. The documentation isn’t precise about what parts of Lua are still available, so I had some minor misadventures, like thinking varargs weren’t supported because the arg global was missing. (Turns out varargs work fine, but Lua 5.2 changed the syntax and got rid of the clumsy global.)

If it weren’t yet obvious, I’m no Lua expert. The most obvious differences to me are:

  • Numbers are signed 16.16 fixed-point, rather than Lua’s double-precision floats. (This has the curious side effect that dividing by zero produces 32768.) I have a soft spot for fixed-point! It’s nice sometimes to have a Planck length that doesn’t depend on where you are in the game world.

  • Most of the standard library is missing. There’s no collectgarbage, error, ipairs, pcall, tostring, unpack, or other more obscure stuff. The bit32, debug, math, os, string, and table libraries are gone. The _G superglobal is gone.

  • Several built-ins have been changed or replaced with alternatives: all iterates over a sequence-like table; pairs iterates over all keys and values in a table; print now prints to a particular screen coordinate. A handful of substitute math and string functions are built in. There are functions for bit operations, which I guess were in the bit32 library.

  • There are some mutating assignment operators: +=, -=, *=, /=, %=. Lua doesn’t have these.

The PICO-8 does still have metatables and coroutines. Metatables were a blessing. Coroutines are nice, but I never came up with a good excuse to use them.
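Those 16.16 fixed-point numbers are easy to emulate, by the way. Here’s a Python sketch of the idea (my own toy version, not PICO-8’s actual implementation): every value is an integer with the low 16 bits acting as the fraction, and multiplication has to shift the extra fractional bits back off.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in 16.16 fixed point

def to_fixed(x):
    """Convert a float to a 16.16 fixed-point integer."""
    return int(round(x * ONE))

def from_fixed(f):
    """Convert a 16.16 fixed-point integer back to a float."""
    return f / ONE

def fix_mul(a, b):
    # Multiplying two 16.16 values yields 32 fractional bits;
    # shift back down to keep only 16.
    return (a * b) >> FRAC_BITS

half = to_fixed(0.5)
print(from_fixed(fix_mul(half, half)))  # 0.25
```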

Writing the actual game stuff

Each of the PICO-8’s sprites has a byte’s worth of flags — eight color-coded toggles that can be turned on or off. They don’t mean anything to the console, and you’re free to use them however you want in your game.

I decided early on, even for EeveeQuest™, to use one of the flags to indicate a block was solid. It’s the most obvious approach, and it turned out to have the happy side effect that Mel could change the physics of the game world without touching the code at all. I ended up using five more flags to indicate which acts a tile should appear in.
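The flag byte is just a bitmask, so checking and setting flags is a couple of bit operations. A Python sketch of the idea (the particular flag assignment here is hypothetical, not the game’s actual layout):

```python
SOLID = 0  # flag 0 marks a solid tile (hypothetical assignment)

def has_flag(flags_byte, flag):
    """Check whether a given flag bit is set in a sprite's flag byte."""
    return bool(flags_byte & (1 << flag))

def set_flag(flags_byte, flag):
    """Return the flag byte with the given flag bit turned on."""
    return flags_byte | (1 << flag)

flags = set_flag(0, SOLID)
print(has_flag(flags, SOLID), has_flag(flags, 1))  # True False
```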

I love rigging game engines so that people can modify as much stuff as possible without touching code. I already prefer writing code in a more declarative style, and this is even better: the declarations are out of code entirely and in a graphical tool instead.

Game code is generally awful, so I did my best to keep things tidy. First, some brief background.

A game has two major concerns: it has to simulate the world, and it has to draw that world to the screen. The easiest and most obvious approach, then, is to alternate those steps: do a single update pass (where everyone takes one step, say), then draw, then do another update, and so on. One round of updating and drawing is a frame, and the length of time it takes is a tic. If you want your game to run at 60 fps, then a tic is 1/60 seconds, and you have to be sure you can both update once and draw once in that amount of time.

You get into trouble when drawing to the screen starts to take long. Say it takes two tics. Now not only is the display laggy, but the actual game world is running at half speed, because you only run an update pass every two tics instead of every one.

The PICO-8 is not the kind of platform you might expect to have this problem, but nonetheless it offers a very simple solution. It asks your code to update, then it asks your code to draw, separately. If the console detects that the game is running too slowly to hit its usual 30 fps, it’ll automatically slow down its framerate to 15fps, but it’ll ask you to update twice between draws. (Put differently, it skips every other draw.)
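That “update twice between draws” behavior is the classic fixed-timestep loop. Here’s a rough Python sketch of the idea (mine, not PICO-8’s internals): keep a tally of how many update tics you owe, pay them all off, then draw once.

```python
def run_frame(update, draw, behind_by):
    """One display frame: catch the simulation up, then draw once.

    behind_by counts how many update tics we owe; if drawing is slow
    it grows past 1 and we update multiple times per draw."""
    while behind_by >= 1:
        update()
        behind_by -= 1
    draw()
    return behind_by

# Simulate a game whose draw takes two tics: updates still happen
# once per tic, so the world runs at full speed at half the framerate.
tics = []
update = lambda: tics.append("update")
draw = lambda: tics.append("draw")
owed = 0.0
for _ in range(3):        # three display frames...
    owed += 2             # ...each costing two tics of time
    owed = run_frame(update, draw, owed)
print(tics.count("update"), tics.count("draw"))  # 6 3
```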

This turned out to be handy in act 4, where the fog effect is sometimes a little too intensive. The framerate drops, but the world still moves along at the right speed, so it just looks like the animation isn’t quite as smooth.

I copied this approach all the way down. I have a simple object called a “scene”, which is what you might also call a “screen”: the title screen, playable area, text between acts, and ending credits are all different scenes. The playable scene has a handful of layers, each of which updates and then draws like a stack.

The map is kind of interesting. The PICO-8 has a function for drawing an entire block of the map to the screen, which works pretty well for static stuff like background tiles and solid walls. There’s no built-in support for animation, though, and certainly nothing for handling player movement or tiles that only sometimes exist.

So I made an “actor” type, which can be anything that wants to participate in the usual update/draw cycle. There are three major kinds of actor:

  • Decor shouldn’t move and doesn’t draw itself; instead, it edits its own position on the map. This is used for all the animated sprites, as well as a few static objects like radios.

  • Mobs aren’t on the map at all and can move around freely, possibly colliding with parts of the map and each other. Only the player and the rocks are mobs; each additional one makes collision detection significantly more expensive. (I never got around to adding a blockmap.)

  • Transient actors don’t actually update or draw, and in fact they don’t permanently exist at all. They’re only created temporarily, when the player or another mob bumps into a map tile. When you walk into a wall, for example, the game creates a temporary wall actor at that position that checks its sprite’s flags to see if it blocks you. This is how the spikes work, too; they don’t need to update every tic, because they don’t actually do anything on their own, but they can still have custom behavior when you bump into them. They also have a custom collision box.

You can declare that a specific sprite on the map will always be turned into a particular kind of actor, and give it its own behavior. Virtually everything works this way, even the player, which means you can move the mole sprite anywhere on the map and it’ll automatically become the player’s start point.

You can also define an animation, and it’ll automatically be turned into a type of decor that doesn’t do anything except edit the map every few frames to animate itself.

By far, the worst thing to deal with was collision. I must’ve rewritten it four or five times.

I’ve written collision code before, but apparently I’d never done it from complete scratch for a world with free movement and polished it to a mirror sheen.

Maybe I’m missing something here, but the usual simple approach to collision is terrible. It tends to go like this.

  1. Move the player some amount in the direction they’re moving.
  2. See if they’re now overlapping anything.
  3. If they are, move them back to where they started.

That’s some hot garbage. What if you’re falling, and you’re a pixel above the ground, but you’re moving at two pixels per tic? The game will continually try to move you two pixels, see you’re stuck in the ground, and then put you back where you started. You’ll hover a pixel above the ground forever.
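A few lines of Python, with invented numbers, make the failure concrete:

```python
GROUND_Y = 100        # top of the ground; y > GROUND_Y means overlapping

def naive_fall(y, vy):
    new_y = y + vy            # 1. move the player the full amount
    if new_y > GROUND_Y:      # 2. now overlapping the ground?
        return y              # 3. move them back where they started
    return new_y

y = 99                        # one pixel above the ground...
for _ in range(10):
    y = naive_fall(y, 2)      # ...falling two pixels per tic
# y never budges: the player hovers a pixel off the floor forever
```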

I tried doing some research, by which I mean I googled for three minutes and got sick of finding “tutorials” that didn’t go any further than this useless algorithm, so I just beat on it until I got what I wanted. My approach:

  1. Get the player’s bounding box. Extend that box in the direction they’re moving, so it covers both their old and new position.
  2. Get a list of all the actors that overlap that box, including transient actors for every tile. (This is the main place they’re used, so I can handle them the same way as any other actor.)
  3. Sort all those actors in the order the player will hit them, starting with the closest.
  4. Look at the first actor. Move the player towards it until they touch it, then stop. If the actor has any collision behavior, activate it now.
  5. If the actor is solid, you’re done, at least in this direction; drop the actor’s velocity to zero. Otherwise, repeat with the next actor.
  6. If there’s any movement left at the end, tack that on too, since nothing else can be in the way.
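Here’s that algorithm boiled down to one dimension in Python. The tuple shape and every name are invented for illustration; the real thing works per axis with full bounding boxes and actual actor objects.

```python
def sweep(x, w, dx, obstacles):
    """Move a box spanning [x, x + w) rightward by dx.
    obstacles: list of (pos, solid, on_hit) tuples, one per actor or
    transient tile whose left edge might lie in the swept region."""
    target = x + dx
    # 1-2. everything overlapping the box extended along the motion
    hits = [ob for ob in obstacles if x + w <= ob[0] < target + w]
    # 3. sort in the order we'll hit them, nearest first
    hits.sort(key=lambda ob: ob[0])
    for pos, solid, on_hit in hits:
        x = pos - w              # 4. move until we just touch this actor
        if on_hit:
            on_hit()             # ...and fire its collision behavior
        if solid:
            return x, 0          # 5. solid: stop here, zero the velocity
    return target, dx            # 6. nothing solid in the way; use it all
```

Note that a non-solid actor (say, spikes) still gets its `on_hit` fired even though the motion passes straight through it, which is exactly the behavior the transient tiles need.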

I had some fun bugs along the way, like the one where the collision detection was almost perfect, unless you hit a solid block exactly on its corner, in which case you would pass right through it. Or the multiple times you couldn’t walk along the ground because you were colliding with the corner of the next tile of ground.

This seems to be pretty rock solid now, though it’s a bit slow when there are more than half a dozen or so mobs wandering around. It works well enough for this game.

The runner-up for awkward things to deal with: time. Even a simple animation is mildly annoying. It’s not that the math is difficult; it’s more that you can’t look at the math and immediately understand what it’s doing.

I wished numerous times that I had a more straightforward way to say “fade this in, wait 10 tics, then fade it back out”, but I never had the time to sit down and come up with something nice. I considered using coroutines for this, since they’re naturally sleep-able, but I don’t think they’d work so well for anything beyond the most trivial operations. cocos2d has a concept of “actions”, where you can animate a change over time and even schedule several changes to happen in sequence; maybe I’ll give something like that a try next time.

The PICO-8’s limits

Working within limitations has a unique way of inspiring creative solutions. It can even help you get started in a medium you’ve never tried before. I know. It’s the whole point of the console. I only got around to making some music because I was handed a very limited tracker.

Sixteen colors? Well, that’s okay; now I don’t have to worry about picking my own colors, which is something I’m not terribly good at.

Small screen and sprite size? That puts a cap on how good the art can possibly be expected to look anyway, so I can make simple sprites fairly easily and have a decent-looking game.

Limited map space? Well… now we’re getting into fuzzier territory. A limit on map space is less of a creative constraint and more of a hard cap on how much game you can fit in your game. At least you can make up for it in a few ways with some creative programming.

Hmm. About that.

There are three limits on the amount of code you can have:

  • No more than 8192 tokens. A token is a single identifier, string, number, or operator. A pair of brackets counts as one token. Commas, periods, colons, and the local and end keywords are free.
  • No more than 65536 bytes. That’s a pretty high limit, and I think it really only exists to prevent you from cramming ludicrous amounts of data into a cartridge via strings.
  • No more than 15360 bytes, after compression. More on that in a moment.

I thought 8192 sounded like plenty. Then I went and wrote a game. Our final tally, for my 2779 lines of Lua:

  • 8183/8192 tokens
  • 56668/65536 bytes
  • 22574/15360 compressed bytes

That’s not a typo; I was well over the compressed limit. I only even found out about the compressed limit a few days ago — the documentation never actually mentions it, and the built-in editor only tracks code size and token count! It came as a horrible surprise. For the released version of the game, I had to delete every single comment, remove every trace of debugging code, and rename several local variables. I was resorting to deleting extra blank lines before I finally got it to fit.

Trying to squeeze code into a compressed limit is maddening. Most of the obvious ways to significantly shorten code involve removing duplication, but duplication is exactly what compression algorithms are good at dealing with! Several times I tried an approach that made the code shorter in both absolute size and token count, only to find that the compressed size had grown slightly larger.

As for tokens, wow. I’ve never even had to think in tokens before. Here are some token costs from the finished game.

  • 65 — updating the camera to follow the player
  • 95 — sort function
  • 148 — simple vector type
  • 198 — function to print text on the screen aligned to a point
  • 201 — debugging function that prints values to stdout
  • 256 — perlin noise implementation
  • 261 — screen fade
  • 280 — fog effect from act 4
  • 370 — title screen
  • 690 — credits in their entirety
  • 703 — physics and collision detection

It adds up pretty quickly. We essentially sliced off two entire endings, because I just didn’t have any space left.

Incredibly, the token limitation used to be worse. I went searching for people talking about this, and I mostly found posts from several releases ago, when the byte limit was 32K and there weren’t any freebies in the token count — parentheses counted as two, dots counted as one.

This kind of sucks, for several reasons.

  • The most obvious way to compensate for the graphical limitations is with code: procedural generation, effects, and the like. Those eat up tokens fast. I spent a total of 536 tokens, 6.5% of my total space, just on adding a fog overlay.

  • I’m effectively punished for organizing code. Some 13% of my token count goes towards dotted names, i.e., foo.bar instead of foo_bar.

  • It’s just not the kind of limitation that really inspires creativity. If you limit the number of sprites I have, okay, I can at least visually see I only have so much space and adjust accordingly. I don’t have any sense for how many tokens I’d need to write some code. If I hit the limit and the game isn’t done, that’s just a wall. I don’t have a whole lot of creative options here: I can waste time playing code golf (which here is more of an annoying puzzle than a source of inspiration), or I can delete a chunk of the game.

I looked at the most popular cartridges, and a couple depressing themes emerged. Several games are merely demos of interesting ideas that could be expanded into a game, except that the ideas alone already take half of the available tokens. Quite a few have resorted to a strong arcade vibe, with little distinction between “levels” except that there are more or different enemies. There’s a little survival crafting game which seems like it could be interesting, but it only has four objects you can build and it’s already out of tokens. Very few games have any sort of instructions (which would eat precious tokens!), and several of them have left me frustrated from the outset, until I read through the forum thread in search of someone explaining what I’m supposed to be doing.

And keep in mind, you’re starting pretty much from scratch. About the biggest convenience the PICO-8 affords is copying a whole block of the map to the screen at once. Everything else is your problem, and it all has to fit under the cap.

The PICO-8 is really appealing overall: it has little built-in tools for creating all the resources, it’s pretty easy to do stuff with, its plaintext cartridge format makes collaboration simple, its resource limits are inspiring, it can compile to a ready-to-go browser page with no effort at all, it can even spit out short gameplay GIFs with the press of a button. And yet I’m a little apprehensive about trying anything very ambitious, now that I know how little code I’m allowed to have. The relatively small map is a bit of a shame, but the code limit is really killer.

I’ll certainly be making a few more things with this. At the same time, I can’t help but feel that some of the potential has been squeezed out of it.

Adobe, Microsoft Push Critical Updates

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/05/adobe-microsoft-push-critical-updates-2/

Adobe has issued security updates to fix weaknesses in its PDF Reader and Cold Fusion products, while pointing to an update to be released later this week for its ubiquitous Flash Player browser plugin. Microsoft meanwhile today released 16 update bundles to address dozens of security flaws in Windows, Internet Explorer and related software.

Microsoft’s patch batch includes updates for “zero-day” vulnerabilities (flaws that attackers figure out how to exploit before the software maker does) in Internet Explorer (IE) and in Windows. Half of the 16 patches that Redmond issued today earned its “critical” rating, meaning the vulnerabilities could be exploited remotely with no help from the user, save for perhaps clicking a link, opening a file or visiting a hacked or malicious Web site.

According to security firm Shavlik, two of the Microsoft patches tackle issues that were publicly disclosed prior to today’s updates, including bugs in IE and the Microsoft .NET Framework.

Anytime there’s a .NET Framework update available, I uncheck those updates, install the rest, reboot, and then install the .NET updates on their own; I’ve had too many .NET update failures muddy the process of figuring out which update borked a Windows machine after a batch of patches to do otherwise, but your mileage may vary.

On the Adobe side, the pending Flash update fixes a single vulnerability that apparently is already being exploited in active attacks online. However, Shavlik says there appears to be some confusion about how many bugs are fixed in the Flash update.

“If information gleaned from [Microsoft’s account of the Flash Player update] MS16-064 is accurate, this Zero Day will be accompanied by 23 additional CVEs, with the release expected on May 12th,” Shavlik wrote. “With this in mind, the recommendation is to roll this update out immediately.”


Adobe says the vulnerability is included in Adobe Flash Player and earlier versions for Windows, Macintosh, Linux, and Chrome OS, and that the flaw will be fixed in a version of Flash to be released May 12.

As far as Flash is concerned, the smartest option is probably to hobble or ditch the program once and for all — and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you use Adobe Reader to display PDF documents, you’ll need to update that, too. Alternatively, consider switching to another reader that is perhaps less targeted. Adobe Reader comes bundled with a number of third-party software products, but many Windows users may not realize there are alternatives, including some good free ones. For a time I used Foxit Reader, but that program seems to have grown more bloated with each release. My current preference is Sumatra PDF; it is lightweight (about 40 times smaller than Adobe Reader) and quite fast.

Finally, if you run a Web site that in any way relies on Adobe’s Cold Fusion technology, please update your software soon. Cold Fusion vulnerabilities have traditionally been targeted by cyber thieves to compromise countless online shops.

Chrome, Firefox and Safari Block Pirate Bay as “Phishing” Site

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-block-tpb-as-phishing-site-160507/

There’s a slight panic breaking out among Pirate Bay users, who are having a hard time accessing the site.

Over the past few hours Chrome, Firefox and Safari have started to block access to Thepiratebay.se due to reported security issues.

Instead of a page displaying the iconic pirate ship, visitors are presented with an ominous red warning banner.

“Deceptive site ahead: Attackers on Thepiratebay.se may trick you into doing something dangerous like installing software or revealing your personal information,” the Chrome warning reads.


Firefox users may encounter a similar banner, branding The Pirate Bay as a “web forgery” that may trick users into sharing personal information, which could lead to identity theft or other fraud.

“Web forgeries are designed to trick you into revealing personal or financial information by imitating sources you may trust. Entering any information on this web page may result in identity theft or other fraud,” the browser warns.


Google’s Safe Browsing page for TPB currently lists the site as dangerous. It is likely that the current problems are related to issues caused by third-party advertisers. Just last week Malwarebytes reported that some ads were dropping ransomware through the site.

The issue appears to be limited to the desktop versions of most browsers, and not everyone is seeing the error pages yet.

This is not the first time browsers have flagged The Pirate Bay. The same issue has come up before supposedly due to malicious advertisers.

The Pirate Bay team is aware of the issues and will probably resolve them sooner rather than later. Impatient or adventurous users who want to bypass the warning can do so by disabling their browser’s security warnings altogether in the settings, at their own risk of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.