Tag Archives: cci

A Thanksgiving Carol: How Those Smart Engineers at Twitter Screwed Me

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/a-thanksgiving-carol-how-those-smart.html

Thanksgiving Holiday is a time for family and cheer. Well, a time for family. It’s the holiday where we ask our doctor relatives to look at that weird skin growth, and for our geek relatives to fix our computers. This tale is of such computer support, and how the “smart” engineers at Twitter have ruined this for life.

My mom is smart, but not a good computer user. I get my enthusiasm for science and math from my mother, and she has no problem understanding the science of computers. She keeps up when I explain Bitcoin. But she has difficulty using computers. She has this emotional, irrational belief that computers are out to get her.

This makes helping her difficult. Every problem is described in terms of what the computer did to her, not what she did to her computer. It’s the computer that needs to be fixed, instead of the user. When I showed her the “haveibeenpwned.com” website (part of my tips for securing computers), it showed her Tumblr password had been hacked. She swore she never created a Tumblr account — that somebody or something must have done it for her. Except, I was there five years ago and watched her create it.

Another example is how GMail is deleting her emails for no reason, corrupting them, and changing the spelling of her words. She emails the way an impatient teenager texts — all of us in the family know the misspellings are not GMail’s fault. But I can’t help her with this because she keeps her GMail inbox clean, deleting all her messages, leaving no evidence behind. She has only a vague description of the problem that I can’t make sense of.

This last March, I tried something to resolve this. I configured her GMail to send a copy of all incoming messages to a new, duplicate account on my own email server. With evidence in hand, I would then be able to solve what's going on with her GMail. I'd be able to show her which steps she took, which buttons she clicked on, and what caused the weirdness she's seeing.
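
Mechanically, the setup is just a mailbox you can query after the fact. As a minimal sketch of the idea using Python's standard imaplib (the host, account, and credentials below are hypothetical placeholders, not the actual server):

```python
import imaplib

# Hypothetical host and credentials for the duplicate mailbox described above.
imap = imaplib.IMAP4_SSL("mail.example.com")
imap.login("mom-duplicate", "app-password")
imap.select("INBOX", readonly=True)  # read-only: never modify the evidence

# Find every message Twitter ever sent, even ones long deleted from her GMail.
status, data = imap.search(None, '(FROM "twitter.com")')
print(f"{len(data[0].split())} messages from Twitter retained as evidence")
imap.logout()
```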

Today, while the family was in a state of turkey-induced torpor, my mom brought up a problem with Twitter. She doesn’t use Twitter, she doesn’t have an account, but they keep sending tweets to her phone, about topics like Denzel Washington. And she said something about “peaches” I didn’t understand.

This is how the problem descriptions always start, chaotic, with mutually exclusive possibilities. If you don't use Twitter, you don't have the Twitter app installed, so how are you getting Tweets? After much gnashing of teeth, it comes out that she's getting emails from Twitter, not tweets, about Denzel Washington — to someone named "Peaches Graham". Naturally, she can only describe these emails, because she's already deleted them.

“Ah ha!”, I think. I’ve got the evidence! I’ll just log onto my duplicate email server, and grab the copies to prove to her it was something she did.

I find she is indeed receiving such emails, called “Moments”, about topics trending on Twitter. They are signed with “DKIM”, proving they are legitimate rather than from a hacker or spammer. The only way that can happen is if my mother signed up for Twitter, despite her protestations that she didn’t.
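
Checking a DKIM signature yourself is straightforward. Here is a minimal sketch using the third-party dkimpy package; the filename is a hypothetical stand-in for one of the saved raw messages:

```python
# pip install dkimpy
import dkim

# Load the raw message, headers included, exactly as received.
with open("raw_twitter_email.eml", "rb") as f:
    raw_message = f.read()

# dkim.verify() looks up the sender's public key in DNS and checks the
# DKIM-Signature header against the message headers and body.
if dkim.verify(raw_message):
    print("Valid signature: the mail really came from the signing domain.")
else:
    print("Verification failed: possibly spoofed or altered in transit.")
```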

I look further back and find that there were also confirmation messages involved. Back in August, she got a typical Twitter account signup message. I am now seeing a little bit more of the story unfold with this “Peaches Graham” name on the account. It wasn’t my mother who initially signed up for Twitter, but Peaches, who misspelled the email address. It’s one of the reasons why the confirmation process exists, to make sure you spelled your email address correctly.

It’s now obvious my mom accidentally clicked on the [Confirm] button. I don’t have any proof she did, but it’s the only reasonable explanation. Otherwise, she wouldn’t have gotten the “Moments” messages. My mom disputed this, emphatically insisting she never clicked on the emails.

It’s at this point that I made a great mistake, saying:

“This sort of thing just doesn’t happen. Twitter has very smart engineers. What’s the chance they made the mistake here, or…”.

I recognized the condescension in my words as they came out of my mouth, but dug myself deeper with:

“…or that the user made the error?”

This was wrong to say even if I were right. I have no excuse. I mean, maybe I could argue that it’s really her fault, for not raising me right, but no, this is only on me.

Regardless of what caused the Twitter emails, the problem needs to be fixed. The solution is to take control of the Twitter account by using the password reset feature. I went to the Twitter login page, clicked on “Lost Password”, got the password reset message, and reset the password. I then reconfigured the account to never send anything to my mom again.

But when I logged in I got an error saying the account had not yet been confirmed. I paused. The family dog eyed me in wise silence. My mom hadn’t clicked on the [Confirm] button — the proof was right there. Moreover, it hadn’t been confirmed for a long time, since the account was created in 2011.

I interrogated my mother some more. It appears that this has been going on for years. She’s just been deleting the emails without opening them, both the “Confirmations” and the “Moments”. She made it clear she does it this way because her son (that would be me) instructs her to never open emails she knows are bad. That’s how she could be so certain she never clicked on the [Confirm] button — she never even opens the emails to see the contents.

My mom is a prolific email user. In the last eight months, I’ve received over 10,000 emails in the duplicate mailbox on my server. That’s a lot. She’s technically retired, but she volunteers for several charities, goes to community college classes, and is joining an anti-Trump protest group. She has a daily routine for triaging and processing all the emails that flow through her inbox.

So here’s the thing, and there’s no getting around it: my mom was right, on all particulars. She had done nothing, the computer had done it to her. It’s Twitter who is at fault, having continued to resend that confirmation email every couple months for six years. When Twitter added their controversial “Moments” feature a couple years back, somehow they turned on Notifications for accounts that technically didn’t fully exist yet.

Being right this time means she might be right the next time the computer does something to her without her touching anything. My attempts at making computers seem rational have failed. That they are driven by untrustworthy spirits is now a reasonable alternative.

Those “smart” engineers at Twitter screwed me. Continuing to send confirmation emails for six years is stupid. Sending Notifications to unconfirmed accounts is stupid. Yes, I know at the bottom of the message it gives a “Not my account” selection that she could have clicked on, but it’s small and easily missed. In any case, my mom never saw that option, because she’s been deleting the messages without opening them — for six years.

Twitter can fix their problem, but it’s not going to help mine. Forever more, I’ll be unable to convince my mom that the majority of her problems are because of user error, and not because the computer people are out to get her.

NetNeutrality vs. AT&T censoring Pearl Jam

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/netneutrality-vs-at-censoring-pearl-jam.html

People keep retweeting this ACLU graphic in response to the FCC’s net neutrality decision. In this post, I debunk the first item on the list. In other posts [2] [4] I debunk other items.

First of all, this obviously isn't a Net Neutrality case. The case isn't about AT&T acting as an ISP transiting network traffic. Instead, this was about AT&T being a content provider, through their "Blue Room" subsidiary, whose content traveled across other ISPs. Such things will continue to happen regardless of the most stringent enforcement of NetNeutrality rules, since the FCC doesn't regulate content providers.

Second of all, it wasn't AT&T who censored the traffic. It wasn't their Blue Room subsidiary who censored the traffic. It was a third-party company they hired to bleep things like swear words and nipple slips. You are blaming AT&T for a decision by a third party that went against AT&T's wishes. It was an accident, not AT&T policy.

Thirdly, and this is the funny bit, Tim Wu, the guy who defined the term "net neutrality", recently wrote an op-ed claiming that while ISPs shouldn't censor traffic, content providers should. In other words, he argues that companies like AT&T's Blue Room should censor political content.

What activists like the ACLU say about NetNeutrality has as little relationship to the truth as Trump's tweets. Both pick "facts" that agree with them only so long as you don't look into them.

Backing Up the Modern Enterprise with Backblaze for Business

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-solutions/

Endpoint backup diagram

Organizations of all types and sizes need reliable and secure backup. Whether they have as few as 3 or as many as 300,000 computer users, an organization’s computer data is a valuable business asset that needs to be protected.

Modern organizations are changing how they work and where they work, which brings new challenges to making sure that the company's data assets are not only available, but secure. Larger organizations have IT departments that are prepared to address these needs, but in smaller and newer organizations the challenge often falls upon office management, who might not be prepared or knowledgeable enough to face a work environment undergoing dramatic changes.

Whether small or large, local or world-wide, for-profit or non-profit, organizations need a backup strategy and solution that matches the new ways of working in the enterprise.

The Enterprise Has Changed, and So Has Data Use

More and more, organizations are working in the cloud. These days organizations can operate just fine without their own file servers, database servers, mail servers, or other IT infrastructure that used to be standard for all but the smallest organization.

The reality is that for most organizations, though, it’s a hybrid work environment, with a combination of cloud-based and PC and Macintosh-based applications. Legacy apps aren’t going away any time soon. They will be with us for a while, with their accompanying data scattered amongst all the desktops, laptops and other endpoints in corporate headquarters, home offices, hotel rooms, and airport waiting areas.

In addition, the modern workforce likely combines regular full-time employees, remote workers, contractors, and sometimes interns, volunteers, and other temporary workers who also use company IT assets.

The Modern Enterprise Brings New Challenges for IT

These changes in how enterprises work present a problem for anyone tasked with making sure that data — no matter who uses it or where it lives — is adequately backed up. Cloud-based applications, when properly used and managed, can be adequately backed up, provided that users are connected to the internet and data transfers occur regularly — which is not always the case. But what about the data on the laptops, desktops, and devices used by remote employees, contractors, or just employees whose work keeps them on the road?

The organization’s backup solution must address all the needs of the modern organization or enterprise using both cloud and PC and Mac-based applications, and not be constrained by employee or computer location.

A Ten-Point Checklist for the Modern Enterprise for Backing Up

What should the modern enterprise look for when evaluating a backup solution?

1) Easy to deploy to workers’ computers

Whether installed by the computer user or an IT person locally or remotely, the backup solution must be easy to implement quickly with minimal demands on the user or administrator.

2) Fast and unobtrusive client software

Backups should happen in the background by efficient (native) PC and Macintosh software clients that don’t consume valuable processing power or take memory away from applications the user needs.

3) Easy to configure

The backup solution must be easy to configure for both the user and the IT professional. Ease-of-use means less time to deploy, configure, and manage.

4) Defaults to backing up all valuable data

By default, the solution backs up commonly used files and folders or directories, including desktops. Some backup solutions are difficult and intimidating because they require the user to choose what needs to be backed up, often missing files and folders/directories that contain valuable data.

5) Works automatically in the background

Backups should happen automatically, no matter where the computer is located. The computer user, especially the remote or mobile one, shouldn’t be required to attach cables or drives, or remember to initiate backups. A working solution backs up automatically without requiring action by the user or IT administrator.

6) Data restores are fast and easy

Whether it’s a single file, directory, or an entire system that must be restored, a user or IT sysadmin needs to be able to restore backed up data as quickly as possible. In cases of large restores to remote locations, the ability to send a restore via physical media is a must.

7) No limitations on data

Throttling, caps, and data limits complicate backups and require guesses about how much storage space will be needed.

8) Safe & Secure

Organizations require that their data is secure during all phases of initial upload, storage, and restore.

9) Easy-to-manage

The backup solution needs to provide a clear and simple web management interface for all functions. Designing for ease-of-use leads to efficiency in management and operation.

10) Affordable and transparent pricing

Backup costs should be predictable, understandable, and without surprises.

Two Scenarios for the Modern Enterprise

Enterprises exist in many forms and types, but wanting to meet the above requirements is common across all of them. Below, we take a look at two common scenarios showing how enterprises face these challenges. Three case studies are available that provide more information about how Backblaze customers have succeeded in these environments.

Enterprise Profile 1

The needs of a smaller enterprise differ from those of larger, established organizations. This organization likely doesn't have anyone who is devoted full-time to IT. The job of on-boarding new employees and getting them set up with a computer likely falls upon an executive assistant or office manager. This person might give new employees a checklist with the software and account information and let users handle setting up the computer themselves.

Organizations in this profile need solutions that are easy to install and require little to no configuration. Backblaze, by default, backs up all user data, which lets the organization be secure in knowing all the data will be backed up to the cloud — including files left on the desktop. Combined with Backblaze’s unlimited data policy, organizations have a truly “set it and forget it” platform.

Customizing Groups To Meet Teams’ Needs

The Groups feature of Backblaze for Business allows an organization to decide whether an individual client’s computer will be Unmanaged (backups and restores under the control of the worker), or Managed, in which an administrator can monitor the status and frequency of backups and handle restores should they become necessary. One group for the entire organization might be adequate at this stage, but the organization has the option to add additional groups as it grows and needs more flexibility and control.

The organization, of course, has the choice of managing and monitoring users using Groups. With Backblaze’s Groups, organizations can set user-based access rules, which allows the administrator to create restores for lost files or entire computers on an employee’s behalf, to centralize billing for all client computers in the organization, and to redeploy a recovered computer or new computer with the backed up data.

Restores

In this scenario, the decision has been made to let each user manage her own backups, including restores, if necessary, of individual files or entire systems. If a restore of a file or system is needed, the restore process is easy enough for the user to handle it by herself.

Case Study 1

Read about how PagerDuty uses Backblaze for Business in a mixed enterprise of cloud and desktop/laptop applications.

PagerDuty Case Study

In a common approach, the employee can retrieve an accidentally deleted file or an earlier version of a document on her own. The Backblaze for Business interface is easy to navigate and was designed with feedback from thousands of customers over the course of a decade.

In the event of a lost, damaged, or stolen laptop, administrators of Managed Groups can initiate the restore, which could be in the form of a download of a restore ZIP file from the web management console, or the overnight shipment of a USB drive directly to the organization or user.

Enterprise Profile 2

This profile is for an organization with a full-time IT staff. When a new worker joins the team, the IT staff is tasked with configuring the computer and delivering it to the new employee.

Backblaze for Business Groups

Case Study 2

Global charitable organization charity: water uses Backblaze for Business to back up workers’ and volunteers’ laptops as they travel to developing countries in their efforts to provide clean and safe drinking water.

charity: water Case Study

This organization can take advantage of additional capabilities in Groups. A Managed Group makes sense in an organization with a geographically dispersed work force as it lets IT ensure that workers’ data is being regularly backed up no matter where they are. Billing can be company-wide or assigned to individual departments or geographical locations. The organization has the choice of how to divide the organization into Groups (location, function, subsidiary, etc.) and whether the Group should be Managed or Unmanaged. Using Managed Groups might be suitable for most of the organization, but there are exceptions in which sensitive data might dictate using an Unmanaged Group, such as could be the case with HR, the executive team, or finance.

Deployment

By Invitation Email, Link, or Domain

Backblaze for Business allows a number of options for deploying the client software to workers' computers. Client installation is fast and easy on both Windows and Macintosh, so sending email invitations to users, or automatically enrolling users by domain or invitation link, is a common approach.

By Remote Deployment

IT might choose to remotely and silently deploy Backblaze for Business across specific Groups or the entire organization. An administrator can silently deploy the Backblaze backup client via the command-line, or use common RMM (Remote Monitoring and Management) tools such as Jamf and Munki.

Restores

Case Study 3

Read about how Bright Bear Technology Solutions, an IT Managed Service Provider (MSP), uses the Groups feature of Backblaze for Business to manage customer backups and restores, deploy Backblaze licenses to their customers, and centralize billing for all their client-based backup services.

Bright Bear Case Study

Some organizations are better equipped to manage or assist workers when restores become necessary. Individual users will be pleased to discover they can roll back files to an earlier version if they wish, but IT will likely manage any complete system restore that involves reconfiguring a computer after a repair or requisitioning an entirely new system when needed.

This organization might choose to retain a client's entire computer backup for archival purposes, using Backblaze B2 as the cloud storage solution. This is another advantage of having a cloud storage provider that combines both endpoint backup and cloud object storage among its services.

The Next Step: Server Backup & Data Archiving with B2 Cloud Storage

As organizations grow, they have increased needs for cloud storage beyond Macintosh and PC data backup. Backblaze’s object cloud storage, Backblaze B2, provides low-cost storage and archiving of records, media, and server data that can grow with the organization’s size and needs.

B2 Cloud Storage is available through the same Backblaze management console as Backblaze Computer Backup. This means that Admins have one console for billing, monitoring, deployment, and role provisioning. B2 is priced at 1/4 the cost of Amazon S3, or $0.005 per month per gigabyte (which equals $5/month per terabyte).
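
The per-gigabyte arithmetic is easy to sanity-check. A small sketch using the rate quoted above (the helper function is ours for illustration, not a Backblaze API):

```python
B2_RATE_PER_GB_MONTH = 0.005  # $/GB/month, as quoted in this post

def b2_monthly_storage_cost(gigabytes: float) -> float:
    """Estimated monthly B2 storage cost at the quoted rate."""
    return gigabytes * B2_RATE_PER_GB_MONTH

print(b2_monthly_storage_cost(1_000))   # 1 TB -> 5.0 dollars/month, matching the post
print(b2_monthly_storage_cost(50_000))  # 50 TB -> 250.0 dollars/month
```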

Why Modern Enterprises Chose Backblaze

Backblaze for Business

Businesses and organizations select Backblaze for Business for backup because Backblaze is designed to meet the needs of the modern enterprise. Backblaze customers are part of a platform that has a 10+ year track record of innovation and over 400 petabytes of customer data already under management.

Backblaze’s backup model is proven through head-to-head comparisons to back up data that other backup solutions overlook in their default configurations — including valuable files that are needed after an accidental deletion, theft, or computer failure.

Backblaze is the only enterprise-level backup company that provides TOTP (Time-based One-time Password) via both SMS and authentication app to all accounts at no incremental charge. At just $50/year/computer, Backblaze is affordable for any size of enterprise.

Modern Enterprises Can Meet the Challenge of the Changing Data Environment

With the right backup solution and strategy, the modern enterprise will be prepared to ensure that its data is protected from accident, disaster, or theft, whether that data lives in one office, is dispersed among many locations, or travels with remote and mobile employees.

Backblaze for Business is an affordable solution that enables organizations to meet the evolving data demands facing the modern enterprise.


US Court Grants ISPs and Search Engine Blockade of Sci-Hub

Post Syndicated from Ernesto original https://torrentfreak.com/us-court-grants-isps-and-search-engine-blockade-of-sci-hub-171106/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site.

In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

The broad request was later adopted in a recommendation from Magistrate Judge John Anderson. This triggered a protest from the tech industry trade group CCIA, which represents global tech firms including Google, Facebook, and Microsoft, and which warned against the broad implications. However, its amicus brief was denied.

Just before the weekend, US District Judge Leonie Brinkema issued a final decision which is a clear win for ACS. The publisher was awarded the maximum statutory damages of $4.8 million for 32 infringing works, as well as a permanent injunction.

The injunction is not limited to domain name registrars, but extends to search engines, ISPs, and hosting companies too, who can be ordered to stop linking to or offering services to Sci-Hub.

“Ordered that any person or entity in active concert or participation with Defendant Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works,” the injunction reads.


There is a small difference with the recommendation from the Magistrate Judge. Instead of applying the injunction to all persons “in privity” with Sci-Hub, it now applies to those who are “in active concert or participation” with the pirate site.

The injunction means that Internet providers, such as Comcast, can be requested to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States. The same is true for search engine blocking of copyright-infringing sites.

It’s clear that the affected Internet services will not be happy with the outcome. While the CCIA’s attempt to be heard in the case failed, it’s likely that they will protest the injunction when ACS tries to enforce it.

Previously, Cloudflare objected to a similar injunction where the RIAA argued that it was “in active concert or participation” with the pirate site MP3Skull. Here, Cloudflare countered that the DMCA protects the company from liability for the copyright infringements of its customers, limiting the scope of anti-piracy injunctions.

However, a Florida federal court ruled that the DMCA doesn’t apply in these cases.

It’s likely that ISPs and search engines will lodge similar protests if ACS tries to enforce the injunction against them.

While this case is crucial for copyright holders and Internet services, Sci-Hub itself doesn’t seem too bothered by the blocking prospect or the millions in damages it must pay on paper.

It already owes Elsevier $15 million, which it can’t pay, and a few million more or less doesn’t change anything. Also, the site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

The full order is available here (pdf) and a copy of the injunction can be found here (pdf).


CJEU: Liability of the Operator of a Facebook Fan Page

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/11/05/fb-3/

The Opinion of Advocate General Bot in Case C-210/16, Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH, in the presence of Facebook Ireland Ltd, has been published.

"Reference for a preliminary ruling – Directive 95/46/EC – Articles 2, 4 and 28 – Protection of individuals with regard to the processing of personal data and the free movement of such data – Order to deactivate a fan page on the social network Facebook – Concept of 'controller' – Liability of the fan page operator – Joint liability – Applicable national law – Extent of the supervisory authorities' powers of intervention"

The reference was made in proceedings between Wirtschaftsakademie Schleswig-Holstein GmbH, a private-law company specializing in education, and the data protection authority of Schleswig-Holstein (the "ULD"), concerning the lawfulness of an order issued by the latter against Wirtschaftsakademie requiring the deactivation of a "fan page" (Fanpage) hosted on the site of Facebook Ireland Ltd ("Facebook Ireland").

The order was based on an alleged infringement of the provisions of German law transposing Directive 95/46, in particular because visitors to a fan page are not informed that their personal data are collected by the social network Facebook by means of cookies placed on their hard drives, that collection being carried out in order to compile visitor statistics for the operator of the page and to enable Facebook to serve targeted advertising.

The supervisory authorities of several EU member states have decided in recent months to fine Facebook for violating the rules protecting its users' personal data: on 11 September 2017 the Agencia española de protección de datos (the Spanish data protection agency) announced a fine of EUR 1.2 million against Facebook Inc. Before that, on 27 April 2017, the Commission nationale de l'informatique et des libertés (the French National Commission on Informatics and Liberty, CNIL) adopted a decision against Facebook Inc. and Facebook Ireland, imposing on them, jointly and severally, a monetary penalty of EUR 150,000.

The present case will give the Court an opportunity to clarify the scope of the powers of intervention available to a supervisory authority such as the ULD where the processing of personal data involves several actors.

The questions referred for a preliminary ruling

The Opinion:

"1) Article 2(d) of Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, as amended by Regulation (EC) No 1882/2003 of the European Parliament and of the Council of 29 September 2003, must be interpreted as meaning that the operator of a fan page on a social network such as Facebook is a controller, within the meaning of that provision, with respect to the stage of the processing of personal data that consists in the collection by the social network of data relating to the persons who consult the page, with a view to compiling visitor statistics for that page.

2) Article 4(1)(a) of Directive 95/46, as amended by Regulation No 1882/2003, must be interpreted as meaning that processing of personal data such as that at issue in the main proceedings is carried out in the context of the activities of an establishment of the controller on the territory of a member state, within the meaning of that provision, where an undertaking operating a social network sets up in that member state a subsidiary whose purpose is to promote and sell the advertising space offered by the undertaking and whose activity is directed at the inhabitants of that member state.

3) In a situation such as that at issue in the main proceedings, in which the national law of the member state of the supervisory authority applies to the processing of personal data concerned, Article 28(1), (3) and (6) of Directive 95/46, as amended by Regulation No 1882/2003, must be interpreted as meaning that the supervisory authority may exercise, with respect to the controller, all of the effective powers of intervention conferred on it under Article 28(3) of that directive, even where the controller is established in another member state or in a third country.

4) Article 28(1), (3) and (6) of Directive 95/46, as amended by Regulation No 1882/2003, must be interpreted as meaning that, in circumstances such as those at issue in the main proceedings, the supervisory authority of the member state in which the controller's establishment is located is entitled to exercise its powers of intervention with respect to that controller independently, without first being required to ask the supervisory authority of the member state in which that controller is located to exercise its powers."



YouTube MP3 Converters Block UK Traffic to Avoid Trouble

Post Syndicated from Ernesto original https://torrentfreak.com/youtube-mp3-converters-block-uk-traffic-to-avoid-trouble-171029/

The music industry sees stream ripping as one of the largest piracy threats, worse than torrent sites or direct download portals.

Last year the RIAA, IFPI and BPI filed legal action against YouTube-MP3, the largest stream ripping site at the time. This case eventually resulted in a settlement where the site agreed to shut down voluntarily.

This was a clear victory for the music groups which swiftly identified their next targets. These include Convert2mp3.net, Savefrom.net, MP3juices.cc and YtMp3.cc, which were highlighted by the RIAA in a letter to the US Government.

The legal action against YouTube-MP3 and the RIAA’s notorious markets report appears to have made an impact, as MP3Juices.cc and YtMp3.cc have shut their doors. Interestingly, this only applies to the UK.

…not available in the UK

It's unclear why both sites are "shutting down" in the UK and not elsewhere, as the operators haven't commented on the issue. In other parts of the world, however, the sites remain readily available.


Last year, music industry group BPI signed an agreement with YouTube-MP3 to block UK visitors, which sounds very familiar. While the BPI is not directly responsible for the recent geo-blocks, the group sees it as a positive trend.

“We are seeing that the closure of the largest stream ripping site, YouTube-mp3, following coordinated global legal action from record companies, is having an impact on the operations of other ripping sites,” BPI Chief Executive Geoff Taylor informs TorrentFreak.

“However, stream ripping remains a major issue for the industry. These sites are making large sums of money from music without paying a penny to those that invest in and create it. We will continue to take legal action against other illegal ripping sites where necessary.”

Stream rippers or converters are not by definition illegal, as pointed out by the CCIA last week. However, music industry groups will continue to crack down on the ones they view as copyright infringing.

MP3Juices.cc and YtMp3.cc are likely hoping to take the pressure off with their voluntary geo-blocking. Time will tell whether that’s a good strategy. In any event, it didn’t prevent YouTube-MP3 from caving in completely, in the end.


timeShift(GrafanaBuzz, 1w) Issue 19

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/27/timeshiftgrafanabuzz-1w-issue-19/

This week, we were busy prepping for our latest stable release, Grafana 4.6! This is a sizeable release that adds some key new functionality, but there’s no time to pat ourselves on the back – now it’s time to focus on Grafana 5.0! In the meantime, find out more about what’s in 4.6 in our release blog post, and let us know what you think of the new features and enhancements.


Latest Release

Grafana 4.6 Stable is now available! The Grafana 4.6 release contains some exciting and much anticipated new additions:

  • The new Postgres Data Source
  • Create your own Annotations from the Graph panel
  • Cloudwatch Alerting Support
  • Prometheus query editor enhancements

Download Grafana 4.6 Stable Now


From the Blogosphere

Lyft’s Envoy dashboards: Lyft developed Envoy to relieve operational and reliability headaches. Envoy is a “service mesh” substrate that provides common utilities such as service discovery, load balancing, rate limiting, circuit breaking, stats, logging, tracing, etc. to application architectures. They’ve recently shared their Envoy dashboards, and walk you through their setup.

Monitoring Data in a SQL Table with Prometheus and Grafana: Joseph recently built a proof-of-concept to add monitoring and alerting on the results of a Microsoft SQL Server query. Since he knew he'd eventually want to monitor many other things, from many other sources, he chose Prometheus and Grafana as his starting point. In this article, he walks us through his steps of exposing SQL queries to Prometheus, collecting metrics, alerting, and visualizing the results in Grafana.

Crypto Exchange Trading Data: Discovering interesting public Grafana dashboards has been happening more and more lately. This week, I came across a dashboard visualizing trading data on the crypto exchanges. If you have a public dashboard you'd like shared, let us know.


GrafanaCon EU Early Bird is Ending

Early bird discounts will be ending October 31; this is your last chance to take advantage of the discounted tickets!

Get Your Early Bird Ticket Now


Grafana Plugins

Each week we review updated plugins to ensure code quality and compatibility before publishing them on grafana.com. This process can take time, and we appreciate all of the communication from plugin authors. This week we have two plugins that received some major TLC. These are two very popular plugins, so we encourage you to update. We’ve made updating easy; for on-prem Grafana, use the Grafana-cli tool, or update with 1 click if you are using Hosted Grafana.
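
For scripted maintenance of an on-prem instance, the same update can be driven from code. A minimal sketch that shells out to the real grafana-cli binary (assumed to be installed and on PATH):

```python
import subprocess

# Update every installed plugin; restart the Grafana server afterwards
# so the updated plugins are actually loaded.
subprocess.run(["grafana-cli", "plugins", "update-all"], check=True)
```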

UPDATED PLUGIN

Zabbix App Plugin – The Zabbix App Plugin just got a big update! Here are just a few of the changes:

  • PostgreSQL support for Direct DB Connection.
  • Triggers query mode, which allows counting active alerts by group, host and application, #141.
  • sortSeries() function that sorts multiple timeseries by name, #447, thanks to @mdorenkamp.
  • percentil() function, thanks to @pedrohrf.
  • Zabbix System Status example dashboard.


UPDATED PLUGIN

Worldmap Panel Plugin – The Worldmap panel also got a new update. Zooming with the mouse wheel has been turned off, as it was too easy to accidentally zoom in when scrolling the page. You can zoom in with the mouse by either double-clicking or using shift+drag to zoom in on an area.

  • Support for new data source integration, the Dynamic JSON endpoint #103, thanks @LostInBrittany
  • Fix for using floats in thresholds #79, thanks @fabienpomerol
  • Turned off mouse wheel zoom



Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks lined up this November. Hope to see you at one of these events!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice – but dashboards are meant for sharing! You should upload that to our list of Icinga2 dashboards.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Well, that wraps up another week! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Tech Giants Warn Against Kodi Scapegoating

Post Syndicated from Ernesto original https://torrentfreak.com/tech-giants-warn-kodi-scapegoating-171022/

At the beginning of October, several entertainment industry groups shared their piracy concerns with the US Government’s Trade Representative (USTR).

Aside from pointing towards traditional websites, pirate streaming boxes were also brought up, by the MPAA among others.

“An emerging global threat is streaming piracy which is enabled by piracy devices preloaded with software to illicitly stream movies and television programming and a burgeoning ecosystem of infringing add-ons,” the MPAA noted.

This week the Computer & Communications Industry Association (CCIA), which includes members such as Amazon, Facebook, Google, and Netflix, notes that the USTR should be careful not to blame an open source media player such as Kodi for the infringing actions of others.

CCIA wrote a rebuttal clarifying that Kodi and similar open source players are not the problem here.

“Another example of commenters raising concerns about generalized technology is the MPAA’s characterization of customizable, open-source set-top boxes utilizing the Kodi multimedia player application along with websites that allegedly ‘enable one-click installation of modified software onto set-top boxes or other internet-connected devices’,” CCIA writes.

While the MPAA itself also clearly mentioned that “Kodi is not itself unlawful,” CCIA stresses that any enforcement actions should be aimed at those who are breaking the law. The real targets include vendors who sell streaming boxes pre-loaded with infringing addons.

“These enforcement activities should focus on the infringers themselves, however, not a general purpose technology, such as an operating system for set-top boxes, which may be used in both lawful and unlawful ways.

“Open-source software designed for operating a home electronics device is unquestionably legitimate, and capable of substantial non-infringing uses,” CCIA adds in its cautionary letter to the USTR.

While the MPAA’s submission was not trying to characterize Kodi itself as illegal, it did call out TVAddons.ag as a “piracy add-on repository.” The new incarnation of TVAddons wasn’t happy with this label and previously scolded the movie industry group for its comments, pointing out that it only received a handful of DMCA takedown notices in recent years.

“…in the entire history of TV ADDONS, XBMC HUB and OffshoreGit, we only received a total of about five DMCA notices in all; two of which were completely bogus. None of which came from a MPAA affiliate.”

While it’s obvious to most that Kodi isn’t the problem, as CCIA is highlighting, to many people it’s still unclear where the line between infringing and non-infringing is drawn. Lawsuits, including those against TVAddons and TickBox, are expected to bring more clarity.

CCIA’s full submission is available here (pdf).


Tech Giants Protest Looming US Pirate Site Blocking Order

Post Syndicated from Ernesto original https://torrentfreak.com/tech-giants-protest-looming-us-pirate-site-blocking-order-171013/

While domain seizures against pirate sites are relatively common in the United States, ISP and search engine blocking is not. This could change soon, though.

In an ongoing case against Sci-Hub, regularly referred to as the “Pirate Bay of Science,” a magistrate judge in Virginia recently recommended a broad order which would require search engines and Internet providers to block the site.

The recommendation followed a request from the academic publisher American Chemical Society (ACS) that wants these third-party services to make the site in question inaccessible. While Sci-Hub has chosen not to defend itself, a group of tech giants has now stepped in to prevent the broad injunction from being issued.

This week the Computer & Communications Industry Association (CCIA), which includes members such as Cloudflare, Facebook, and Google, asked the court to limit the proposed measures. In an amicus curiae brief submitted to the Virginia District Court, they share their concerns.

“Here, Plaintiff is seeking—and the Magistrate Judge has recommended—a permanent injunction that would sweep in various Neutral Service Providers, despite their having violated no laws and having no connection to this case,” CCIA writes.

According to the tech companies, neutral service providers are not “in active concert or participation” with the defendant, and should, therefore, be excluded from the proposed order.

While search engines may index Sci-Hub and ISPs pass on packets from this site, they can’t be seen as “confederates” that are working together with them to violate the law, CCIA stresses.

“Plaintiff has failed to make a showing that any such provider had a contract with these Defendants or any direct contact with their activities—much less that all of the providers who would be swept up by the proposed injunction had such a connection.”

Even if one of the third-party services could be found liable, the matter should be resolved under the DMCA, which expressly prohibits such broad injunctions, the CCIA claims.

“The DMCA thus puts bedrock limits on the injunctions that can be imposed on qualifying providers if they are named as defendants and are held liable as infringers. Plaintiff here ignores that.

“What ACS seeks, in the posture of a permanent injunction against nonparties, goes beyond what Congress was willing to permit, even against service providers against whom an actual judgment of infringement has been entered. That request must be rejected.”

The tech companies hope the court will realize that the injunction recommended by the magistrate judge would set a dangerous precedent that goes beyond what the law is intended for, and will impose limits in response to their concerns.

It will be interesting to see whether any copyright holder groups will also chime in, to argue the opposite.

CCIA’s full amicus curiae brief is available here (pdf).


Microcell through a mobile hotspot

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/microcell-through-mobile-hotspot.html

I accidentally acquired a tree farm 20 minutes outside of town. For utilities, it gets electricity and basic phone. It doesn’t get water, sewer, cable, or DSL (i.e. no Internet). Also, it doesn’t really get cell phone service. While you can get SMS messages up there, you usually can’t get a call connected, or hold a conversation if it does.

We have found a solution — an evil solution. We connect an AT&T "Microcell", which provides home cell phone service through your Internet connection, to an AT&T Mobile Hotspot, which provides an Internet connection through your cell phone service.

Now, you may be laughing at this, because it’s a circular connection. It’s like trying to make a sailboat go by blowing on the sails, or lifting up a barrel to lighten the load in the boat.

But it actually works.

Since we get some, but not enough, cellular signal, we set up a mast 20 feet high with a directional antenna pointed at the cell tower 7.5 miles to the southwest, connected to a signal amplifier. It's still an imperfect solution, as we are still getting terrain distortions in the signal, but it provides a good enough signal-to-noise ratio to get a solid connection.
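
To get a feel for why the tower is reachable at all, you can run the distances through the textbook free-space path-loss formula. A rough sketch; the carrier frequencies are assumptions, and real terrain and foliage add loss on top:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Textbook FSPL: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

distance_km = 7.5 * 1.609     # the 7.5-mile hop to the tower
for freq_mhz in (700, 1900):  # assumed low-band LTE vs. a PCS-band carrier
    loss = free_space_path_loss_db(distance_km, freq_mhz)
    print(f"{freq_mhz} MHz: {loss:.1f} dB")
# ~111 dB at 700 MHz and ~120 dB at 1900 MHz before terrain losses,
# which is why the directional antenna and the amplifier matter.
```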

We then connect that directional antenna directly to a high-end Mobile Hotspot. This gives us a solid 2 Mbps connection with latency under 30 milliseconds. This is far lower than the 50 Mbps you can get right next to a 4G/LTE tower, but it's still pretty good for our purposes.

We then connect the AT&T Microcell to the Mobile Hotspot, via WiFi.

To avoid the circular connection, we lock the frequencies for the Mobile Hotspot to 4G/LTE, and to 3G for the Microcell. This prevents the Mobile Hotspot locking onto the strong 3G signal from the Microcell. It also prevents the two from causing noise to the other.

This works really great. We now get a strong cell signal on our phones even 400 feet from the house through some trees. We can be all over the property, out in the lake, down by the garden, and so on, and have our phones work as normal. It’s only AT&T, but that’s what the whole family uses.

You might be asking why we didn't just use a normal signal amplifier, like they use on corporate campuses. It boosts all the analog frequencies, making any cell phone service work.

We’ve tried this, and it works a bit, allowing cell phones to work inside the house pretty well. But they don’t work outside the house, which is where we spend a lot of time. In addition, while our newer phones work, my sister’s iPhone 5 doesn’t. We have no idea what’s going on. Presumably, we could hire professional installers and stuff to get everything working, but nobody would quote us a price lower than $25,000 to even come look at the property.

Another possible solution is satellite Internet. There are two satellites in orbit that cover the United States with small "spot beams" delivering high-speed service (25 Mbps downloads). However, the latency is 500 milliseconds, which makes it impractical for low-latency applications like phone calls.
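
That latency figure is mostly geometry, not network engineering. A quick back-of-the-envelope check, assuming a geostationary satellite directly overhead:

```python
C_KM_PER_S = 299_792      # speed of light
GEO_ALTITUDE_KM = 35_786  # geostationary orbit altitude

# A request goes user -> satellite -> ground station, and the reply returns
# the same way: four traversals of the ~35,786 km leg in total.
round_trip_s = 4 * GEO_ALTITUDE_KM / C_KM_PER_S
print(f"{round_trip_s * 1000:.0f} ms")  # ~477 ms before any network overhead
```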

While I know a lot about the technology in theory, I find myself hopelessly clueless in practice. I've been playing with SDR ("software defined radio") to try to figure out exactly where to locate and point the directional antenna, but I'm not sure I've come up with anything useful. In casual tests, it seems rotating the antenna from vertical to horizontal increases the signal-to-noise ratio a bit, which seems counterintuitive and should not happen. So I'm completely lost.

Anyway, I thought I'd write this up as a blogpost, in case anybody has a better suggestion. Or, instead of signals, suggestions for getting wired connectivity. Properties a half mile away get DSL; I wish I knew who to talk to at the local phone company about paying them to extend Internet to our property.

Phone works in all this area now

What Do the Catalans Want? (Part 1)

Post Syndicated from Yovko Lambrev original https://yovko.net/what-catalans-want-1/

This text can't be short, because I have to walk you through the history of the Iberian Peninsula and draw attention to several key facts that we Bulgarians either never studied in school or saw mentioned only in passing, since they have no bearing on our own history, yet they carry weight and meaning for today's situation in Catalonia. To me, the simple answer to what the Catalans want is actually painfully clear, but unfortunately modern people's sensitivity to the unshakable principles around which our society is built has grown too dull, and those simple, principled answers never seem to be enough for us. In other words, somewhere in the closing paragraphs of the second part of this text I will share those supposedly obvious things that we grasp with such difficulty.

I am writing this text because here in Bulgaria (including in supposedly reputable outlets) usually only part of the truth gets published, mostly the Kingdom of Spain's part. That is somewhat understandable: for Bulgarians, Spain is one of the places that embody Europe. Plenty of Bulgarians speak Spanish and follow the Spanish media, but Catalan is exotic here, and information about the other point of view arrives either through retellings in the Spanish media or through outlets in other languages. And that is part of the problem: the Catalan position rarely reaches the national Spanish media intact, while the Catalan media that publish in English, Spanish, or other languages have far too modest an audience to be noticed.

That is why I keep running into worried acquaintances who ask, "What exactly is going on?" and "What is it the Catalans want so badly?"

An independence referendum is scheduled for October 1, which Spain claims is illegal. Except that is not quite so. The referendum was given legal form through a complicated process of laws and procedures enacted over the past year and a half by the Parliament of the Autonomous Community of Catalonia. What is debatable is whether that whole process and the newly adopted referendum law are sound, and since the law was passed literally days ago, the Constitutional Court of Spain has suspended it for an indefinite period until it manages to clarify the situation (or at least that is the official line). And that is an argument for lawyers.

Several problems follow from this. The referendum has already been called (and it is not illegal, only suspended). The claim that it is illegal is a favorite of the Spanish media and of Spain's prime minister, Mariano Rajoy, who has been repeating it for years, including during the previous referendum attempts. Everyone else parrots it. The bigger problem is that lately the Constitutional Court of Spain suspends almost everything that comes out of Catalonia, including the autonomy's basic law of 2010 (more precisely, the amendments to it, never mind that they were approved by both the Catalan and the Spanish parliaments). But more on that a little later…

There is a mess, and it is a big one, but it can be explained simply… Before that, though, let's take a walk through the history of this part of the world.

If history is not your thing and you don't insist on knowing the cornerstone moments of Catalonia's past, you can skip straight to the present day and the second part.

The County of Barcelona

On the Iberian Peninsula, after the 7th-8th centuries and the gradual pushing back of the Moors, numerous kingdoms and counties began to appear. Over time they merged (mostly through marriages between their rulers), feuded, entered into various configurations under one crown or another, and some were simultaneously part of some union while retaining a measure of independence.

The history of the Catalans is tied above all to the County of Barcelona and the Crown of Aragon. The county corresponds most closely to the present-day territory of the Autonomous Community of Catalonia. The year 801 (the beginning of the 9th century) is taken as the marker of its appearance, which makes it one of the oldest pieces of the puzzle that makes up today's Spain. Only one other kingdom is older: Asturias, by a mere 80 years or so.

9th – 11th centuries

While pushing back the Moors (at that time the territory of today's Spain was Islamic), Charlemagne (the same ruler we now credit as the founder of the French and German monarchies and all but the forefather of Europe) took Barcelona in 801, and a little later Tarragona and Valencia as well. About a century later (in the 10th century), the counts of Barcelona were already independent hereditary rulers who, through numerous marriages among themselves, had managed to annex most of the Catalan counties and to expand their influence in the region considerably.

12th century

The oldest known written mention of Catalans (in the sense of an ethnic group) and Catalonia (as a geographic designation) is in a document believed to have been written between 1117 and 1125.

In the course of that century the Queen of Aragon married the Count of Barcelona, and so their son inherited both the County of Barcelona and the Kingdom of Aragon. From then on, the history of the Catalans is bound up with what was thereafter called the Crown of Aragon, a union which, besides Aragon and Barcelona, would until the 18th century also include the kingdoms of Valencia and Majorca. The languages spoken there at the time were Aragonese, Catalan, and Latin.

Today we would most likely describe the constitutional arrangement of the Crown of Aragon as a confederation, and it gradually turned into something like a Mediterranean empire, given that seafaring became its main business and military strength.

Meanwhile, between the 12th and 18th centuries, the Crown of Castile would unite the kingdoms of Asturias, Galicia, León, and Castile.

13th century

Catalonia developed an elaborate institutional and political system for dividing power between the king and the interests of the estates. In 1283, legislative power in Catalonia passed to an early prototype of a parliament (called the Corts Catalanes or Cort General de Catalunya), which sat in Barcelona until the 18th century, when Catalonia lost its autonomy.

This body actually produced the legislation (written down as the Constitucions catalanes) under which the Kingdom of Aragon and the County of Barcelona were governed. Note that all of this happened only a few decades after Magna Carta.

The prototype of a government, in turn, was called the Consell de Cent, meaning roughly the Council of One Hundred, after the number of its members. Gradually, however, this council lost that role, retaining only the task of electing the members of Barcelona's municipal government.

14th century

In 1359 the next layer of Catalonia's political system appeared, driven by the need to collect and distribute taxes. It was called something like a permanent representation, the Diputació del General or Generalitat, and political power shifted to it over time. Today this is the name of the regional government of the Autonomous Community of Catalonia.

So as not to overdo all this, I will stop here, but if you are curious, look up the Reial Audiència i Reial Consell de Catalunya, the Conferència dels Tres Comuns, the Junta de Braços, and the Tribunal de Contrafaccions, and your jaw will drop when you find behind these institutions the prototypes of what we would today call a supreme court, advisory bodies for coordinating the different branches of power, and even an early court for safeguarding individual and civil rights. Remarkable institution-building, and all of it between the 13th and 18th centuries!

I am recounting all this because one of the wildly popular attacks on whether the Catalans have any grounds to claim they once had a state of their own runs something like: "Oh, come on, back then they were just some tribe of pirate brigands!"…

ХV век

Кастилската корона и Арагонската корона се обединяват след брак между престолонаследника на Арагон и кралицата на Кастилия. Короната на Арагон през това време е в своя пик на могъщество като владее големи части от днешна източна Испания, южна Франция, Балеарските острови, Сицилия, Корсика, Сардиния, Малта, Южна Италия и части от Гърция (дори за кратко и Атина). Столицата по това време даже е била в Неапол, защото Сарагоса (днешната столица на област Арагон) е твърде встрани от центъра на владенията на тази средиземноморска “империя”.

Meanwhile, the Consolat de mar was written (in Catalan), one of the oldest attempts to establish a legal framework for maritime and commercial law. And in Valencia (then part of the Crown of Aragon), the first book on the Iberian Peninsula was printed.

16th Century

Aragon and Castile also annexed the Kingdom of Navarre (which had remained independent until 1513), and this is considered the beginning of the Kingdom of Spain. Please note: only at this point in our saga, dear viewers and listeners, does Spain finally appear on the stage!

In the 16th and 17th centuries Spain was one of the most powerful states in the world, but as early as the mid-16th century a crisis began to brew, rooted in the problems of intermarriage: the first rulers of the still-new Kingdom of Spain were the Habsburgs, who married mostly their own cousins and nieces so that the wealth would stay in the family. Thus, immediately after the reigns of Carlos I (also known as Charles V) and his son Philip (Felipe II), there followed several years of instability, not only in the religious-political sphere but also over who would inherit the crown, since the firstborn son of Felipe II was mentally ill and was removed from power (and later killed). Four more of his children died young. His next son came only from his fourth wife and was so weak-willed and unremarkable that the courtiers turned him into a plaything of their intrigues. His heir was the same tragedy, with real power in the hands of a count close to him. And so we arrive at Carlos II, who was so impaired mentally and physically that he thought slowly, could barely walk, and had to be strapped to the throne with belts just to remain upright without falling (his father was 56 at his birth, and his mother, 30 years younger, was his father's niece). Although he formally married twice (one wife was a granddaughter of Louis XIII, the other his Austrian cousin), Carlos II could not produce an heir, and after his death there was no one to inherit the kingdom.

And so we reach a key milestone, the War of the Spanish Succession. It is important to note that the inheritance included not only Spain and its colonies in the Americas and the Philippines, but also southern Italy, Sicily, Sardinia, Milan, and part of today's Belgium (then called the Southern Netherlands), along with a piece of North Africa. The appetites involved were obviously anything but modest.

18th Century

The main candidates for the inheritance had roots in Austria and in France, but Louis XIV acted fastest and, brushing aside the odd word of honor and a signed agreement with England, simply sent his grandson to Madrid, declaring that "there are no more Pyrenees". This infuriated Austria, which, lacking allies, dared not act, until a year and some later the French overreached in their greed and occupied fortresses in the Netherlands that served as buffers for the slave trade and in which England had a key interest. Austria and England then resolved not to let France become the new dominant power in Europe, a successor to Charlemagne's empire, and declared war on France (and on Spain, insofar as during that year and a half of wavering it counted as French).

Thus France was attacked by England and the allied forces on one side, while on the other the Austrian claimant to the Spanish crown, Charles (also called Carlos III), invaded Spain with Austrian and English troops. Our friends the Catalans and the Aragonese took his side, which proved to be a historic mistake.

The war was complicated: it was fought in many places and ebbed and flowed for both sides. In the end, however, the Austrian emperor died, and since he too had no heirs, the same Charles who claimed the Spanish throne acquired the Austrian one, which in turn cooled English support for him. By then everyone had grown weary of the war; the French had suffered heavy casualties, Charles had a kingdom of his own, and a peace was struck: Philip V (the grandson of Louis XIV) kept the Spanish crown but was forced to renounce the French one. (To this day you will find both French and Spaniards who consider this illegitimate; oh, the greed and the imperial daydreams!) Spain lost its part of the Netherlands, Naples, Milan, Sardinia, and Gibraltar, and to this day officially remains a possession of the Bourbon dynasty.

Barcelona would not give up. Together with Majorca it remained the last holdout defending the cause of the Austrian claimant, and even after the armistice was signed the Corts Catalanes (that same parliament) decided the war had to continue in order to preserve the Catalan constitution and institutions! It is fair to admit that some Catalan historians overplay this moment, claiming it was already a battle for independence from Spain. Formally that is so, but we cannot be sure the actual motivation was independence. The parliament's decision may have been driven by purely commercial, political, or other considerations that historians will keep arguing about.

Barcelona endured a brutal siege lasting a year and a half across 1713-1714. After the Bourbons massed a 20,000-strong army in April 1714 and broke through the defenses on 30 August, the city capitulated on 11 September, once the key commanders of the resistance had been killed. Only then did the Catalan leaders offer negotiations. Philip V postponed the talks by a day, during which he urged his soldiers to slaughter, rape, and take revenge, but his commander, fearing an escalation into guerrilla war, dissuaded him and undertook to treat the defeated enemy with respect.

Thus on 11 September 1714 Catalonia ceased to exist: its institutions were dissolved and banned, the privileges and titles of the Catalan elite were revoked, and gatherings and all forms of public organization were forbidden. The Catalan language was banned outright, not only in Spain but in France as well; it could not be used in official documents, nor even taught. Philip V established a French-style absolute monarchy and abolished the autonomy of the territories of the Crown of Aragon (Catalonia, Valencia, and Aragon).

Today that day is the National Day of Catalonia (Diada Nacional de Catalunya), or simply La Diada. It is somehow odd for the day you lost your independence to be your national holiday, but if you ask Catalans why, many will smile knowingly and answer: so that we remember we have unfinished business...

19th Century

As early as the second half of the 18th century, Catalonia saw serious industrial development, and at the start of the 19th century a cultural renaissance grew up around the Catalan language. To this day Catalonia is the most industrialized and entrepreneurial region in Spain.

After a royal abdication, the Spanish parliament proclaimed the First Spanish Republic, which lasted a little under two years before a military coup restored the Bourbon monarchy.

20th Century

In 1931, the results of Spain's local elections showed a sweeping victory for the republican parties. Two days later the Second Spanish Republic was proclaimed, and the king fled into exile. The new constitution established freedom of speech and association, expanded women's rights, permitted divorce, and limited the role of the Catholic Church in education and government. The Spanish flag was replaced with a red, yellow, and violet tricolor, and the Spanish regions were granted the right to autonomous government. Catalonia was the first to negotiate its autonomy, the very next year; the Basques managed it in 1936; the rest never got the chance, because of the civil war that broke out in the meantime.

The elections of 1933 were won by center-right and far-right parties, not without the help and manipulations of the church, which wielded its influence over devoutly religious Spaniards. The new right-wing coalition, called the National Front, suspended the reforms already under way. The left was furious, and on 6 October 1934 it organized a general strike that spun out of control after miners occupied the capital of Asturias, killed local officials, and burned the theaters and the university. Two weeks later the strike was crushed by the army so brutally, with vast parts of the city destroyed and so many people killed, that the general in command of the operation, Francisco Franco, earned the nickname "the Butcher of Asturias".

On that same 6 October 1934, the president of Catalonia, Lluís Companys, a lawyer and leader of the ERC party (Republican Left of Catalonia), led a Catalan national uprising and proclaimed the Estat Català (Catalan State) within a Spanish Federal Republic. The uprising was brutally suppressed, with many arrested and convicted; the right-wing government in Madrid took it as support for the Asturian miners and as an attempted coup. Lluís Companys was sentenced to 30 years in prison, but was released in 1936, when the next Spanish elections were won by a left-wing coalition calling itself the Frente Popular (Popular Front), which also restored the Catalan government.

Political confrontation in Spain, however, was insane. The two coalitions, left and right, had no desire to move their positions closer together, and no centrist balancers were left. On top of that, the Falange Española had appeared, a small nationalist party inspired by the ideology of fascism that won less than 1% of the vote but had over 40,000 members.

Soon a Falangist killed an antifascist, and in revenge the victim's organization kidnapped the leader of a far-right formation known for calling on "the army to save Spain from the Bolsheviks, since the politicians cannot". The right blamed the government for the kidnapping, and in that tangle of political chaos, polarization, and populism a planned military coup took place, beginning with an army revolt in Morocco. This was the start of the Spanish Civil War between the Republicans and the Nationalists. That same "butcher", General Franco, would lead the generals who rose against the republic and, with the support of Nazi Germany, fascist Italy, and Portugal, would win the war. In the first few days alone, over 50,000 people who found themselves on the wrong side of the barricades were killed. The Nationalists killed those loyal to the republic, accusing them of being Bolsheviks and communists; the leftists and anarchists, for their part, killed the wealthy, priests, and the more conservative, on the grounds that they were "bourgeois" and right-wingers.

Barcelona was "left" and pro-republican. In the summer and autumn of 1936 alone, more than 8,000 people are estimated to have died there. Lluís Companys could not control the city, and after the war ended in 1939 he fled to France, but he was arrested in 1940 and handed back to Franco by the Nazis. He was brutally tortured for weeks, physically and psychologically, and sentenced to death in a sham trial that lasted less than an hour. He was shot on 15 October 1940 at Montjuïc castle in Barcelona, refusing a blindfold and managing to shout "Per Catalunya!" (For Catalonia!) before the shots rang out. His death certificate lists the cause of death as "traumatic internal hemorrhage".

Lluís Companys has the status of a national symbol in Catalonia and for the Catalans. He is the only democratically elected sitting president in European history to have been executed, and now, 77 years later, he has still not been rehabilitated.

Continued in part two

Have Friends Who Don’t Back Up? Share This Post!

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/beginner-guide-to-computer-backup/


We’ve all been there.

A friend or family member comes to you knowing you’re a knowledgeable computer user and tells you that he has lost all the data on his computer.

You say, “Sure, I’ll help you get your computer working again. We’ll just restore your backup to a new drive or a new computer.”

Your friend looks at his feet and says, “I didn’t have a backup.”

You have to tell your friend that without a backup, that data may well be lost forever. It’s too late for a lecture about how he should have made regular backups of his computer. Your friend just wants his data back, and he’s looking to you to help him.

You wish you could help. You realize that the time you could have helped was before the loss happened; when you could have helped your friend start making regular backups.

Yes, we’ve all been there. In fact, it’s how Backblaze got started.

You Can Be a Hero to a Friend by Sharing This Post

If you share this post with a friend or family member, you could avoid the situation where your friend loses his data and you wish you could help but can’t.

The following information will help your friend get started backing up in the easiest way possible — no fuss, no decisions, and no buying storage drives or plugging in cables.

The guide begins here:

Getting Started Backing Up

Your friend or family member has shared this guide with you because he or she believes you might benefit from backing up your computer. Don’t consider this an intervention, just a friendly tip that will save you lots of headaches, sorrow, and maybe money. With the right backup solution, it’s easy to protect your data against accidental deletion, theft, natural disaster, or malware, including ransomware.

Your friend was smart to send this to you, which probably means that you’re a smart person as well, so we’ll get right to the point. You likely know you should be backing up, but like all of us, you don’t always get around to everything you should be doing.

You need a backup solution that is:

  1. Affordable
  2. Easy
  3. Never runs out of storage space
  4. Backs up everything automatically
  5. Restores files easily

Why Cloud Backup is the Best Solution For You

Backblaze Personal Backup was created for everyone who knows they should back up, but doesn’t. It backs up to the cloud, meaning that your data is protected in our secure data centers. A simple installation gets you started immediately, with no decisions about what or where to back up. It just works. And it’s just $5 a month to back up everything. Other services might limit the amount of data, the types of files, or both. With Backblaze, there’s no limit on the amount of data you can back up from your computer.

You can get started immediately with a free 15-day trial of Backblaze Unlimited Backup. In fewer than 5 minutes you’ll be all set.

Congratulations, You’re Done!

You can now celebrate. Your data is backed up and secure.

That’s it, and all you really need to get started backing up. We’ve included more details below, but frankly, the above is all you need to be safely and securely backed up.

You can tell the person who sent this to you that you’re now safely backed up and have moved on to other things, like what advice you can give them to help improve their life. Seriously, you might want to buy the person who sent this to you a coffee or another treat. They deserve it.

Here’s more information if you’d like to learn more about backing up.

Share or Email This Post to a Friend

Do your friend and yourself a favor and share this post. On the left side of the page (or at the bottom of the post) are buttons you can use to share this post on Twitter, Facebook, LinkedIn, and Google+, or to email it directly to your friend. It will take just a few seconds and could save your friend’s data.

It could also save you from having to give someone the bad news that her finances, photos, manuscript, or other work are gone forever. That would be nice.

But your real reward will be in knowing you did the right thing.

Tell us in the comments how it went. We’d like to hear.

The post Have Friends Who Don’t Back Up? Share This Post! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Simplify Your Jenkins Builds with AWS CodeBuild

Post Syndicated from Paul Roberts original https://aws.amazon.com/blogs/devops/simplify-your-jenkins-builds-with-aws-codebuild/

Jeff Bezos famously said, “There’s a lot of undifferentiated heavy lifting that stands between your idea and that success.” He went on to say, “…70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.”

If you subscribe to this maxim, you should not be spending valuable time focusing on operational issues related to maintaining Jenkins build infrastructure. Companies such as Riot Games run over 1.25 million builds per year and have written several lengthy blog posts about their experiences designing a complex, custom Docker-powered Jenkins build farm. Dealing with Jenkins slaves at scale is a job in itself, and Riot has engineers focused on managing the build infrastructure.

Typical Jenkins Build Farm

As with all technology, the Jenkins build farm architectures have evolved. Today, instead of manually building your own container infrastructure, there are Jenkins Docker plugins available to help reduce the operational burden of maintaining these environments. There is also a community-contributed Amazon EC2 Container Service (Amazon ECS) plugin that helps remove some of the overhead, but you still need to configure and manage the overall Amazon ECS environment.

There are various ways to create and manage your Jenkins build farm, but there has to be a way that significantly reduces your operational overhead.

Introducing AWS CodeBuild

AWS CodeBuild is a fully managed build service that removes the undifferentiated heavy lifting of provisioning, managing, and scaling your own build servers. With CodeBuild, there is no software to install, patch, or update. CodeBuild scales up automatically to meet the needs of your development teams. In addition, CodeBuild is an on-demand service where you pay as you go. You are charged based only on the number of minutes it takes to complete your build.

One AWS customer, Recruiterbox, helps companies hire simply and predictably through their software platform. Two years ago, they began feeling the operational pain of maintaining their own Jenkins build farms. They briefly considered moving to Amazon ECS, but chose an even easier path forward instead. Recruiterbox transitioned to using Jenkins with CodeBuild and they are very happy with the results. You can read more about their journey here.

Solution Overview: Jenkins and CodeBuild

To remove the heavy lifting from managing your Jenkins build farm, AWS has developed a Jenkins AWS CodeBuild plugin. After the plugin has been enabled, a developer can configure a Jenkins project to pick up new commits from their chosen source code repository and automatically run the associated builds. After a successful build, the plugin creates an artifact and stores it in an S3 bucket that you have configured. If an error is detected anywhere, CodeBuild captures the output and sends it to Amazon CloudWatch Logs. In addition to storing the logs in CloudWatch, Jenkins also captures the error so you do not have to go hunting for log files for your build.
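Under the hood, the plugin drives the same public CodeBuild API that you can call yourself. The following boto3 sketch shows that start-a-build-and-find-the-logs flow; the project name and region are placeholders, not values from this post:

import time

import boto3

# Kick off a build, wait for it to finish, and report where the
# CloudWatch logs landed. "my-codebuild-project" is hypothetical.
codebuild = boto3.client("codebuild", region_name="us-east-1")

build_id = codebuild.start_build(projectName="my-codebuild-project")["build"]["id"]
print("Started build:", build_id)

while True:
    build = codebuild.batch_get_builds(ids=[build_id])["builds"][0]
    if build["buildStatus"] != "IN_PROGRESS":
        break
    time.sleep(10)

print("Build finished with status:", build["buildStatus"])
print("Logs:", build.get("logs", {}).get("deepLink", "n/a"))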

AWS CodeBuild with Jenkins Plugin

The following example uses AWS CodeCommit (Git) as the source control management (SCM) and Amazon S3 for build artifact storage. Logs are stored in CloudWatch. A development pipeline that uses Jenkins with CodeBuild plugin architecture looks something like this:

AWS CodeBuild Diagram

Initial Solution Setup

To keep this blog post succinct, I assume that you are using the following components on AWS already and have applied the appropriate IAM policies:

  • An AWS CodeCommit repo.
  • An Amazon S3 bucket for CodeBuild artifacts.
  • An SNS notification for text messaging of the Jenkins admin password.
  • An IAM user’s access key and secret.
  • A role that has a policy with these permissions. Be sure to edit the ARNs with your region, account, and resource name. Use this role in the AWS CloudFormation template referred to later in this post.


Jenkins Installation with CodeBuild Plugin Enabled

To make the integration with Jenkins as frictionless as possible, I have created an AWS CloudFormation template here: https://s3.amazonaws.com/proberts-public/jenkins.yaml. Download the template, sign in to the AWS CloudFormation console, and then use the template to create a stack.
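If you prefer to script this step, the stack can also be created with boto3. A minimal sketch, assuming the template needs no input parameters (pass Parameters=[...] if your copy does); the stack name and region are placeholders:

import boto3

# Create the Jenkins stack from the template linked above.
cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="jenkins-codebuild",
    TemplateURL="https://s3.amazonaws.com/proberts-public/jenkins.yaml",
    Capabilities=["CAPABILITY_IAM"],  # required if the template creates IAM resources
)

# Block until the stack is ready; the waiter raises if creation fails.
cfn.get_waiter("stack_create_complete").wait(StackName="jenkins-codebuild")
print("Stack created")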

CloudFormation Inputs

Jenkins Project Configuration

After the stack is complete, log in to the Jenkins EC2 instance using the user name “admin” and the password sent to your mobile device. Now that you have logged in to Jenkins, you need to create your first project. Start with a Freestyle project and configure the parameters based on your CodeBuild and CodeCommit settings.

AWS CodeBuild Plugin Configuration in Jenkins

Additional Jenkins AWS CodeBuild Plugin Configuration

After you have configured the Jenkins project appropriately, you should be able to check your build status on the Jenkins polling log under your project settings:

Jenkins Polling Log

Now that Jenkins is polling CodeCommit, you can check the CodeBuild dashboard under your Jenkins project to confirm your build was successful:

Jenkins AWS CodeBuild Dashboard

Wrapping Up

In a matter of minutes, you have been able to provision Jenkins with the AWS CodeBuild plugin. This will greatly simplify your build infrastructure management. Now kick back and relax while CodeBuild does all the heavy lifting!


About the Author

Paul Roberts is a Strategic Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps, or Artificial Intelligence, he is often found in Lake Tahoe exploring the various mountain ranges with his family.

New Network Load Balancer – Effortless Scaling to Millions of Requests per Second

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/

Elastic Load Balancing (ELB) has been an important part of AWS since 2009, when it was launched as part of a three-pack that also included Auto Scaling and Amazon CloudWatch. Since that time we have added many features, and also introduced the Application Load Balancer. Designed to support application-level, content-based routing to applications that run in containers, Application Load Balancers pair well with microservices, streaming, and real-time workloads.

Over the years, our customers have used ELB to support web sites and applications that run at almost any scale — from simple sites running on a T2 instance or two, all the way up to complex applications that run on large fleets of higher-end instances and handle massive amounts of traffic. Behind the scenes, ELB monitors traffic and automatically scales to meet demand. This process, which includes a generous buffer of headroom, has become quicker and more responsive over the years and works well even for our customers who use ELB to support live broadcasts, “flash” sales, and holidays. However, in some situations such as instantaneous fail-over between regions, or extremely spiky workloads, we have worked with our customers to pre-provision ELBs in anticipation of a traffic surge.

New Network Load Balancer
Today we are introducing the new Network Load Balancer (NLB). It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

Static IP Addresses – Each Network Load Balancer provides a single IP address for each VPC subnet in its purview. If you have targets in a subnet in us-west-2a and other targets in a subnet in us-west-2c, NLB will create and manage two IP addresses (one per subnet); connections to that IP address will spread traffic across the instances in the subnet. You can also specify an existing Elastic IP for each subnet for even greater control. With full control over your IP addresses, Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.

Zonality – The IP-per-subnet feature reduces latency through improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single subnet while still allowing automatic failover.

Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.

Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.

Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.

Creating a Network Load Balancer
I can create a Network Load Balancer by opening up the EC2 Console, selecting Load Balancers, and clicking on Create Load Balancer:

I choose Network Load Balancer and click on Create, then enter the details. I can choose an Elastic IP address for each subnet in the target VPC and I can tag the Network Load Balancer:

Then I click on Configure Routing and create a new target group. I enter a name, and then choose the protocol and port. I can also set up health checks that go to the traffic port or to the alternate of my choice:

Then I click on Register Targets, select the EC2 instances that will receive traffic, and click on Add to registered:

I make sure that everything looks good and then click on Create:

The state of my new Load Balancer is provisioning, switching to active within a minute or so:
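If you prefer the API to the console, the same setup can be scripted against the ELBv2 API. Here is a boto3 sketch of the steps above; every ID is a placeholder:

import boto3

# Create the NLB, a TCP target group, register an instance, and wire
# up a listener. Subnet, VPC, and instance IDs are placeholders.
elbv2 = boto3.client("elbv2", region_name="us-west-2")

nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="my-nlb-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-cccc3333",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

print("NLB DNS name:", nlb["DNSName"])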

For testing purposes, I simply grab the DNS name of the Load Balancer from the console (in practice I would use Amazon Route 53 and a more friendly name):

Then I sent it a ton of traffic (I intended to let it run for just a second or two but got distracted and it created a huge number of processes, so this was a happy accident):

$ while true;
> do
>   wget http://nlb-1-6386cc6bf24701af.elb.us-west-2.amazonaws.com/phpinfo2.php &
> done

A more disciplined test would use a tool like Bees with Machine Guns, of course!

I took a quick break to let some traffic flow and then checked the CloudWatch metrics for my Load Balancer, finding that it was able to handle the sudden onslaught of traffic with ease:

I also looked at my EC2 instances to see how they were faring under the load (really well, it turns out):

It turns out that my colleagues did run a more disciplined test than I did. They set up a Network Load Balancer and backed it with an Auto Scaled fleet of EC2 instances. They set up a second fleet composed of hundreds of EC2 instances, each running Bees with Machine Guns and configured to generate traffic with highly variable request and response sizes. Beginning at 1.5 million requests per second, they quickly turned the dial all the way up, reaching over 3 million requests per second and 30 Gbps of aggregate bandwidth before maxing out their test resources.

Choosing a Load Balancer
As always, you should consider the needs of your application when you choose a load balancer. Here are some guidelines:

Network Load Balancer (NLB) – Ideal for load balancing of TCP traffic, NLB is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.

Application Load Balancer (ALB) – Ideal for advanced load balancing of HTTP and HTTPS traffic, ALB provides advanced request routing that supports modern application architectures, including microservices and container-based applications.

Classic Load Balancer (CLB) – Ideal for applications that were built within the EC2-Classic network.

For a side-by-side feature comparison, see the Elastic Load Balancer Details table.

If you are currently using a Classic Load Balancer and would like to migrate to a Network Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create a Network Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Pricing & Availability
Like the Application Load Balancer, pricing is based on Load Balancer Capacity Units, or LCUs. Billing is $0.006 per LCU, based on the highest value seen across the following dimensions:

  • Bandwidth – 1 GB per LCU.
  • New Connections – 800 per LCU.
  • Active Connections – 100,000 per LCU.

Most applications are bandwidth-bound and should see a cost reduction (for load balancing) of about 25% when compared to Application or Classic Load Balancers.
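To make the LCU math concrete, here is a back-of-the-envelope estimator. It assumes NLB LCUs are metered hourly (as with the Application Load Balancer) and that the new-connections dimension is per second; treat both as assumptions and check the official pricing page:

# Rough NLB cost estimator; the metering assumptions above are mine,
# only the per-LCU rates come from this post.
LCU_RATE = 0.006  # dollars per LCU-hour

def nlb_lcus(gb_per_hour, new_conns_per_sec, active_conns):
    # You are billed on whichever dimension consumes the most LCUs.
    return max(
        gb_per_hour / 1.0,          # 1 GB per LCU
        new_conns_per_sec / 800.0,  # 800 new connections per LCU
        active_conns / 100_000.0,   # 100,000 active connections per LCU
    )

# Example: 3 GB/hour, 200 new connections/sec, 40,000 active connections.
lcus = nlb_lcus(3.0, 200.0, 40_000)
print(f"{lcus:.2f} LCUs -> ${lcus * LCU_RATE:.4f}/hour")  # bandwidth-bound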

Network Load Balancers are available today in all AWS commercial regions except China (Beijing), supported by AWS CloudFormation, Auto Scaling, and Amazon ECS.

Jeff;


Game of Thrones Piracy Peaks After Season Finale

Post Syndicated from Ernesto original https://torrentfreak.com/game-of-thrones-piracy-peaks-after-season-finale-170828/

The seventh season of Game of Thrones has been the most-viewed thus far, with record-breaking TV ratings.

Traditionally, the season finale is among the most-viewed episodes of the season. This is true on official channels, but also on pirate sites.

Despite numerous legal options, Game of Thrones remains extremely popular among pirates. Minutes after the official broadcast ended last night people flocked to various torrent and streaming sites, to watch it for free.

Looking at the torrent download numbers we see that the latest episode is topping all previous ones of this season. At the time of writing, more than 400,000 people were actively sharing one of the many available torrents.

Some of the more popular GoT torrents

While the demand is significant, there is no all-time “swarm record” like the one we saw two years ago.

In part, this may be due to improved legal options, but the recent rise of pirate streaming sites and services is also ‘stealing’ traffic. While there is no hard data available, millions of people now use streaming sites and services to watch pirated episodes of Game of Thrones.

Record or not, there is little doubt that Game of Thrones will end up being the most pirated show of the year once again. That will be the sixth year in a row, which is unprecedented.

In recent years, HBO has tried to contain piracy by sending DMCA takedown notices to pirate sites. In addition, the company also warned tens of thousands of BitTorrent downloaders directly. Nonetheless, many people still find their way to this unofficial market.

While HBO has grown used to mass-scale piracy in recent years, it encountered some other major setbacks this season. Hackers leaked preliminary outlines of various episodes before they aired. The same hackers also threatened to release the season finale, but that never happened.

There were two episode leaks this year, but they were unrelated to those hackers. The fourth episode leaked through the Indian media processing company Prime Focus Technologies, which resulted in several arrests. Two weeks later, HBO Spain accidentally made the sixth episode public days in advance, and it spread online soon after.

On the upside: piracy aside, the interest of the media and millions of ‘legal’ viewers appears to be at a high as well, so there’s certainly something left to celebrate.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

ROI is not a cybersecurity concept

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/roi-is-not-cybersecurity-concept.html

In the cybersecurity community, much time is spent trying to speak the language of business, in order to communicate to business leaders our problems. One way we do this is trying to adapt the concept of “return on investment” or “ROI” to explain why they need to spend more money. Stop doing this. It’s nonsense. ROI is a concept pushed by vendors in order to justify why you should pay money for their snake oil security products. Don’t play the vendor’s game.

The correct concept is simply “risk analysis”. Here’s how it works.

List out all the risks. For each risk, calculate:

  • How often it occurs.
  • How much damage it does.
  • How to mitigate it.
  • How effective the mitigation is (reduces chance and/or cost).
  • How much the mitigation costs.

If you have risk of something that’ll happen once-per-day on average, costing $1000 each time, then a mitigation costing $500/day that reduces likelihood to once-per-week is a clear win for investment.
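That arithmetic is worth making explicit. A toy sketch of the comparison (the function is illustrative, not part of any real risk tool):

# Expected daily cost = mitigation spend + (incident rate x incident cost).
def daily_cost(incidents_per_day, cost_per_incident, mitigation_cost_per_day=0.0):
    return mitigation_cost_per_day + incidents_per_day * cost_per_incident

before = daily_cost(1.0, 1000.0)               # $1000.00/day, no mitigation
after = daily_cost(1.0 / 7.0, 1000.0, 500.0)   # ~$642.86/day with mitigation

print(f"without mitigation: ${before:,.2f}/day")
print(f"with mitigation:    ${after:,.2f}/day")
assert after < before  # the $500/day mitigation is a clear win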

Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will …

  • …reduce the remaining risk to once-per-month for an additional $10/day.
  • …replace that $500/day mitigation with a $400/day mitigation.

But this is never done. Companies don’t have a sophisticated enough risk matrix to plug in ROI numbers that would reduce cost/risk. Instead, ROI is a calculation done standalone by a vendor pimping product, or by a security engineer building empires within the company.

If you haven’t done risk analysis to begin with (and almost none of you have), then ROI calculations are pointless.

But there are further problems. This is risk analysis as done in industries like oil and gas, which have inanimate risk. Almost all their risks are due to accidental failures, like in the Deepwater Horizon incident. In our industry, cybersecurity, risks are animate, driven by hackers. Our risk models are based on trying to guess what hackers might do.

An example of this problem: if our drug company jacks up the price of an HIV drug, Anonymous hackers may break in and dump all our financial data, and our CFO will go to jail. A lot of our risks now come not from the technical side, but from the whims and fads of the hacker community.

Another example is when some Google researcher finds a vuln in WordPress, and our website gets hacked with it three months from now. We have to forecast not only what hackers can do now, but what they might be able to do in the future.

Finally, there is the problem that in cybersecurity we really can’t distinguish between pesky and existential threats. Take ransomware. A lot of large organizations have just gotten accustomed to wiping a few workers’ machines every day and restoring from backups. It’s a small, pesky problem of little consequence. Then one day a ransomware infection gets domain admin privileges and takes down the entire business for several weeks, as happened with #nPetya. Inevitably, our risk models come down on the high side of estimates, with us claiming that all threats are existential, when in fact most companies continue to survive major breaches.

These difficulties with risk analysis lead us to punt on the problem altogether, but that’s not the right answer. No matter how faulty our risk analysis is, we still have to go through the exercise.

One model of how to do this calculation is architecture. We know we need a certain number of toilets per building, even without doing ROI on the value of those toilets. The same is true for a lot of security engineering. We know we need firewalls, encryption, and OWASP hardening, even without a specific calculation. Passwords and session cookies need to go across SSL. That’s the baseline from which we start to analyze risks and mitigations: what we need beyond SSL, for example.

So stop using “ROI”, or worse, the abomination “ROSI”. Start doing risk analysis.

Growing up alongside tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/08/09/growing-up-alongside-tech/

IndustrialRobot asks… or, uh, asked last month:

industrialrobot: How has your views on tech changed as you’ve got older?

This is so open-ended that it’s actually stumped me for a solid month. I’ve had a surprisingly hard time figuring out where to even start.


It’s not that my views of tech have changed too much — it’s that they’ve changed very gradually. Teasing out and explaining any one particular change is tricky when it happened invisibly over the course of 10+ years.

I think a better framework for this is to consider how my relationship to tech has changed. It’s gone through three pretty distinct phases, each of which has strongly colored how I feel and talk about technology.

Act I

In which I start from nothing.

Nothing is an interesting starting point. You only really get to start there once.

Learning something on my own as a kid was something of a magical experience, in a way that I don’t think I could replicate as an adult. I liked computers; I liked toying with computers; so I did that.

I don’t know how universal this is, but when I was a kid, I couldn’t even conceive of how incredible things were made. Buildings? Cars? Paintings? Operating systems? Where does any of that come from? Obviously someone made them, but it’s not the sort of philosophical point I lingered on when I was 10, so in the back of my head they basically just appeared fully-formed from the æther.

That meant that when I started trying out programming, I had no aspirations. I couldn’t imagine how far I would go, because all the examples of how far I would go were completely disconnected from any idea of human achievement. I started out with BASIC on a toy computer; how could I possibly envision a connection between that and something like a mainstream video game? Every new thing felt like a new form of magic, so I couldn’t conceive that I was even in the same ballpark as whatever process produced real software. (Even seeing the source code for GORILLAS.BAS, it didn’t quite click. I didn’t think to try reading any of it until years after I’d first encountered the game.)

This isn’t to say I didn’t have goals. I invented goals constantly, as I’ve always done; as soon as I learned about a new thing, I’d imagine some ways to use it, then try to build them. I produced a lot of little weird goofy toys, some of which entertained my tiny friend group for a couple days, some of which never saw the light of day. But none of it felt like steps along the way to some mountain peak of mastery, because I didn’t realize the mountain peak was even a place that could be gone to. It was pure, unadulterated (!) playing.

I contrast this to my art career, which started only a couple years ago. I was already in my late 20s, so I’d already spent decades seeing a very broad spectrum of art: everything from quick sketches up to painted masterpieces. And I’d seen the people who create that art, sometimes seen them create it in real-time. I’m even in a relationship with one of them! And of course I’d already had the experience of advancing through tech stuff and discovering first-hand that even the most amazing software is still just code someone wrote.

So from the very beginning, from the moment I touched pencil to paper, I knew the possibilities. I knew that the goddamn Sistine Chapel was something I could learn to do, if I were willing to put enough time in — and I knew that I’m not, so I’d have to settle somewhere a ways before that. I knew that I’d have to put an awful lot of work in before I’d be producing anything very impressive.

I did it anyway (though perhaps waited longer than necessary to start), but those aren’t things I can un-know, and so I can never truly explore art from a place of pure ignorance. On the other hand, I’ve probably learned to draw much more quickly and efficiently than if I’d done it as a kid, precisely because I know those things. Now I can decide I want to do something far beyond my current abilities, then go figure out how to do it. When I was just playing, that kind of ambition was impossible.


So, I played.

How did this affect my views on tech? Well, I didn’t… have any. Learning by playing tends to teach you things in an outward sprawl without many abrupt jumps to new areas, so you don’t tend to run up against conflicting information. The whole point of opinions is that they’re your own resolution to a conflict; without conflict, I can’t meaningfully say I had any opinions. I just accepted whatever I encountered at face value, because I didn’t even know enough to suspect there could be alternatives yet.

Act II

That started to seriously change around, I suppose, the end of high school and beginning of college. I was becoming aware of this whole “open source” concept. I took classes that used languages I wouldn’t otherwise have given a second thought. (One of them was Python!) I started to contribute to other people’s projects. Eventually I even got a job, where I had to work with other people. It probably also helped that I’d had to maintain my own old code a few times.

Now I was faced with conflicting subjective ideas, and I had to form opinions about them! And so I did. With gusto. Over time, I developed an idea of what was Right based on experience I’d accrued. And then I set out to always do things Right.

That’s served me decently well with some individual problems, but it also led me to inflict a lot of unnecessary pain on myself. Several endeavors languished for no other reason than my dissatisfaction with the architecture, long before the basic functionality was done. I started a number of “pure” projects around this time, generic tools like imaging libraries that I had no direct need for. I built them for the sake of them, I guess because I felt like I was improving some niche… but of course I never finished any. It was always in areas I didn’t know that well in the first place, which is a fine way to learn if you have a specific concrete goal in mind — but it turns out that building a generic library for editing images means you have to know everything about images. Perhaps that ambition went a little haywire.

I’ve said before that this sort of (self-inflicted!) work was unfulfilling, in part because the best outcome would be that a few distant programmers’ lives are slightly easier. I do still think that, but I think there’s a deeper point here too.

In forgetting how to play, I’d stopped putting any of myself in most of the work I was doing. Yes, building an imaging library is kind of a slog that someone has to do, but… I assume the people who work on software like PIL and ImageMagick are actually interested in it. The few domains I tried to enter and revolutionize weren’t passions of mine; I just happened to walk through the neighborhood one day and decided I could obviously do it better.

Not coincidentally, this was the same era of my life that led me to write stuff like that PHP post, which you may notice I am conspicuously not even linking to. I don’t think I would write anything like it nowadays. I could see myself approaching the same subject, but purely from the point of view of language design, with more contrasts and tradeoffs and less going for volume. I certainly wouldn’t lead off with inflammatory puffery like “PHP is a community of amateurs”.

Act III

I think I’ve mellowed out a good bit in the last few years.

It turns out that being Right is much less important than being Not Wrong — i.e., rather than trying to make something perfect that can be adapted to any future case, just avoid as many pitfalls as possible. Code that does something useful has much more practical value than unfinished code with some pristine architecture.

Nowhere is this more apparent than in game development, where all code is doomed to be crap and the best you can hope for is to stem the tide. But there’s also a fixed goal that’s completely unrelated to how the code looks: does the game work, and is it fun to play? Yes? Ship the damn thing and forget about it.

Games are also nice because it’s very easy to pour my own feelings into them and evoke feelings in the people who play them. They’re mine, something with my fingerprints on them — even the games I’ve built with glip have plenty of my own hallmarks, little touches I added on a whim or attention to specific details that I care about.

Maybe a better example is the Doom map parser I started writing. It sounds like a “pure” problem again, except that I actually know an awful lot about the subject already! I also cleverly (accidentally) released some useful results of the work I’ve done thusfar — like statistics about Doom II maps and a few screenshots of flipped stock maps — even though I don’t think the parser itself is far enough along to release yet. The tool has served a purpose, one with my fingerprints on it, even without being released publicly. That keeps it fresh in my mind as something interesting I’d like to keep working on, eventually. (When I run into an architecture question, I step back for a while, or I do other work in the hopes that the solution will reveal itself.)

I also made two simple Pokémon ROM hacks this year, despite knowing nothing about Game Boy internals or assembly when I started. I just decided I wanted to do an open-ended thing beyond my reach, and I went to do it, not worrying about cleanliness and willing to accept a bumpy ride to get there. I played, but in a more experienced way, invoking the stuff I know (and the people I’ve met!) to help me get a running start in completely unfamiliar territory.


This feels like a really fine distinction that I’m not sure I’m doing justice. I don’t know if I could’ve appreciated it three or four years ago. But I missed making toys, and I’m glad I’m doing it again.

In short, I forgot how to have fun with programming for a little while, and I’ve finally started to figure it out again. And that’s far more important than whether you use PHP or not.

Me on Restaurant Surveillance Technology

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/me_on_restauran.html

I attended the National Restaurant Association exposition in Chicago earlier this year, and looked at all the ways modern restaurant IT is spying on people.

But there’s also a fundamentally creepy aspect to much of this. One of the prime ways to increase value for your brand is to use the Internet to practice surveillance of both your customers and employees. The customer side feels less invasive: Loyalty apps are pretty nice, if in fact you generally go to the same place, as is the ability to place orders electronically or make reservations with a click. The question, Schneier asks, is “who owns the data?” There’s value to collecting data on spending habits, as we’ve seen across e-commerce. Are restaurants fully aware of what they are giving away? Schneier, a critic of data mining, points out that it becomes especially invasive through “secondary uses,” when the “data is correlated with other data and sold to third parties.” For example, perhaps you’ve entered your name, gender, and age into a taco loyalty app (12th taco free!). Later, the vendors of that app sell your data to other merchants who know where and when you eat, whether you are a vegetarian, and lots of other data that you have accidentally shed. Is that what customers really want?