Tag Archives: security

ROI is not a cybersecurity concept

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/roi-is-not-cybersecurity-concept.html

In the cybersecurity community, much time is spent trying to speak the language of business in order to communicate our problems to business leaders. One way we do this is by trying to adapt the concept of “return on investment,” or “ROI,” to explain why they need to spend more money. Stop doing this. It’s nonsense. ROI is a concept pushed by vendors in order to justify why you should pay money for their snake-oil security products. Don’t play the vendor’s game.

The correct concept is simply “risk analysis”. Here’s how it works.

List out all the risks. For each risk, calculate:

  • How often it occurs.
  • How much damage it does.
  • How to mitigate it.
  • How effective the mitigation is (reduces chance and/or cost).
  • How much the mitigation costs.

If you have a risk that will occur once per day on average, costing $1000 each time, then a mitigation costing $500/day that reduces the likelihood to once per week is a clear win for investment.
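The arithmetic behind that judgment is trivial; a quick sketch in Python, using the illustrative numbers above:

# Expected-loss arithmetic for the example above (all numbers illustrative).
incidents_per_day = 1.0            # the risk occurs once per day on average
cost_per_incident = 1000.0         # $1000 of damage each time

unmitigated = incidents_per_day * cost_per_incident        # $1000/day

mitigation_cost = 500.0            # the mitigation costs $500/day
residual = (1.0 / 7) * cost_per_incident                   # once per week, ~$143/day

print(f"unmitigated: ${unmitigated:.0f}/day")
print(f"mitigated:   ${mitigation_cost + residual:.0f}/day")   # ~$643/day, a clear win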

Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will …

  • …reduce the remaining risk to once-per-month for an additional $10/day.
  • …replace that $500/day mitigation with a $400/day mitigation.
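In principle, evaluating those hypothetical products is the same trivial arithmetic; a sketch:

# Comparing the hypothetical products against the current $500/day mitigation.
baseline = 500 + 1000 / 7          # current mitigation plus residual loss, ~$643/day
option_a = 510 + 1000 / 30         # +$10/day, risk cut to once per month, ~$543/day
option_b = 400 + 1000 / 7          # cheaper mitigation, same residual risk, ~$543/day
print(baseline - option_a, baseline - option_b)   # each saves roughly $100/day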

But this is never done. Companies don’t have a risk matrix sophisticated enough to plug ROI numbers into in order to reduce cost/risk. Instead, ROI is a calculation done standalone by a vendor pimping a product, or by a security engineer building an empire within the company.

If you haven’t done risk analysis to begin with (and almost none of you have), then ROI calculations are pointless.

But there are further problems. This is risk analysis as done in industries like oil and gas, which face inanimate risk. Almost all their risks are due to accidental failures, as in the Deepwater Horizon incident. In our industry, cybersecurity, risks are animate — driven by hackers. Our risk models are based on trying to guess what hackers might do.

An example of this problem: when our drug company jacks up the price of an HIV drug, Anonymous hackers will break in and dump all our financial data, and our CFO will go to jail. A lot of our risks come not from the technical side, but from the whims and fads of the hacker community.

Another example is when some Google researcher finds a vuln in WordPress, and our website gets hacked through it three months from now. We have to forecast not only what hackers can do now, but what they might be able to do in the future.

Finally, there is the problem that in cybersecurity we really can’t distinguish pesky threats from existential ones. Take ransomware. A lot of large organizations have simply gotten accustomed to wiping a few workers’ machines every day and restoring from backups. It’s a small, pesky problem of little consequence. Then one day ransomware gets domain admin privileges and takes down the entire business for several weeks, as happened after #nPetya. Inevitably, our risk models come down on the high side of the estimates, with us claiming that all threats are existential, when in fact most companies continue to survive major breaches.

These difficulties with risk analysis lead us to punt on the problem altogether, but that’s not the right answer. No matter how faulty our risk analysis is, we still have to go through the exercise.

One model of how to do this calculation is architecture. We know we need a certain number of toilets per building, even without doing ROI on the value of such toilets. The same is true for a lot of security engineering. We know we need firewalls, encryption, and OWASP hardening, even without specifically doing a calculation. Passwords and session cookies need to go across SSL. That’s the baseline from which we then analyze risks and mitigations — what we need beyond SSL, for example.

So stop using “ROI”, or worse, the abomination “ROSI”. Start doing risk analysis.

Security updates for Tuesday

Post Syndicated from corbet original https://lwn.net/Articles/731678/rss

Security updates have been issued by Debian (extplorer and libraw), Fedora (mingw-libsoup, python-tablib, ruby, and subversion), Mageia (avidemux, clamav, nasm, php-pear-CAS, and shutter), Oracle (xmlsec1), Red Hat (openssl and tomcat), Scientific Linux (authconfig, bash, curl, evince, firefox, freeradius, gdm, gnome-session, ghostscript, git, glibc, gnutls, groovy, GStreamer, gtk-vnc, httpd, java-1.7.0-openjdk, kernel, libreoffice, libsoup, libtasn1, log4j, mariadb, mercurial, NetworkManager, openldap, openssh, pidgin, pki-core, postgresql, python, qemu-kvm, samba, spice, subversion, tcpdump, tigervnc, fltk, tomcat, X.org, and xmlsec1), SUSE (git), and Ubuntu (augeas, cvs, and texlive-base).

Empowerment, Engagement, and Education for Women in Tech

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/empowerment-engagement-and-education-for-women-in-tech/

I’ve been earning a living in the technology industry since 1977, when I worked in one of the first computer stores in the country as a teenager. Looking back over the past 40 years, and realizing that the Altair, IMSAI, Sol-20, and North Star Horizon machines that I learned about, built, debugged, programmed, sold, and supported can now be seen in museums (Seattle’s own Living Computer Museum is one of the best), helps me to appreciate that the world I live in changes quickly, and to understand that I need to do the same. This applies to technology, to people, and to attitudes.

I lived in a suburb of Boston in my early teens. At that time, diversity meant that one person in my public school had come all the way from (gasp) England a few years earlier. When I went to college I began to meet people from other countries and continents and to appreciate the fresh vantage points and approaches that they brought to the workplace and to the problems that we tackled together.

Back in those days, there were virtually no women working as software engineers, managers, or entrepreneurs. Although the computer store was owned by a couple and the wife did all of the management, this was the exception rather than the rule at that time, and for too many years after that. Today, I am happy to be part of a team that brings together the most capable people, regardless of their gender, race, background, or anything other than their ability to do a kick-ass job (Ana, Tara, Randall, Tina, Devin, and Sara, I’m talking about all of you).

We want to do all that we can to encourage young women to prepare to become the next generation of engineers, managers, and entrepreneurs. AWS is proud to support Girls Who Code (including the Summer Immersion Program), Girls in Tech, and other organizations supporting women and underrepresented communities in tech. I sincerely believe that these organizations will be able to move the needle in the right direction. However, like any large-scale social change, this is going to take some time with results visible in years and decades, and only with support & participation from those of us already in the industry.

In conjunction with me&Eve, we were able to speak with some of the attendees at the most recent Girls in Tech Catalyst conference (that’s our booth in the picture). Click through to see what the attendees had to say:

I’m happy to be part of an organization that supports such a worthwhile cause, and that challenges us to make our organization ever-more diverse. While reviewing this post with my colleagues I learned about We Power Tech, an AWS program designed to build skills and foster community and to provide access to Amazon executives who are qualified to speak about the program and about diversity. In conjunction with our friends at Accenture, we have assembled a strong Diversity at re:Invent program.

Jeff;

PS – I did my best to convince Ana, Tara, Tina, or Sara to write this post. Tara finally won the day when she told me “You have raised girls into women, and you are passionate in seeing them succeed in their chosen fields with respect and equity. Your post conveying that could be powerful.”

Security updates for Monday

Post Syndicated from jake original https://lwn.net/Articles/731567/rss

Security updates have been issued by Arch Linux (newsbeuter), Debian (augeas, curl, ioquake3, libxml2, newsbeuter, and strongswan), Fedora (bodhi, chicken, chromium, cryptlib, cups-filters, cyrus-imapd, glibc, mingw-openjpeg2, mingw-postgresql, qpdf, and torbrowser-launcher), Gentoo (bzip2, evilvte, ghostscript-gpl, Ked Password Manager, and rar), Mageia (curl, cvs, fossil, jetty, kernel, kernel-linus, kernel-tmb, libmspack, mariadb, mercurial, potrace, ruby, and taglib), Oracle (kernel), Red Hat (xmlsec1), and Ubuntu (graphite2 and strongswan).

Top 10 Most Pirated Movies of The Week on BitTorrent – 08/21/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-082117/

This week we have two newcomers in our chart.

Baywatch is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Rank (last week) | Movie name | IMDb rating / trailer
1 (…) Baywatch 5.7 / trailer
2 (1) Guardians of the Galaxy Vol. 2 8.0 / trailer
3 (2) The Mummy 2017 5.8 / trailer
4 (3) King Arthur: Legend of the Sword 7.2 / trailer
5 (6) Wonder Woman (Subbed HDrip) 8.2 / trailer
6 (4) Spider-Man: Homecoming (HDTS) 8.0 / trailer
7 (…) Security 8.2 / trailer
8 (5) The Boss Baby 6.5 / trailer
9 (10) Despicable Me 3 (HDTS) 6.4 / trailer
10 (9) Ghost In the Shell 6.8 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The end of Gentoo’s hardened kernel

Post Syndicated from corbet original https://lwn.net/Articles/731477/rss

Gentoo has long provided a hardened kernel package, but that is coming to an end. “As you may know the core of sys-kernel/hardened-sources has been the grsecurity patches. Recently the grsecurity developers have decided to limit access to these patches. As a result, the Gentoo Hardened team is unable to ensure a regular patching schedule and therefore the security of the users of these kernel sources. Thus, we will be masking hardened-sources on the 27th of August and will proceed to remove them from the package repository by the end of September.”

On ISO standardization of blockchains

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/on-iso-standardization-of-blockchains.html

So ISO, the primary international standards organization, is seeking to standardize blockchain technologies. On the surface, this seems a reasonable idea, creating a common standard that everyone can interoperate with.

But it can be a silly idea in practice; it should not simply be assumed that this is a good thing to do.

The value of official standards

You don’t need the official imprimatur of a government committee for something to be a “standard”. The Internet itself is a prime example of that.

In the 1980s, the ISO and the IETF (Internet Engineering Task Force) pursued competing standards for creating a world-wide “internet”. The IETF was an informal group of technologists with essentially no official standing.

The ISO version of the Internet failed. Their process was to bring multiple stakeholders from business, government, and universities together in committees to debate competing interests. The result was something so horrible that it could never work in practice.

The IETF succeeded. It consisted of engineers just building things. Rather than officially “standardized”, these things were “described”, so that others knew enough to build their own version that interoperated. Once lots of different people built interoperating versions of something, then it became a “standard”.

In other words, the way the Internet came to be, standardization followed interoperability — it didn’t create interoperability.

In the end, the ISO gave up on their standards and adopted the IETF standards. The ISO brought no value to the development of Internet standards. Whether they ratified the Internet’s “TCP/IP” standard, ignored it, or condemned it, the Internet would exist today anyway, and a competing ISO-blessed internetwork would not.

The same question exists for blockchain technologies. Groups are off busy innovating quickly, creating their own standards. If the ISO blesses one, or creates its own, it’s unlikely to have any impact on interoperability.

Blockchain vs. chaining blocks

The excitement over blockchains is largely driven by people who don’t know the details, who don’t understand the difference between a blockchain like Bitcoin and the problem they are trying to solve.

Consider a record keeping system, especially public records. Storing them in a blockchain seems like a natural idea.

But in fact, it’s a terrible idea. A Bitcoin-style blockchain has a lot of features you don’t want, like “proof-of-work” signing. It is also missing necessary features, like bulk storage with redundancy (backups). Sure, Bitcoin has redundancy, but by brute force, storing the blockchain in thousands of places around the Internet. This is far from what a public records system would need, which would store a lot more data with far fewer backup copies (fewer than 10).

The only real overlap between Bitcoin and a public records system is a “signing chain”. But this is something that already existed before Bitcoin. It’s what the Bitcoin blockchain was built on top of — it’s not the blockchain itself.
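A signing chain is nothing exotic: each record simply incorporates a hash of the record before it, so tampering with any entry breaks every later link. A minimal sketch in Python, with plain hashes standing in for signatures:

import hashlib, json

def add_record(chain, data):
    # Each record commits to the hash of its predecessor.
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"data": data, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

chain = []
add_record(chain, "deed: parcel 42 -> Alice")
add_record(chain, "deed: parcel 42 -> Bob")
# Altering the first record would change its hash and invalidate the second.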

It’s like people discovering “cryptography” for the first time when they looked at Bitcoin, ignoring the thousand-year history of crypto, and now every time they see a need for “crypto” they think “Bitcoin blockchain”.

Consensus and forking

The entire point of Bitcoin, the reason it was created, was as the antithesis to centralized standardization like ISO. Standardizing blockchains misses the entire point of their existence. The Bitcoin manifesto is that standardization comes from acclamation not proclamation, and that many different standards are preferable to a single one.

This is not just a theoretical idea but one built into Bitcoin’s blockchain technology. “Consensus” is achieved by the proof-of-work mechanism, so that those who do the most work are the ones that drive the consensus. When irreconcilable differences arise, the blockchain “forks”, with each side continuing on with their now non-interoperable blockchains. Such forks are not a sin, but part of the natural evolution.

We saw this with the recent fork of Bitcoin. There are now so many transactions that they no longer fit within the block size limit. One group chose a change to make transactions smaller. Another group chose a change to make block sizes larger.

It is this problem of consensus that is the real innovation Bitcoin brought to blockchains, not the chained signing of public transaction records.
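That proof-of-work mechanism is easy to sketch: grind nonces until the block’s hash falls below a difficulty target, so that influence over consensus is proportional to the hashing work done. A toy version in Python:

import hashlib

def mine(block_data, difficulty_bits=20):
    # Find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce    # proof that roughly 2**difficulty_bits hashes were tried
        nonce += 1

print(mine("prev_hash + transactions"))   # takes about a million attempts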

Ethereum

What “blockchain standardization” is going to mean in practice is not the blockchain itself, but trying to standardize the Ethereum version. What makes Ethereum different is the “smart contracts” programming language, which has financial institutions excited.

This is a bad idea because, from a cybersecurity perspective, Ethereum’s programming language is flawed. Different bugs in “smart contracts” have led to multiple $100-million hacks, such as the infamous “DAO collapse”.

While it has interesting possibilities, we should be scared of standardizing Ethereum’s language before it works.

Conclusion

People who matter are too busy innovating, creating their own blockchain standards. There is little that the ISO can do to improve this. Their official imprimatur is not needed to foster innovation and interoperability — if they have any effect at all, it will just be to interfere.

Friday Squid Blogging: Brittle Star Catches a Squid

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/friday_squid_bl_589.html

Watch a brittle star catch a squid, and then lose it to another brittle star.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/731405/rss

Security updates have been issued by Debian (kernel and libmspack), Fedora (groovy18 and nasm), openSUSE (curl, java-1_8_0-openjdk, libplist, shutter, and thunderbird), Oracle (git, groovy, kernel, and mercurial), Red Hat (rh-git29-git), SUSE (openvswitch), and Ubuntu (c-ares, clamav, firefox, libmspack, and openjdk-7).

Unfixable Automobile Computer Security Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/unfixable_autom.html

There is an unpatchable vulnerability that affects most modern cars. It’s buried in the Controller Area Network (CAN):

Researchers say this flaw is not a vulnerability in the classic meaning of the word. This is because the flaw is more of a CAN standard design choice that makes it unpatchable.

Patching the issue means changing how the CAN standard works at its lowest levels. Researchers say car manufacturers can only mitigate the vulnerability via specific network countermeasures, but cannot eliminate it entirely.

Details on how the attack works are here:

The CAN messages, including errors, are called “frames.” Our attack focuses on how CAN handles errors. Errors arise when a device reads values that do not correspond to the original expected value on a frame. When a device detects such an event, it writes an error message onto the CAN bus in order to “recall” the errant frame and notify the other devices to entirely ignore the recalled frame. This mishap is very common and is usually due to natural causes, a transient malfunction, or simply by too many systems and modules trying to send frames through the CAN at the same time.

If a device sends out too many errors, then­ — as CAN standards dictate — ­it goes into a so-called Bus Off state, where it is cut off from the CAN and prevented from reading and/or writing any data onto the CAN. This feature is helpful in isolating clearly malfunctioning devices and stops them from triggering the other modules/systems on the CAN.

This is the exact feature that our attack abuses. Our attack triggers this particular feature by inducing enough errors such that a targeted device or system on the CAN is made to go into the Bus Off state, and thus rendered inert/inoperable. This, in turn, can drastically affect the car’s performance to the point that it becomes dangerous and even fatal, especially when essential systems like the airbag system or the antilock braking system are deactivated. All it takes is a specially-crafted attack device, introduced to the car’s CAN through local access, and the reuse of frames already circulating in the CAN rather than injecting new ones (as previous attacks in this manner have done).
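The fault-confinement rules being abused here are simple to model. Under the CAN standard, a node’s transmit error counter rises by 8 for each transmit error and falls by 1 for each successful transmit; once it exceeds 255, the node must enter the Bus Off state. A toy simulation of the attack’s effect (illustrative only, not code from the paper):

# Toy model of CAN fault confinement (error counters per ISO 11898).
class CanNode:
    def __init__(self):
        self.tec = 0                # transmit error counter
        self.bus_off = False

    def transmit(self, error_induced):
        if self.bus_off:
            return
        if error_induced:
            self.tec += 8           # +8 for every transmit error
        else:
            self.tec = max(0, self.tec - 1)  # -1 for every success
        if self.tec > 255:          # threshold dictated by the standard
            self.bus_off = True     # node is cut off from the bus

victim = CanNode()
frames = 0
while not victim.bus_off:
    victim.transmit(error_induced=True)   # attacker corrupts every victim frame
    frames += 1
print(f"victim forced into Bus Off after {frames} frames")   # 32 frames suffice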

Slashdot thread.

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/731309/rss

Security updates have been issued by CentOS (git), Debian (firefox-esr and mariadb-10.0), Gentoo (bind and tnef), Mageia (kauth, kdelibs4, poppler, subversion, and vim), openSUSE (fossil, git, libheimdal, libxml2, minicom, nodejs4, nodejs6, openjpeg2, openldap2, potrace, subversion, and taglib), Oracle (git and kernel), Red Hat (git, groovy, httpd24-httpd, and mercurial), Scientific Linux (git), and SUSE (freeradius-server, ImageMagick, and subversion).

Do the Police Need a Search Warrant to Access Cell Phone Location Data?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/do_the_police_n.html

The US Supreme Court is deciding a case that will establish whether the police need a warrant to access cell phone location data. This week I signed on to an amicus brief from a wide array of security technologists outlining the technical arguments as to why the answer should be yes. Susan Landau summarized our arguments.

A bunch of tech companies also submitted a brief.

New – VPC Endpoints for DynamoDB

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-vpc-endpoints-for-dynamodb/

Starting today Amazon Virtual Private Cloud (VPC) Endpoints for Amazon DynamoDB are available in all public AWS regions. You can provision an endpoint right away using the AWS Management Console or the AWS Command Line Interface (CLI). There are no additional costs for a VPC Endpoint for DynamoDB.

Many AWS customers run their applications within an Amazon Virtual Private Cloud (VPC) for security or isolation reasons. Previously, if you wanted your EC2 instances in your VPC to be able to access DynamoDB, you had two options. You could use an Internet Gateway (with a NAT Gateway or by assigning your instances public IPs), or you could route all of your traffic to your local infrastructure via VPN or AWS Direct Connect and then back to DynamoDB. Both of these solutions had security and throughput implications, and it could be difficult to configure NACLs or security groups to restrict access to just DynamoDB. Here is a picture of the old infrastructure.

Creating an Endpoint

Let’s create a VPC Endpoint for DynamoDB. We can make sure our region supports the endpoint with the DescribeVpcEndpointServices API call.


aws ec2 describe-vpc-endpoint-services --region us-east-1
{
    "ServiceNames": [
        "com.amazonaws.us-east-1.dynamodb",
        "com.amazonaws.us-east-1.s3"
    ]
}

Great, so I know my region supports these endpoints and I know what my regional endpoint is. I can grab one of my VPCs and provision an endpoint with a quick call to the CLI or through the console. Let me show you how to use the console.

First I’ll navigate to the VPC console and select “Endpoints” in the sidebar. From there I’ll click “Create Endpoint” which brings me to this handy console.

You’ll notice the AWS Identity and Access Management (IAM) policy section for the endpoint. This supports all of the fine-grained access control that DynamoDB supports in regular IAM policies, and you can restrict access based on IAM policy conditions.

For now I’ll give full access to my instances within this VPC and click “Next Step”.

This brings me to a list of route tables in my VPC and asks me which of these route tables I want to assign my endpoint to. I’ll select one of them and click “Create Endpoint”.
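Those console clicks correspond to a single API call. Here is a rough boto3 equivalent (the VPC and route-table IDs below are placeholders; the policy is omitted, so the endpoint defaults to full access):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for DynamoDB and attach it to a route table.
# The resource IDs below are placeholders; substitute your own.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])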

Keep in mind the note of warning in the console: if you have source restrictions on DynamoDB based on public IP addresses, the source IP of your instances accessing DynamoDB will now be their private IP addresses.

After adding the VPC Endpoint for DynamoDB to our VPC our infrastructure looks like this.

That’s it folks! It’s that easy. It’s provided at no cost. Go ahead and start using it today. If you need more details you can read the docs here.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/731167/rss

Security updates have been issued by CentOS (firefox, httpd, and java-1.7.0-openjdk), Fedora (cups-filters, potrace, and qpdf), Mageia (libsoup and mingw32-nsis), openSUSE (kernel), Oracle (httpd, kernel, spice, and subversion), Red Hat (httpd, java-1.7.1-ibm, and subversion), Scientific Linux (httpd), Slackware (xorg), SUSE (java-1_8_0-openjdk), and Ubuntu (firefox, linux, linux-aws, linux-gke, linux-raspi2, linux-snapdragon, linux-lts-xenial, postgresql-9.3, postgresql-9.5, postgresql-9.6, and ubufox).

AWS Partner Webinar Series – August 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-partner-webinar-series-august-2017/

We love bringing our customers helpful information and we have another cool series we are excited to tell you about. The AWS Partner Webinar Series is a selection of live and recorded presentations covering a broad range of topics at varying technical levels and scale. A little different from our AWS Online TechTalks, each AWS Partner Webinar is hosted by an AWS solutions architect and an AWS Competency Partner who has successfully helped customers evaluate and implement the tools, techniques, and technologies of AWS.

Check out this month’s webinars and let us know which ones you found the most helpful! All schedule times are shown in the Pacific Time (PDT) time zone.

Security Webinars

Sophos
Seeing More Clearly: ATLO Software Secures Online Training Solutions for Correctional Facilities with SophosUTM on AWS
August 17th, 2017 | 10:00 AM PDT

F5
F5 on AWS: How MailControl Improved their Application Visibility and Security
August 23, 2017 | 10:00 AM PDT

Big Data Webinars

Tableau, Matillion, 47Lining, NorthBay
Unlock Insights and Reduce Costs by Modernizing Your Data Warehouse on AWS
August 22, 2017 | 10:00 AM PDT

Storage Webinars

StorReduce
How Globe Telecom does Primary Backups via StorReduce to the AWS Cloud
August 29, 2017 | 8:00 AM PDT

Commvault
Moving Forward Faster: How Monash University Automated Data Movement for 3500 Virtual Machines to AWS with Commvault
August 29, 2017 | 1:00 PM PDT

Dell EMC
Moving Forward Faster: Protect Your Workloads on AWS With Increased Scale and Performance
August 30, 2017 | 11:00 AM PDT

Druva
How Hatco Protects Against Ransomware with Druva on AWS
September 13, 2017 | 10:00 AM PDT

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/731035/rss

Security updates have been issued by Arch Linux (audiofile, git, jdk7-openjdk, libytnef, mercurial, spice, strongswan, subversion, and xorg-server), Debian (gajim, krb5, and libraw), Fedora (kernel, postgresql, sscep, subversion, and varnish), Mageia (firefox, phpldapadmin, and x11-server), Red Hat (kernel and spice), SUSE (subversion), and Ubuntu (libgd2).

Wirzenius: Retiring Obnam

Post Syndicated from corbet original https://lwn.net/Articles/730986/rss

Lars Wirzenius announces that he is ending development of the Obnam backup system. “After some careful thought, I fear that the maintainability problems of Obnam can realistically only be solved by a complete rewrite from scratch, and I’m not up to doing that. If you use Obnam, you should migrate to some other backup solution. Don’t worry, you have until the end of the year. I will be around and I intend to fix any serious bugs in Obnam; in particular, security flaws. But you should start looking for a replacement sooner rather than later.” LWN looked at Obnam in 2012.