2016-08-17 java, unicode, emoji

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3316

I wondered whether to title this “fuck you, Java”.

For some time now we have been seeing problems with sending emoticons (in case you didn’t know, Unicode has filled up with all sorts of crap) – in some cases they get mangled, don’t arrive correctly, and so on. In our setup, strings are passed over JNI to a C/C++ library, from which they actually go out over the network; along the way there is a bit of debug output that shows what is being sent…

So there I am, watching how, for the emoji that gets mangled, I receive for some reason 6 bytes from the Java side instead of the 4 I expect. The 6 bytes look strange, they don’t resemble UTF-8 (at least as far as I know it), and after some reading I discover that the bast^Wwonderful people behind Java understand “UTF-8” to mean modified UTF-8, i.e. the larger Unicode characters are encoded in such a way that nothing other than another Java can make sense of them. This confuses the SMS centers along the way and all sorts of other implementations, and leads to problems that are bizarre to debug.

Of course, this only happens with certain very, very large emoji that exist only in the default keyboards of certain phones, which in turn produces lots of false leads, such as “Samsung broke this in Android 5”, “the server is eating them”, “cosmic rays”, and so on.
(the “keyboards” in Android are software components that apparently every half-witted manufacturer writes on its own)

The solution, at least in our case, is to translate their UTF-8 into the normal kind in the JNI layer. Digging around the internet, I found more complaints like this, but apparently not everyone manages to run into it, because before they get that far it turns out that, for example, their MySQL database doesn’t use the right storage type, their node.js only has UCS-2, which can’t hold these characters at all, and so on.
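
For reference: Java’s modified UTF-8 (per the JNI specification, essentially CESU-8 plus a two-byte encoding of NUL) encodes characters outside the BMP as two 3-byte UTF-16 surrogate sequences instead of one 4-byte sequence, which is exactly where the 6 bytes come from. A minimal Python sketch of the encoding (the JNI fix does the reverse translation):

```python
def to_modified_utf8(s: str) -> bytes:
    """Encode a string the way JNI's GetStringUTFChars does:
    NUL as 0xC0 0x80, supplementary characters as two 3-byte
    UTF-8-style encoded UTF-16 surrogates (6 bytes total)."""
    out = bytearray()
    for ch in s:
        cp = ord(ch)
        if cp == 0:
            out += b"\xc0\x80"          # modified UTF-8 never emits a raw NUL
        elif cp < 0x10000:
            out += ch.encode("utf-8")   # BMP characters match standard UTF-8
        else:
            cp -= 0x10000               # split into a UTF-16 surrogate pair
            for sp in (0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)):
                out += bytes([0xE0 | (sp >> 12),
                              0x80 | ((sp >> 6) & 0x3F),
                              0x80 | (sp & 0x3F)])
    return bytes(out)

emoji = "\U0001F600"
print(emoji.encode("utf-8").hex())    # f09f9880     (4 bytes, standard UTF-8)
print(to_modified_utf8(emoji).hex())  # eda0bdedb880 (6 bytes, what Java emits)

# Normalizing back to standard UTF-8 (what the JNI-layer fix amounts to):
raw = to_modified_utf8(emoji)
fixed = (raw.decode("utf-8", "surrogatepass")
            .encode("utf-16", "surrogatepass").decode("utf-16")
            .encode("utf-8"))
print(fixed.hex())                    # f09f9880
```

Anything along the path that parses the 6-byte form as plain UTF-8 will see two invalid surrogate code points instead of one emoji, which is exactly the mangling described above.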

Many people simply say “don’t use those things”. I seriously wish that were an option for me too…

Go 1.7 released

Post Syndicated from corbet original http://lwn.net/Articles/697352/rss

Version 1.7 of the Go language has been released. “There is one tiny language change in this release. The section on terminating statements clarifies that to determine whether a statement list ends in a terminating statement, the ‘final non-empty statement’ is considered the end, matching the existing behavior of the gc and gccgo compiler toolchains.” On the other hand, there appear to be significant optimization improvements; see the release notes for details.

Security advisories for Tuesday

Post Syndicated from ris original http://lwn.net/Articles/697336/rss

Debian-LTS has updated extplorer (archive traversal).

Fedora has updated jasper (F24: multiple vulnerabilities) and kernel (F24; F23: denial of service).

openSUSE has updated harfbuzz (Leap42.1, 13.2: multiple vulnerabilities) and squid (Leap42.1: multiple vulnerabilities).

Oracle has updated kernel 4.1.12 (OL7; OL6: information disclosure) and kernel 3.8.13 (OL7; OL6: information disclosure).

SUSE has updated php5 (SLE11-SP2: multiple vulnerabilities).

Ubuntu has updated openssh (two vulnerabilities).

Major NSA/Equation Group Leak

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/major_nsaequati.html

The NSA was badly hacked in 2013, and we’re just now learning about it.

A group of hackers called “The Shadow Brokers” claim to have hacked the NSA, and are posting data to prove it. The data is source code from “The Equation Group,” the name given to a sophisticated hacking operation exposed last year and attributed to the NSA. Some details:

The Shadow Brokers claimed to have hacked the Equation Group and stolen some of its hacking tools. They publicized the dump on Saturday, tweeting a link to the manifesto to a series of media companies.

The dumped files mostly contain installation scripts, configurations for command and control servers, and exploits targeted to specific routers and firewalls. The names of some of the tools correspond with names used in Snowden documents, such as “BANANAGLEE” or “EPICBANANA.”

Nicholas Weaver has analyzed the data and believes it is real:

But the proof itself appears to be very real. The proof file is 134 MB of data compressed, expanding out to a 301 MB archive. This archive appears to contain a large fraction of the NSA’s implant framework for firewalls, including what appears to be several versions of different implants, server side utility scripts, and eight apparent exploits for a variety of targets.

The exploits themselves appear to target Fortinet, Cisco, Shaanxi Networkcloud Information Technology (sxnc.com.cn) Firewalls, and similar network security systems. I will leave it to others to analyze the reliability, versions supported, and other details. But nothing I’ve found in either the exploits or elsewhere is newer than 2013.

Because of the sheer volume and quality, it is overwhelmingly likely this data is authentic. And it does not appear to be information taken from compromised systems. Instead the exploits, binaries with help strings, server configuration scripts, 5 separate versions of one implant framework, and all sorts of other features indicate that this is analyst-side code — the kind that probably never leaves the NSA.

I agree with him. This just isn’t something that can be faked in this way. (Good proof would be for The Intercept to run the code names in the new leak against their database, and confirm that some of the previously unpublished ones are legitimate.)

This is definitely not Snowden stuff. This isn’t the sort of data he took, and the release mechanism is not one that any of the reporters with access to the material would use. This is someone else, probably an outsider…probably a government.

Weaver again:

But the big picture is a far scarier one. Somebody managed to steal 301 MB of data from a TS//SCI system at some point between 2013 and today. Possibly, even probably, it occurred in 2013. But the theft also could have occurred yesterday with a simple utility run to scrub all newer documents. Relying on the file timestamps — which are easy to modify — the most likely date of acquisition was June 11, 2013. That is two weeks after Snowden fled to Hong Kong and six days after the first Guardian publication. That would make sense, since in the immediate response to the leaks, as the NSA furiously ran down possible sources, it may have accidentally or deliberately eliminated this adversary’s access.

Okay, so let’s think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it’s a signal to the Obama Administration: “Before you even think of sanctioning us for the DNC hack, know where we’ve been and what we can do to you.”

They claim to be auctioning off the rest of the data to the highest bidder. I think that’s PR nonsense. More likely, that second file is random nonsense, and this is all we’re going to get. It’s a lot, though. Yesterday was a very bad day for the NSA.

EDITED TO ADD: Snowden’s comments. He thinks it’s an “NSA malware staging server” that was hacked.

EDITED TO ADD (8/18): Dave Aitel also thinks it’s Russia.

EDITED TO ADD (8/19): Two news articles.

Cisco has analyzed the vulnerabilities for their products found in the data. They found several that they patched years ago, and one new one they didn’t know about yet. See also this about the vulnerabilities.

EDITED TO ADD (8/20): More about the vulnerabilities found in the data.

Previously unreleased material from the Snowden archive proves that this data dump is real, and that the Equation Group is the NSA.

Managing the Backup of Multiple Windows Servers

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/managing-windows-server-backup/


In a recent post we looked at how you could use Backblaze B2 in concert with CloudBerry to back up your Windows Server data to the Backblaze cloud. In the process, you could save up to 75% over similar solutions by using the B2/CloudBerry combination. But what if you needed to back up 5 Windows Servers, or 10, or 100? Would the same solution work? More importantly, would the savings scale with the number of servers? The answer is yes to all these questions, but there’s a much easier way: use Backblaze B2 in combination with the CloudBerry Managed Backup Service (MBS).

The B2/MBS integration

The B2/MBS integration is now available on the Windows platform.  The goals of the solution are to:

  • easily manage the backup of multiple servers, workstations, and computers
  • economically store that data off-site, and
  • have the data be restorable when needed.

To accomplish this, data on a given Windows Server is first encrypted on the source server, then transferred according to the CloudBerry backup plan to Backblaze B2 Cloud Storage, where it is stored in encrypted form. When needed, data can be restored in near-real time by using the MBS management console. The data is retrieved from B2 and passed to the destination server, where it is decrypted and ready for use.

IT professionals with a Windows Server farm and Managed Service Providers (MSPs) managing multiple clients will both find the CloudBerry MBS / Backblaze B2 solution easy and inexpensive to use.

Managed Service Providers

Many smaller organizations don’t have an IT department or the expertise to manage their computer systems and servers.  Instead they rely on a local Managed Service Provider (MSP) to do the IT heavy-lifting.  One of those IT functions is to manage the backup process for their clients, with the added responsibility of making sure the data from each client is kept separate and protected.


The CloudBerry Managed Backup Service allows a Managed Service Provider to set up and manage backups for multiple clients from one console. Backup plans are defined for each system of each client, so the MSP can ensure data security while optimizing the backups for each client. When data on a given Windows Server needs to be stored off-site, the MSP simply selects Backblaze B2 as the cloud storage destination for that system. The CloudBerry backup plan ensures the data to be backed up is first encrypted and then routed to Backblaze B2 for cloud storage, where it resides until it is needed. Data can be restored as soon as it is uploaded to B2 by using the CloudBerry MBS console.

IT Departments and LTO Tape Backup Systems

Besides keeping the boss’s computer running, a primary function of IT is to protect the organization’s data. This data resides in servers, workstations, desktops, and laptops throughout the organization. Tape backup systems (usually known as LTO systems) are a common method to achieve off-site backup of an organization’s data. While many IT folks lament that they have to use LTO systems, it is generally considered the least expensive way to store data off-site, even if it means taking hours or even days to recover data stored on tape. And in a recent study by Backblaze, 19.92% of LTO users reported having trouble recovering data from LTO tape.

Still, many organizations have years of data stored on LTO tapes, and although they may want to change to cloud storage they can’t just discard LTO.  The B2/CloudBerry MBS solution provides a viable way to transition away from LTO.

Replace LTO with B2

The process starts with defining backup plans in CloudBerry MBS for the servers and systems that are currently being backed up to LTO. For example, for each server you do a full backup once a week/month/year and an incremental backup each day. The transition starts with replacing LTO with Backblaze B2 as the destination for the daily incremental backups. With B2, these daily backups are off-site immediately – without having to remember to take the tape home! It also means the files from any given incremental backup can be recovered within a few minutes by using the CloudBerry MBS console. Even better, all the management can be done remotely; you don’t need to be standing next to the server (or the LTO system) to back up the data.

Over time, the CloudBerry backup plan can be modified so that all the organization’s data can be stored in Backblaze B2, and the LTO system can be retired.  Of course you may want to keep an LTO system or two to read old tapes, but there is no need to buy new LTO equipment and tapes and then spend hours a week mounting tapes, cleaning drive heads, and hoping you can read the tape.

Summary

If you have one or two Windows Servers that need backup, the CloudBerry Windows Server Backup solution will work fine. Otherwise, the CloudBerry MBS solution is probably a better fit from both a management and cost point of view. In either case, selecting Backblaze B2 as the cloud storage destination can dramatically reduce your off-site storage costs and make the total cost of ownership of the joint solution much less than you ever expected.

The post Managing the Backup of Multiple Windows Servers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Raspberry Pi at Camp Bestival

Post Syndicated from Helen Drury original https://www.raspberrypi.org/blog/raspberry-pi-at-camp-bestival/

Festival goers relax on the grass in front of huge silver letters: "LOVE CAMP BESTIVAL"

Camp Bestival is the family-oriented version of the more adult-focused Bestival, and attracts 30,000 parents and children each year. Everything has been designed with families in mind, including shows and activity tents, all set within the beautiful grounds of Lulworth Castle.
A huge crowd in front of Lulworth Castle at Camp Bestival. The sun is setting behind the battlements.

This year’s theme was Space. We’re pretty keen on space ourselves, and we’re not ones to shirk a party, so we figured: why not take along something fun and interesting for kids to do alongside watching Mr Tumble or the Clangers, and show them how to create their own space animations and design LED displays? Not to mention having welcoming chats with curious parents to answer the all-important question “So what is a Raspberry Pi?” while their kids are off programming in Scratch.

So, having loaded up every square inch of the camper van with equipment and swag, we set off to Lulworth. Naturally, as the event was space-themed, we took along our office friend Flat Tim for support. He was very excited, if a little overdressed.

A life-sized cardboard cut-out of British astronaut Tim Peake wearing a spacesuit, standing in the gangway of a camper van. Plastic beach spades hang beside him

Located in the very busy Science Tent every day across the long weekend, we offered young visitors the chance to try out Code Club’s Lost in Space and Space Junk animation programming activities – why not try out Lost in Space for yourself? Alongside this, we set up workstations with Raspberry Pis showcasing Astro Pi and the Sense HAT’s capabilities, from programming LEDs to simple Python activities sensing the environment. At one point we were joined by a six-year-old who wowed us all with her new programming skills!

Montage: a photo of a young girl with a flower garland in her hair, lost in concentration at a Raspberry Pi workstation; and a photo of the screen showing some of the code she is working on. She is making the Sense HAT display messages including, "I like doing sports" and "I like having hugs with Mummy."

Four children concentrate on activities at Raspberry Pi workstations, with a crowd of older siblings and parents around

Raspberry Pi staff and volunteers talk to families in the Science Tent

We visited our friends at the UK Space Agency in the Mission Control tent, and they kindly lent us one of their spacesuits to go with our Astro Pi activities. Dan certainly looked the part in it.

Tony from UK Space helps Raspberry Pi's Dan Grammatica don a spacesuit
Raspberry Pi's Dan Grammatica, wearing a spacesuit, and Dave Hazeldean

Evenings were spent experiencing the festival at night, from parades to live music, before falling into bed exhausted but happy!

A giant astronaut, glowing purple and blue, towers above the crowd after dark
An actor dressed as an exotic alien, with glowing fairy wings and an exoskeleton that incorporates stilts, walks among the crowd at dusk

No festival is complete without fun giveaways, such as our Code Club, Raspberry Pi and Astro Pi temporary tattoos. They were almost as popular as our activities:

Philip Colligan on Twitter

It’s all about #tattoos at @CampBestival – @Raspberry_Pi and @CodeClub activities in the Science Tent #CampBestival pic.twitter.com/wHPmpnyQ4l

The prize for best timing goes to this young person, who picked up the 1000th (and last!) Raspberry Pi/Code Club bag in the final half-hour before we went home!

A young girl smiles and holds up a red drawstring bag with a large white Raspberry Pi logo printed on it

To everyone who visited us and joined in with our digital making activities, thank you for stopping by! We hope you enjoyed visiting us, and that you feel inspired to try some more projects via our free learning resources.

Special thanks, too, to the rest of the Raspberry Pi Camp B crew – Carrie Anne, Daniel, Dave, Alex and Chris.

Finally, there’s one thing we couldn’t share with festival goers at Camp Bestival because it was too windy, but we did manage a quick photo, so we can share it with you now: flying the Raspberry Pi flag!

A white flag with the raspberry and green Raspberry Pi logo and the words "Raspberry Pi," flying in a stiff breeze against a cloudy sky

The post Raspberry Pi at Camp Bestival appeared first on Raspberry Pi.

Updated Whitepaper Available: AWS Best Practices for DDoS Resiliency

Post Syndicated from Andrew Kiggins original https://aws.amazon.com/blogs/security/updated-whitepaper-available-aws-best-practices-for-ddos-resiliency/

AWS is committed to providing you high availability, security, and resiliency in the face of bad actors on the Internet. As part of this commitment, AWS provides tools, best practices, and AWS services that you can use to build distributed denial of service (DDoS)–resilient applications.

We recently released the 2016 version of the AWS Best Practices for DDoS Resiliency Whitepaper, which can be helpful if you have public-facing endpoints that might attract unwanted DDoS activity. You can benefit from reading this whitepaper if you:

  • Are looking for prescriptive DDoS guidance.
  • Need guidance for building new apps resilient to DDoS attacks.
  • Need to verify whether your architecture is optimized for DDoS resiliency and makes the best use of services such as Amazon CloudFront and Elastic Load Balancing.

The updated whitepaper builds on the descriptions of various attack types, such as volumetric attacks and application layer attacks, and explains which best practices are most effective at managing them. We have added explanations about where services and features fit in to the strategy of DDoS mitigation and how they can be used to protect your applications. Also, the whitepaper’s “Summary of best practices” table provides a checklist to help you identify opportunities to improve your architecture by using the whitepaper’s prescriptive guidance.

If you have comments about this updated whitepaper, submit them in the “Comments” section below.

– Andrew

Powerful Bit-Flipping Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/powerful_bit-fl.html

New research: “Flip Feng Shui: Hammering a Needle in the Software Stack,” by Kaveh Razavi, Ben Gras, Erik Bosman, Bart Preneel, Cristiano Giuffrida, and Herbert Bos.

Abstract: We introduce Flip Feng Shui (FFS), a new exploitation vector which allows an attacker to induce bit flips over arbitrary physical memory in a fully controlled way. FFS relies on hardware bugs to induce bit flips over memory and on the ability to surgically control the physical memory layout to corrupt attacker-targeted data anywhere in the software stack. We show FFS is possible today with very few constraints on the target data, by implementing an instance using the Rowhammer bug and memory deduplication (an OS feature widely deployed in production). Memory deduplication allows an attacker to reverse-map any physical page into a virtual page she owns as long as the page’s contents are known. Rowhammer, in turn, allows an attacker to flip bits in controlled (initially unknown) locations in the target page.

We show FFS is extremely powerful: a malicious VM in a practical cloud setting can gain unauthorized access to a co-hosted victim VM running OpenSSH. Using FFS, we exemplify end-to-end attacks breaking OpenSSH public-key authentication, and forging GPG signatures from trusted keys, thereby compromising the Ubuntu/Debian update mechanism. We conclude by discussing mitigations and future directions for FFS attacks.
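
The way the two primitives compose can be illustrated with a toy model (this is purely illustrative Python, not an exploit; real FFS operates on physical DRAM rows and KSM-style page merging, and the page contents and class below are hypothetical):

```python
# Toy model of Flip Feng Shui: memory deduplication backs identical
# pages with one physical frame; a later bit flip in that frame then
# corrupts every mapping of it, including the victim's.

class ToyHost:
    def __init__(self):
        self.frames = []          # "physical" frames (bytearrays)
        self.dedup_index = {}     # page contents -> frame number

    def map_page(self, contents: bytes) -> int:
        """Back a guest page with a frame, merging identical pages."""
        if contents in self.dedup_index:        # deduplication hit:
            return self.dedup_index[contents]   # reuse the existing frame
        self.frames.append(bytearray(contents))
        self.dedup_index[contents] = len(self.frames) - 1
        return len(self.frames) - 1

    def rowhammer(self, frame: int, bit: int) -> None:
        """Model a Rowhammer-induced flip of one bit in a frame."""
        self.frames[frame][bit // 8] ^= 1 << (bit % 8)

host = ToyHost()
victim_page = b"authorized_keys: ssh-rsa AAAA...".ljust(64, b"\x00")
v = host.map_page(victim_page)     # victim VM maps its page

# The attacker knows the page contents, so it can "reverse-map" onto
# the same physical frame simply by writing an identical page...
a = host.map_page(victim_page)
assert a == v                      # ...and now shares the victim's frame.

# ...then flips a bit it could never write through normal channels:
host.rowhammer(a, bit=17)
print(bytes(host.frames[v])[:8])   # the victim's data is now corrupted
```

The point of the model is that the attacker never writes to the victim’s memory; deduplication plus a hardware-level flip does it for them.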

Applying the Business Source Licensing (BSL)

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2016/08/applying-business-source-licensing-bsl.html

I believe that Open Source is one of the best ways to develop software. However, as I have written in blogs before, the Open Source model presents challenges to creating a software company that has the needed resources to continually invest in product development and innovation.

One reason for this is a lack of understanding of the costs associated with developing and extending software. As one example of what I regard to be unrealistic user expectations, here is a statement from a large software company when I asked them to support MariaDB development with financial support:

“As you may remember, we’re a fairly traditional and conservative company. A donation from us would require feature work in exchange for the donation. Unfortunately, I cannot think of a feature that I would want developed that we would be willing to pay for this year.”

This thinking is flawed on many fronts — a new feature can take more than a year to develop! It also shows that the company recognized that features create value it would invest in, but was not willing to pay for features that had already been developed, and was not prepared to invest in keeping alive a product it depends upon. They also don’t trust the development team with the ability to independently define new features that would bring value. Without that investment, a technology company cannot fund ongoing research and development, thereby dooming its survival.

To be able to compete with closed source technology companies who have massive profit margins, one needs income.

Dual licensing on Free Software, as we applied it at MySQL, works only for a limited subset of products (something I have called ‘infrastructure software’) that customers need to combine with their own closed source software and distribute to their customers. Most software products are not like that. This is why David Axmark and I created the Business Source license (BSL), a license designed to harmonize producing Open Source software and running a successful software company.

The intent of BSL is to increase the overall freedom and innovation in the software industry, for customers, developers, users and vendors. Finally, I hope that BSL will pave the way for a new business model that sustains software development without relying primarily on support.

For those who are interested in the background, Linus Nyman, a doctoral student at the Hanken School of Economics in Finland, and I worked together on an academic article on the BSL.

Today, MariaDB Corporation is excited to introduce the beta release of MariaDB MaxScale 2.0, our database proxy, which is released under BSL. I am very happy to see MariaDB MaxScale being released under BSL, rather than under an Open Core or Closed Source license.  Developing software under BSL will provide more resources to enhance it for future releases, in similar ways as Dual Licensing did for MySQL. MariaDB Corporation will over time create more BSL products. Even with new products coming under BSL, MariaDB Server will continue to be licensed under GPL in perpetuity. Keep in mind that because MariaDB Server extends earlier MySQL GPL code it is forever legally bound by the original GPL license of MySQL.

In addition to putting MaxScale under BSL, we have also created a framework to make it easy for anyone else to license their software under BSL.

Here follows the copyright notice used in the MaxScale 2.0 source code:

/*
* Copyright (c) 2016 MariaDB Corporation Ab
*
* Use of this software is governed by the Business Source License
* included in the LICENSE.TXT file and at www.mariadb.com/bsl.
*
* Change Date: 2019-01-01
*
* On the date above, in accordance with the Business Source
* License, use of this software will be governed by version 2
* or later of the General Public License.
*/

Two out of three top characteristics of the BSL are already shown here: The Change Date and the Change License. Starting on 1 January 2019 (the Change Date), MaxScale 2.0 is governed by GPLv2 or later (the Change License).

The centrepiece of the LICENSE.TXT file itself is this text:

Use Limitation: Usage of the software is free when your application uses the Software with a total of less than three database server instances for production purposes.

This third top characteristic is in effect until the Change Date.

What this means is that the software can be distributed, used, modified, etc., for free, within the use limitation. Beyond it, a commercial relationship is required – which, in the case of MaxScale 2.0, is a MariaDB Enterprise Subscription, which permits the use of MaxScale with three or more database servers.

You can find the full license text for MaxScale at mariadb.com/bsl and a general BSL FAQ at mariadb.com/bsl-faq-adopting. Feel free to copy or refer to them for your own BSL software!

The key characteristics of BSL are as follows:

  • The source code of BSL software is available in full from day one.
  • Users of BSL software can modify, distribute and compile the source.
  • Code contributions are encouraged and accepted through the “new BSD” license.
  • The BSL is purposefully designed to avoid vendor lock-in. By vendor lock-in, I mean that users of BSL software do not depend on a single vendor for support, bug fixes, or enhancements to the BSL product.
  • The Change Date and Change License provide a time-delayed safety net for users, should the vendor stop developing the software.
  • Testing BSL software is always free of cost.
  • Production use of the software is free of cost within the use limitation.
  • Adoption of BSL software is encouraged with use limitations that provide ample freedom.
  • Monetisation of BSL software is driven by incremental sales in cases where the use limitation applies.

Whether BSL will be widely adopted remains to be seen. It’s certainly my desire that this new business model will inspire companies who develop Closed Source software or Open Core software to switch to BSL, which will ultimately result in more Open Source software in the community. With BSL, companies can realize a similar amount of revenue for the company, as they could with closed source or open core, while the free of cost usage in core production scenarios establishes a much larger user base to drive testing, innovation and adoption.

My Keynote at GUADEC 2016

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/08/16/guadec-2016.html

Last Friday, I gave the first keynote at GUADEC 2016. I was delighted by the invitation from the GNOME Foundation to deliver this talk, which I entitled Confessions of a command line geek: why I don’t use GNOME but everyone else should.

The Chaos Computer Club assisted the GUADEC organizers in recording the talks, so a great recording of my talk is available (and also the slides). Whether the talk itself is great — that’s for you to watch and judge, of course.

The focus of this talk is why the GNOME desktop is such a central
component for the future of software freedom. Too often, we assume that
the advent of tablets and other mobile computing platforms means the laptop
and desktop will disappear. And, maybe the desktop will disappear, but the
laptop is going nowhere. And we need a good interface that gives software
freedom to the people who use those laptops. GNOME is undoubtedly the best
system we have for that task.

There is competition. The competition is now, undeniably, Apple. Unlike Microsoft, who hitherto dominated desktops, Apple truly wants to make beautifully designed, carefully crafted products that people will not just live with, but actually love. It’s certainly possible to love something that harms you, and Apple is adept at creating products that not only refuse to give you software freedom, but go a step further, regularly inventing new ways to gain lock-down control and thwart modification by their customers.

GUADEC 2016 trip sponsored by the GNOME Foundation!

We have a great challenge before us, and my goal in the keynote was to
express that the GNOME developers are best poised to fight that battle and
that they should continue in earnest in their efforts, and to offer my help
— in whatever way they need it — to make it happen. And, I
offer this help even though I readily admit that I don’t need
GNOME for myself, but we as a community need it to advance
software freedom.

I hope you all enjoy the talk, and also check out Werner Koch’s keynote, We want more centralization, do we?, which was also about a very important issue. (There was also an LWN article about Werner’s keynote, if you prefer reading to watching.) And, finally, I thank the GNOME Foundation for covering my travel expenses for this trip.

AWS OpsWorks Endpoints Available in 11 Regions

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/aws-opsworks-endpoints-available-in-11-regions/

AWS OpsWorks, a service that helps you configure and operate applications of all shapes and sizes using Chef automation, has just added support for the Asia Pacific (Seoul) Region and launched public endpoints in Frankfurt, Ireland, N. California, Oregon, São Paulo, Singapore, Sydney, and Tokyo.

Previously, customers had to manage OpsWorks stacks for these regions using our N. Virginia endpoint. Using an OpsWorks endpoint in the same region as your stack reduces API latencies, improves instance response times, and limits impact from cross-region dependency failures.
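As an illustration, switching a client to a same-region endpoint is typically a one-line change. The hostname pattern below is an assumption based on AWS’s usual “service.region.amazonaws.com” scheme, not something stated in the post:

```python
# Illustrative sketch: derive a same-region OpsWorks endpoint instead of
# defaulting to the historical N. Virginia one. The hostname pattern is an
# assumption based on AWS's usual "<service>.<region>.amazonaws.com" scheme.
def opsworks_endpoint(region: str) -> str:
    return f"opsworks.{region}.amazonaws.com"

# A stack in Seoul should talk to the Seoul endpoint, not us-east-1:
print(opsworks_endpoint("ap-northeast-2"))
```

With an SDK such as boto3, the equivalent is simply creating the client with the stack’s own `region_name` rather than `us-east-1`.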

A full list of endpoints can be found in AWS Regions and Endpoints.


Now Organize Your AWS Resources by Using up to 50 Tags per Resource

Post Syndicated from Durgesh Nandan original https://blogs.aws.amazon.com/security/post/Tx3O5RCX34VOGY6/Now-Organize-Your-AWS-Resources-by-Using-up-to-50-Tags-per-Resource

Tagging AWS resources simplifies the way you organize and discover resources, allocate costs, and control resource access across services. Many of you have told us that as the number of applications, teams, and projects running on AWS increases, you need more than 10 tags per resource. Based on this feedback, we now support up to 50 tags per resource. You do not need to take additional action—you can begin applying as many as 50 tags per resource today.

With tags, you can use resource groups to filter resources across services and create dashboards to help you manage resources. Tag Editor enables you to tag one or more resources at a time in the AWS Management Console. To help you categorize, track, and explore your AWS costs, you can use Cost Explorer and cost allocation tags to obtain detailed billing reports. Additionally, some AWS services enable you to control access to resources based on tags by using IAM policies.
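As a minimal sketch of the new limit, here is a helper that converts key/value pairs into the Tags list shape the EC2 CreateTags API expects while enforcing the 50-tag ceiling. The function and its name are illustrative, not an AWS API:

```python
# Hypothetical helper: build a CreateTags-style Tags list, guarding
# against the new 50-tags-per-resource limit described above.
MAX_TAGS_PER_RESOURCE = 50

def build_tags(pairs: dict) -> list:
    """Convert {key: value} pairs into the [{"Key": ..., "Value": ...}]
    list shape used by the EC2 CreateTags API, enforcing the tag limit."""
    if len(pairs) > MAX_TAGS_PER_RESOURCE:
        raise ValueError(
            f"{len(pairs)} tags exceed the {MAX_TAGS_PER_RESOURCE}-tag limit"
        )
    return [{"Key": k, "Value": v} for k, v in pairs.items()]

# Applying the tags then takes a single SDK call (requires AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_tags(Resources=["i-0123456789abcdef0"],
#                 Tags=build_tags({"team": "payments", "env": "prod"}))
```

The instance ID and tag names above are placeholders; the point is only that all 50 tags can go out in one request.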

As of August 15, 2016, the following services and their corresponding resource types support 50 tags. (You can also download the complete list as a Word document or text file.)

If you have comments about this post, submit them in the “Comments” section below.

– Durgesh

Preliminary systemd.conf 2016 Schedule

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/preliminary-systemdconf-2016-now-available.html

A Preliminary systemd.conf 2016 Schedule is Now Available!

We have just published a first, preliminary version of the
systemd.conf 2016 schedule. A few white slots remain in the schedule
because we’re still waiting on confirmation from a handful of
presenters; the missing talks will be added as soon as they are
confirmed.

The schedule consists of 5 workshops by high-profile speakers during
the workshop day, 22 exciting talks during the main conference days,
followed by one full day of hackfests.

Please sign up for the conference soon! Only a limited number of
tickets are available, so make sure to secure yours quickly before
they run out. (Last year we sold out.)


National interest is exploitation, not disclosure

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/national-interest-is-exploitation-not.html

Most of us agree that more accountability/transparency is needed in how the government/NSA/FBI exploits 0days. However, the EFF’s positions on the topic are often absurd, which prevent our voices from being heard.

One of the EFF’s long-time planks is that the government should be disclosing/fixing 0days rather than exploiting them (through the NSA or FBI). As they phrase it in a recent blog post:

as described by White House Cybersecurity Coordinator, Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.

The EFF isn’t even paying attention to what the government said. The majority of vulnerabilities are useless to the NSA/FBI. Even powerful bugs like Heartbleed or Shellshock are useless, because they can’t easily be weaponized. They can’t easily be put into a point-and-shoot tool and given to cyberwarriors.

Thus, it’s a tautology saying “majority of cases vulns should be disclosed”. It has no bearing on the minority of bugs the NSA is interested in — the cases where we want more transparency and accountability.

This minority of bugs is not discovered accidentally. Accidentally found bugs have little value to the NSA, so the NSA spends a considerable amount of money hunting down bugs that would be of use, and in many cases buying useful vulns from 0day sellers. The EFF pretends the political issue is about 0days the NSA happens to come across accidentally — the real political issue is about the ones the NSA spent a lot of money on.

For these bugs, the minority of bugs the NSA sees, we need to ask whether it’s in the national interest to exploit them, or to disclose/fix them. And the answer to this question is clearly in favor of exploitation, not fixing. It’s basic math.

An end-to-end Apple iOS 0day (with sandbox escape and persistence) is worth around $1 million, according to recent bounties from Zerodium and Exodus Intel.

There are two competing national interests with such a bug. The first is whether such a bug should be purchased and used against terrorist iPhones in order to disrupt ISIS. The second is whether such a bug should be purchased and disclosed/fixed, to protect American citizens using iPhones.

Well, for one thing, the threat is asymmetric. As Snowden showed, the NSA has widespread control over network infrastructure, and can therefore insert exploits as part of a man-in-the-middle attack. That makes any browser-bugs, such as the iOS bug above, much more valuable to the NSA. No other intelligence organization, no hacker group, has that level of control over networks, especially within the United States. Non-NSA actors have to instead rely upon the much less reliable “watering hole” and “phishing” methods to hack targets. Thus, this makes the bug of extreme value for exploitation by the NSA, but of little value in fixing to protect Americans.

The NSA buys one bug per version of iOS; it only needs one to hack into terrorist phones. But there are many more bugs. Even if it were in the national interest to buy iOS 0days in order to fix them, buying just one would have little impact, since many more bugs still lurk waiting to be found. The government would have to buy many bugs to make a significant dent in the risk.
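The asymmetry in the paragraph above can be made concrete with a toy back-of-envelope model. All of the numbers below are assumptions for illustration, not figures from the post:

```python
# Toy model: if N similar exploitable bugs exist in a product and the
# government buys and fixes k of them, an attacker who independently found
# one random bug is blocked with probability only k/N.
def fraction_blocked(bugs_total: int, bugs_fixed: int) -> float:
    return bugs_fixed / bugs_total

COST_PER_BUG = 1_000_000  # the ~$1M bounty figure cited above

# Assume (arbitrarily) 20 exploitable bugs per iOS version: fixing one
# blocks 5% of such attackers for $1M; blocking half would cost $10M --
# for this one product version alone.
print(fraction_blocked(20, 1))   # fraction blocked by fixing one bug
print(10 * COST_PER_BUG)         # cost of fixing half of the assumed 20
```

The exploitation side of the ledger, by contrast, needs only the single bug it bought, which is the core of the argument.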

And why is the government helping Apple at the expense of competitors anyway? Why is it securing iOS with its bug-bounty program and not Android? And not Windows? And not Adobe PDF? And not the million other products people use?

The point is that no sane person can argue that it’s worth it for the government to spend $1 million per iOS 0day in order to disclose/fix. If it were in the national interest, we’d already have federal bug bounties of that order, for all sorts of products. Long before the EFF argues that it’s in the national interest that purchased bugs should be disclosed rather than exploited, the EFF needs to first show that it’s in the national interest to have a federal bug bounty program at all.

Conversely, it’s insane to argue it’s not worth $1 million to hack into terrorist iPhones. Assuming the rumors are true, the NSA has been incredibly effective at disrupting terrorist networks, reducing the collateral damage of drone strikes and such. Seriously, I know lots of people in government, and they have stories. Even if you discount the value of taking out terrorists, 0days have been hugely effective at preventing “collateral damage” — i.e. the deaths of innocents.

The NSA/DoD/FBI buying and using 0days is here to stay. Nothing the EFF does or says will ever change that. Given this constant, the only question is how We The People get more visibility into what’s going on, how our representatives get more oversight, and how the courts get clearer and more consistent rules. I’m the first to stand up and express my worry that the NSA might unleash a worm that takes down the Internet, or that the FBI secretly hacks into my home devices. Policy makers need to address these issues, not the nonsense issues promoted by the EFF.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Post Syndicated from ris original http://lwn.net/Articles/697270/rss

Android Police takes a look at a new OS from Google. “Enter “Fuchsia.”
Google’s own description for it on the project’s GitHub page is simply,
“Pink + Purple == Fuchsia (a new Operating System)”. Not very revealing,
is it? When you begin to dig deeper into Fuchsia’s documentation,
everything starts to make a little more sense.

First, there’s the Magenta kernel, based on the ‘LittleKernel’ project.
Just like with Linux and Android, the Magenta kernel powers the larger
Fuchsia operating system. Magenta is being designed as a competitor to
commercial embedded OSes, such as FreeRTOS or ThreadX.” Fuchsia also
uses the Flutter user interface, the Dart programming language, and
Escher, “a renderer that supports light diffusion, soft shadows, and
other visual effects, with OpenGL or Vulkan under the hood”.
