Tag Archives: encryption

For your eyes only (or Adding better encryption to MariaDB)

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2014/05/for-your-eyes-only-or-adding-better.html

With MariaDB and MySQL we have always taken security seriously. In MariaDB 10.0 we added roles to make it easier to administer many users. MariaDB and MySQL also have many different encryption functions, but what has been neglected in the past is making encryption easy to use. This is now about to change. I recently had a meeting with Elmar Eperiesi-Beck from eperi about simplifying the usage of encryption. We agreed to start a close collaboration around encryption for MariaDB, with an agenda to deliver something very secure and easy to use soon.
The things we are initially focusing on are:
  • Column-level encryption. This will be done at the field level, invisible to the storage engine.
  • Block-level encryption for certain storage engines. Initially we will target InnoDB and XtraDB.
MariaDB will initially support storing the security keys on a remote file system, accessed only at startup, and will later also support using a daemon for key management.
The above will make your encrypted data in MariaDB secure against:
  • Database users who have user access to the database.
  • Anyone who would attempt to steal the hard disk with the database.
By using the daemon approach, a MariaDB installation will even be secure against database administrators, as they will not have any way to access the key data. eperi has 11 years of experience with encryption and I am very happy to see them engage with MariaDB to provide better security to MariaDB users!
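As a rough illustration of why the current function-based approach is cumbersome, here is a minimal Python sketch (mine, not MariaDB's) of encrypting a column value today with the existing AES_ENCRYPT()/AES_DECRYPT() SQL functions, assuming the mysql-connector-python package and a hypothetical customers table; the point is that the application must pass the key in every statement, which is exactly what transparent column-level encryption would remove:

import mysql.connector  # assumption: the mysql-connector-python package is installed

conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="demo")
cur = conn.cursor()

# Encrypting on the way in: the application must ship the key with every statement.
cur.execute(
    "INSERT INTO customers (name, ssn_enc) VALUES (%s, AES_ENCRYPT(%s, %s))",
    ("Alice", "123-45-6789", "column-key"),
)
conn.commit()

# ...and decrypting on the way out, again with the key embedded in the query.
cur.execute(
    "SELECT name, CAST(AES_DECRYPT(ssn_enc, %s) AS CHAR) FROM customers",
    ("column-key",),
)
print(cur.fetchall())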

DKIM Troubleshooting Series: Authentication Considerations

Post Syndicated from Adrian Hamciuc original http://sesblog.amazon.com/post/Tx2NCXUV8X3Z0WS/DKIM-Troubleshooting-Series-Authentication-Considerations

Hi, and welcome to another entry in the Amazon SES DKIM troubleshooting series. So far we have focused on technical issues, but now it’s time to take a step back and look at the bigger picture. Exactly why did we go to all this trouble in setting up DKIM for our domain?
My emails are signed and the signature validates. Does this mean I’m safe from spoofing attacks?
Even if all our email is DKIM signed and validated by all ISPs, this is no time to rest. The world is full of ethically dubious people who want to impersonate us and steal our customers, and DKIM is only step one in protecting ourselves.
So what exactly is DKIM doing to help us with this? The way this standard ensures the integrity of our emails is to calculate a hash of the message’s body and some of its headers, and then to digitally sign the hash. As long as the private key is kept secure, and the public key is accessible through DNS, the DKIM signature gives ISPs confidence that the domain that signs the message acknowledges its content through the signature. Let’s have another look at the DKIM signature to see exactly what we’re talking about:

DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/simple;
s=xtk53kxcy4p3t6ztbrffs6d54rsrrhh6; d=ses-example.com;
t=1366720445;
h=From:To:Subject:MIME-Version:Content-Type:Content-Transfer-Encoding:Date:Message-ID;
bh=lcj/Sl5qKl6K6zwFUwb7Flgnngl892pW574kmS1hrS0=;
b=nhVMQLmSh7/DM5PW7xPV4K/PN4iVY0a5OF4YYk2L7jgUq9hHQlckopxe82TaAr64
eVTcBhHHj9Bwtzkmuk88g4G5UUN8J+AAsd/JUNGoZOBS1OofSkuAQ6cGfRGanF68Ag7
nmmEjEi+JL5JQh//u+EKTH4TVb4zdEWlBuMlrdTg=
The "d" tag represents the domain that signs the email. In this case that would be our own domain, ses-example.com. This domain takes responsibility for the email, guaranteeing its integrity through the signature. The "h" tag contains a list of all of the headers of this email that are signed (and thus have their integrity guaranteed) by the signing domain. The "bh" tag is the result of a hash function applied to the email’s body (very useful to ensure that the content itself hasn’t been tampered with). Finally, the "b" tag is the actual signature itself, applied to the body hash and headers. If that checks out, then both the message body and all of the headers included in the "h" tag are considered to be authenticated by the signing domain.
That’s all very nice, but how does it stop an attacker?
The hashing and key encryption algorithms we use are well known, so surely anyone can generate their own key pair and calculate the signature themselves, right? In fact… this is actually true. Anyone can sign any email with a DKIM signature; the problem is getting ISPs to trust that the signature is ours and not the attacker’s. The key here is in the "s" and "d" tags of the signature. The "d" tag is our domain, and the "s" (or "selector") tag indicates which of our domain’s signing keys has been used for this particular email. It’s these two tags that help build the location of the public key in DNS and form the obstacle in the attacker’s path. In this signature’s case, any ISP that wants to validate our email will try to retrieve the public key from the record xtk53kxcy4p3t6ztbrffs6d54rsrrhh6._domainkey.ses-example.com (<selector>._domainkey.<domain>). An attacker can calculate hashes and signatures all they want, but they won’t have permission to publish any key they control into our DNS, so they will never be able to compute a signature that has our domain as the "d" tag (and thus has its integrity guaranteed by us). DNS spoofing, it turns out, is not so easy to perform (although it has happened)! With only the public key visible in DNS, it is not computationally feasible for any attacker to deduce the private part. Just in case, SES regularly rotates the keys, to thwart any such attempt.
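As a small illustration of that lookup, here is a sketch of the DNS query a validator performs, written in Python with the dnspython package (an assumption on my part); note that ses-example.com is only the placeholder domain from the signature above, so this exact name will not resolve in real DNS:

import dns.resolver  # assumption: dnspython is installed (pip install dnspython)

selector = "xtk53kxcy4p3t6ztbrffs6d54rsrrhh6"
domain = "ses-example.com"
record = f"{selector}._domainkey.{domain}"  # <selector>._domainkey.<domain>

# Fetch the TXT record that holds the signer's public key.
for answer in dns.resolver.resolve(record, "TXT"):
    print(answer.to_text())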
Setting up DMARC can also signal ISPs to drop all emails that aren’t authenticated (via DKIM or SPF). If an attacker wants to impersonate our domain they can either put in a broken signature or no signature at all. In both cases, DMARC signals ISPs to quarantine (put in the Spam folder) or drop those emails. That is definitely a good step forward in dealing with phishing spam!
SES also offers other forms of authentication such as SPF, which are highly recommended and will further improve our security.
Next Steps
In the next blog entry, we will have a look at another email-related concept that is indirectly influenced by DKIM: deliverability.

Using IIS SMTP on Windows 2008/2012 with Amazon SES

Post Syndicated from Rohan Deshpande original http://sesblog.amazon.com/post/TxAXMJU3AAN5JA/Using-IIS-SMTP-on-Windows-2008-2012-with-Amazon-SES

A natural extension for customers using Windows Server 2012 on AWS is to use Amazon SES for sending email. This post shows you how to configure the IIS SMTP service that is included with Windows to send email through Amazon SES. You can use the same configuration on Windows Server 2008 and Windows Server 2008 R2.
Set Up Windows Server 2012
From the Amazon EC2 management console, launch a new Microsoft Windows Server 2012 Base EC2 instance.
Microsoft Windows Server 2012 AWS instance
Connect to the instance and log into it using Remote Desktop by following the instructions in Getting Started with Amazon EC2 Windows Instances. It is highly recommended that you change your password after you first log in. Launch the Server Manager Dashboard and install the Web Server role. Make sure you install the IIS 6 Management Compatibility tools.
Web Server role for Windows
Next, install the SMTP Server feature.
SMTP Server feature for Windows
We have installed the necessary Windows components. It is time to configure the SMTP service.
Configure IIS SMTP Service
Go back to the Server Manager Dashboard. From the Tools menu, launch the Internet Information Services (IIS) 6.0 Manager.
IIS 6 manager
Right-click SMTP Virtual Server #1 and select Properties.
SMTP Server properties
On the Access tab, click Relay… under Relay Restrictions.
Setup Relaying
For the purpose of this post, we will assume that email is generated on this server. If the application that generates the email runs on a separate server, you need to grant relaying access for that server in IIS SMTP.
Click Add… and then enter 127.0.0.1 for the address.
Grant localhost relaying permissions
We have now granted access for this server to relay email to Amazon SES through the IIS SMTP service.
Relaying permitted for localhost
Now switch to the Delivery tab. Your server must send email to Amazon SES using an authenticated encrypted connection. Click Outbound Security.
Delivery properties
Pick Basic Authentication. Enter your SES SMTP username and SES SMTP password on this screen. You can obtain these credentials from the Amazon SES SMTP console. For more information, see the Developer Guide. Note that your SMTP credentials are different from your AWS credentials. Also, ensure that TLS encryption is checked.
Outbound security configuration
On the Outbound Connections dialog, ensure that the port is 25 or 587. Click Advanced… and enter email-smtp.us-east-1.amazonaws.com for the Smart host name.
Outbound host
You are finished with the configuration. Right-click SMTP Virtual Server #1 again, and then restart the service to pick up the new configuration. Send an email through this SMTP server. You can examine the message headers to confirm that it was delivered through Amazon SES.
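One simple way to run that test is with a short Python script executed on the server itself, relaying through the IIS SMTP service we just configured; the addresses below are placeholders, and remember that Amazon SES requires a verified "From" address (and a verified recipient while your account is in the sandbox):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"      # placeholder; must be verified in SES
msg["To"] = "recipient@example.com"     # placeholder
msg["Subject"] = "IIS SMTP relay test via Amazon SES"
msg.set_content("If you received this, the relay through email-smtp.us-east-1.amazonaws.com works.")

# Submit to the local IIS SMTP service, which relays to the SES smart host.
with smtplib.SMTP("127.0.0.1", 25) as smtp:
    smtp.send_message(msg)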
Final Thoughts
You have now configured the IIS SMTP service on Windows Server 2012 to send email using Amazon SES. If you have comments or feedback about this post or about Amazon SES, please post them in the Amazon SES forum. Happy sending with Amazon SES!

BP Loses Personal Data

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/SR0ZsOW6_t0/bp-loses-personal-data.html

The AP and other news sources are reporting that BP lost a laptop containing the personal information of 13,000 people who applied for compensation for damages. The laptop was unencrypted, but was password protected. BP has sent notification letters to those affected. This is just another reminder that laptop encryption makes life easier…and may even cost less than notification letters!

Mac OSX 10.7 to include full disk encryption

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/Jy_Sl5uXF2g/mac-osx-107-to-include-full-disk.html

Apple’s recent developer preview announcement for 10.7 notes that it will include: “the all new FileVault, that provides high performance full disk encryption for local and external drives, and the ability to wipe data from your Mac instantaneously”. This means that both Windows (BitLocker) and Mac OS X (FileVault) will have free, OS-integrated full disk encryption.

One gpg --gen-key per Decade

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2008/12/09/gpg-gen-key-decade.html

Today is an interesting anniversary (of sorts) for my cryptographic
infrastructure. Nine years ago today, I generated the 1024-bit DSA key,
DB41B387, that has been my GPG key every day since then. I remember
distinctly that on the 350 MHz machine I used at the time, it took quite
a while to generate, even though I made sure the entropy pool remained
nice and full by pounding on the keyboard.

The horribleness of the
recent Debian vulnerability
meant that I have spent much time
this year pondering the pedigree of my personal cryptographic
infrastructure. Of course, my key was far too old to have been
generated on a Debian-based system that had that particular
vulnerability. However, the issue that really troubled me this
past summer was this:

Some DSA keys may be compromised by only their use. A strong
key (i.e., generated with a ‘good’ OpenSSL) but used locally
on a machine with a ‘bad’ OpenSSL must be considered to be
compromised. This is due to an ‘attack’ on DSA that allows the
secret key to be found if the nonce used in the signature is reused or
known.
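
For readers wondering why a reused or known nonce is so devastating, the
standard argument (in the usual DSA notation, nothing specific to the
advisory quoted above) goes like this: a signature on a message with hash
$h = H(m)$ is
$$ r = (g^k \bmod p) \bmod q, \qquad s = k^{-1}(h + xr) \bmod q, $$
where $x$ is the private key and $k$ the per-signature nonce. If the same
$k$ is used for two messages (both signatures then share the same $r$),
$$ s_1 - s_2 \equiv k^{-1}(h_1 - h_2) \pmod{q}
\quad\Longrightarrow\quad k \equiv (h_1 - h_2)(s_1 - s_2)^{-1} \pmod{q}, $$
and once $k$ is known the private key falls out immediately:
$$ x \equiv r^{-1}(s_1 k - h_1) \pmod{q}. $$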

Not being particularly hard core on cryptographic knowledge — most of my expertise comes from only one class I took 11 years ago on
Encryption, Compression, and Secure Hashing in graduate school —
I found this alarming and tried my best to do some ancillary reading.
It seems that DSA keys, in many ways, are less than optimal. It seems
(to my mostly uneducated eye) in skimming academic papers that DSA keys
are tougher to deploy right and keep secure, which leads to these sorts
of possible problems.

I’ve resolved to switch entirely to RSA keys. The great thing about
RSA is its simplicity and ease of understanding. I grok factoring and
understand better the complexity situation of the factoring problem
(this time, from the two graduate courses I took on Complexity
Theory, so my comfort is more solid :). I also find it intriguing that
a child can learn how to factor in grade school, yet we can’t teach a
computer to do it efficiently. (By contrast, I didn’t learn the
discrete logarithm problem until my Freshman year of college, and I
still have to look up the details to remind myself.) So, the
“simplicity brings clarity” idea hints that RSA is a better
choice.

Fact is, there was only one reason why I revoked my ancient RSA
keys and generated DSA ones in the 1990s. The RSA patent and the strict
licensing of that patent by RSA Data Security, Inc. made it impossible
to implement RSA in Free Software back then. So, when I switched from
proprietary PGP to GPG, my keys wouldn’t import. Indeed, that one RSA
patent alone set back the entire area of Free Software cryptography at least ten years.

So, when I decided this evening that I’d need to generate a new key and
begin promulgating it at key-signing parties sometime before DB41B387
turns ten, I realized I actually have the freedom to choose my
encryption algorithm now! Sadly, it took almost these entire nine years
to get there. Our community did not only have to wait out this
unassailable patent. (RSA is among the most novel and non-obvious ideas
that most computer professionals will ever see in their lives.) Once
the RSA patent finally expired [0], we had to then slowly but
surely implement and deploy it in cryptographic programs, from
scratch.

I’m still glad that we’re free of the RSA patent, but I fear among the
mountain of “software patents” granted each year, that the
“new RSA” — a perfectly valid, non-obvious and novel
patent that reads on software and fits both the industry’s and patent
examiner’s definition of “high quality” — is waiting
to be discovered and used as a weapon to halt Free Software again. When
I finally type gpg --gen-key (now with
--expert mode!) for the first time in nine years, I hope
I’ll only experience the gladness of being able to generate an RSA key,
and succeed in ignoring the fact that RMS’
old essay about this issue remains a cautionary tale
to this very
day. Software patents are a serious long-term threat and must be
eradicated entirely for the sake of software freedom. The biggest threat among them will always be the “valid”, “high quality”
software patents, not the invalid, poor quality ones.

[0] Technically speaking,
RSA didn’t need to expire. In a seemingly bizarre
move, RSA Data Security, Inc. granted a Free license to the
patent a few weeks before the actual expiration date. To
this day, I believe the same theory I espoused at the time:
their primary goal in doing this was merely to ruin all the
“RSA is Free” parties that had been planned.

Stop Obsessing and Just Do It: VoIP Encryption Is Easier than You Think

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2008/06/20/voip-encryption-easy.html

Ian Sullivan showed me
an article
that he read about eavesdropping on Internet telephony calls. I’m
baffled by the obsession with this issue on two fronts. First, I am amazed
that people want to hand their phone calls over to yet another proprietary
vendor (aka Skype) that uses unpublished, undocumented, non-standard
protocols and respects your privacy even less than the traditional
PSTN vendors. Second, I don’t understand why cryptography experts
believe we need to develop complicated new technology to solve this
problem in the medium term.

At SFLC, I set up the telephony system as VoIP with encryption on
every possible leg. While SFLC sometimes uses Skype, I don’t, of course, because it is (a)
proprietary software, (b) based on an undocumented protocol, and (c)
controlled by a company that has less respect for users’ privacy than
the PSTN companies themselves. Indeed, security was actually last on
our list for reasons to reject Skype, because we already had a simple
solution for encrypting our telephony traffic: All calls are made
through a VPN.

Specifically, at SFLC, I set up a system whereby all users have an OpenVPN connection back to the
home office. From there, they have access to register a SIP client to
an internal Asterisk server living inside the VPN network.
Using that SIP phone, they could call any SFLC employee, fully encrypted. That call
continues either on the internal secured network, or back out over the
same VPN to the other SIP client. Users can also dial out from there to any
PSTN DID.

Of course, when calling the PSTN, the encryption ends at SFLC’s office, but that’s the PSTN’s fault, not ours. No technological solution — save using a modem to turn that traffic digital — can easily solve that. However,
with minimal effort, and using existing encryption subsystems, we have
end-to-end encryption for all employee-to-employee calls.

And it could go even further with a day’s effort of work! I have a
pretty simple idea on how to have an encrypted call to anyone
who happens to have a SIP client and an OpenVPN client. My plan is to
make a public OpenVPN server that accepts connection from any
host at all, that would then allow encrypted “phone the
office” calls to any SFLC phone with any SIP client anywhere on
the Internet. In this way, anyone wishing end-to-end phone encryption
to the SFLC need only connect to that publicly accessible OpenVPN and
dial our extensions with their SIP client over that line. This solution
even has the added bonus that it avoids the common firewall and NAT
related SIP problems, since all traffic gets tunneled through the
OpenVPN: if OpenVPN (which is, unlike SIP, a single-port UDP/IP protocol)
works, SIP automatically does!

The main criticism of this technique regards the silliness of two
employees at a conference in San Francisco bouncing all the way through
our NYC offices just to make a call to each other. While the Bandwidth
Wasting Police might show up at my door someday, I don’t actually find
this to be a serious problem. The last mile is always the problem in
Internet telephony, so a call that goes mostly across a single set of
last mile infrastructure in a particular municipality is no worse nor
better than one that takes a long haul round trip. Very occasionally,
there is a half second of delay when you have a few VPN-based users on a
conference call together, but that has a nice social side effect of
stopping people from trying to interrupt each other.

Finally, the article linked above talks about the issue of variable bit
rate compression changing packet size such that even encrypted packets
yield possible speech information, since some sounds need larger packets
than others. This problem is solved simply for us with two systems: (a)
we use µ-law, a very old, constant bit rate codec, and (b) a tiny bit of entropy
is added to our packets by default, because the encryption is occurring
for all traffic across the VPN connection, not just the phone
call itself. Remember: all the traffic is going together across the one
OpenVPN UDP port, so an eavesdropper would need to detangle the VoIP
traffic from everything else. Indeed, I could easily make (b) even
stronger by simply having the SIP client open another connection back to
the asterisk host and exchange payloads generated
from /dev/random back and forth while the phone call is
going on.
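
To make that last idea concrete, here is a purely illustrative Python sketch (not SFLC's actual setup) of a cover-traffic generator: it pushes random-sized, random-content UDP datagrams toward the Asterisk host inside the VPN so an eavesdropper has a harder time isolating the VoIP packets; the address and port are hypothetical, and the receiving end would simply discard the data.

import os
import random
import socket
import time

ASTERISK_HOST = "10.8.0.1"  # hypothetical address of the Asterisk box inside the VPN
CHAFF_PORT = 40000          # hypothetical UDP port whose traffic is simply discarded

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    # Roughly the size range of typical RTP packets, filled with random bytes.
    payload = os.urandom(random.randint(60, 600))
    sock.sendto(payload, (ASTERISK_HOST, CHAFF_PORT))
    time.sleep(random.uniform(0.01, 0.05))  # jittered send interval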

This is really one of those cases where the simpler the solution, the
more secure it is. Trying to focus on “encryption of VoIP and VoIP only” is
what leads us to the kinds of vulnerabilities described in that article.
VoIP isn’t like email, where you always need an encryption-unaware
delivery mechanism between Alice and Bob. I
believe I’ve described a simple mechanism that can allow anyone with an
Asterisk box, an OpenVPN server, and an Internet connection to publish to the world easy instructions for phoning them securely with merely a SIP client plus an OpenVPN client. Why don’t
we just take the easy and more secure route and do our VoIP this
way?

stet and AGPLv3

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2007/11/21/stet-and-agplv3.html

Many people don’t realize that the GPLv3 process actually began long
before the November 2005 announcement. For me and a few others, the GPLv3
process started much earlier. Also, in my view, it didn’t actually end
until this week, when the FSF released the AGPLv3. Today, I’m particularly
proud that stet was the first software released covered by the terms of
that license.

The GPLv3 process focused on the idea of community, and a community is
built from bringing together many individual experiences. I am grateful
for all my personal experiences throughout this process. Indeed, I
would guess that other GPL fans like myself remember, as I do, the first
time they heard the phrase “GPLv3”. For me, it was a bit
early — on Tuesday 8 January 2002 in a conference room at MIT. On
that day, Richard Stallman, Eben Moglen and I sat down to have an
all-day meeting that included discussions regarding updating GPL. A key
issue that we sought to address was (in those days) called the
“Application Service Provider (ASP) problem” — now
called “Software as a Service (SaaS)”.

A few days later, on the telephone with Moglen [2] one morning, as I stood in my
kitchen making oatmeal, we discussed this problem. I pointed out the
oft-forgotten section 2(c) of the GPL [version 2]. I argued that contrary
to popular belief, it does have restrictions on some minor
modifications. Namely, you have to maintain those print statements for
copyright and warranty disclaimer information. It’s reasonable, in other
words, to restrict some minor modifications to defend freedom.

We also talked about that old Computer Science problem of having a
program print its own source code. I proposed that maybe we needed a
section 2(d) that required that if a program prints its own source to
the user, that you can’t remove that feature, and that the feature must
always print the complete and corresponding source.
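
That old problem is the classic quine exercise; purely as an illustration of the idea (and not anything from those discussions), a minimal Python version looks like this:

# The two lines below, taken on their own, print exactly their own source.
s = 's = %r\nprint(s %% s)'
print(s % s)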

Within two months, Affero
GPLv1 was published
— an authorized fork of the GPL to test
the idea. From then until AGPLv3, that “Affero clause”
has had many changes, iterations and improvements, and I’m grateful
for all the excellent feedback, input and improvements that have gone
into it. The
result, the
Affero GPLv3 (AGPLv3) released on Monday
, is an excellent step
forward for software freedom licensing. While the community process
indicated that the preference was for the Affero clause to be part of
a separate license, I’m nevertheless elated that the clause continues
to live on and be part of the licensing infrastructure defending
software freedom.

Other than coining the Affero clause, my other notable personal
contribution to the GPLv3 was management of a software development
project to create the online public commenting system. To do the
programming, we contracted with Orion Montoya, who has extensive
experience doing semantic markup of source texts from an academic
perspective. Orion gave me my first introduction to the whole
“Web 2.0” thing, and I was amazed how useful the result was;
it helped the leaders of the process easily grok the public response.
For example, the intensity highlighting — which shows the hot
spots in the text that received the most comments — gives a very
quick picture of sections that are really of concern to the public. In
reviewing the drafts today, I was reminded that the big red area in
section 1 about “encryption and authorization codes”
is substantially changed and less intensely highlighted by draft 4. That quick-look
gives a clear picture of how the community process operated to get a
better license for everyone.

Orion, a Classics scholar as an undergrad, named the
software stet for its original Latin definition: “let it
stand as it is”. It was his hope that stet (the software) would
help along the GPLv3 process so that our whole community, after filing
comments on each successive draft, could look at the final draft and
simply say: Stet!

Stet has a special place in software history, I believe, even if it’s
just a purely geeky one. It is the first software system in history to
be meta-licensed. Namely, it was software whose output was its own
license. It’s with that exciting hacker concept that I put up today
a Trac instance
for stet, licensed under the terms of the AGPLv3 [which is now on
Gitorious] [1].

Stet is by no means ready for drop-in production. Like most software
projects, we didn’t estimate perfectly how much work would be needed.
We got lazy about organization early on, which means it still requires a
by-hand install, and new texts must be carefully marked up by hand.
We’ve moved on to other projects, but hopefully SFLC will host the Trac
instance indefinitely so that other developers can make it better.
That’s what copylefted FOSS is all about — even when it’s
SaaS.

[1] Actually, it’s
under AGPLv3 plus an exception to allow for combining with the
GPLv2-only Request Tracker, with which parts of stet combine.

[2] Update 2016-01-06: After writing this blog post, I found
evidence in my email archives from early 2002, wherein Henry Poole (who
originally suggested the need for Affero GPL to FSF), began cc’ing me anew
on an existing thread. In that thread, Poole quoted text from Moglen
proposing the original AGPLv1 idea to Poole. Moglen’s quoted text in
Poole’s email proposed the idea as if it were solely Moglen’s own. Based
on the timeline of the emails I have, Moglen seems to have written to Poole
within 36-48 hours of my original formulation of the idea.

While I do not accuse Moglen of plagiarism, I believe he does at least
misremember my idea as his own, which is particularly surprising, as Moglen
(at that time, in 2002) seemed unfamiliar with the Computer Science concept
of a quine; I had to explain that concept as part of my presentation of my
idea. Furthermore, Moglen and I discussed this matter in a personal
conversation in 2007 (around the time I made this blog post originally) and
Moglen said to me: “you certainly should take credit for the Affero
GPL”. Thus, I thought the matter was thus fully settled back in
2007, and thus Moglen’s post-2007 claims of credit that write me out of
Affero GPL’s history are simply baffling. To clear up the confusion his
ongoing claims create, I added this footnote to communicate unequivocally
that my memory of that phone call is solid, because it was the first time I
ever came up with a particularly interesting licensing idea, so the memory
became extremely precious to me immediately. I am therefore completely
sure I was the first to propose the original idea of mandating preservation
of a quine-like feature in AGPLv1§2(d) (as a fork/expansion of
GPLv2§2(c)) on the telephone to Moglen, as described above. Moglen
has never produced evidence to dispute my recollection, and even agreed
with the events as I told them back in 2007.

Nevertheless, unlike Moglen, I do admit that creation of the final text of
AGPLv1 was a collaborative process, which included contributions from
Moglen, Poole, RMS, and a lawyer (whose name I don’t recall) whom Poole
hired. AGPLv3§13’s drafting was similarly collaborative, and included
input from Richard Fontana, David Turner, and Brett Smith, too.

Finally, I note my surprise at this outcome. In my primary community
— the Free Software community — people are generally extremely
good at giving proper credit. Unlike the Free Software community, legal
communities apparently are cutthroat on the credit issue, so I’ve
learned.

Mango Lassi

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/mango-lassi.html

Yesterday, at the GNOME Summit in Boston I did a quick presentation of my new desktop input sharing
hotness thingy, called “Mango Lassi” (alternatively known as “GNOME Input Sharing”). Something like a Synergy done right, or an x2x that doesn’t suck.

So, for those who couldn’t attend, here’s a screenshot, which doesn’t really tell how great it is, and which might also be a bit confusing:

Mango Lassi Screenshot

And here’s a list of random features already available:

  • Discover desktops to share mouse and keyboards with automatically via Avahi.
  • Fully peer-to-peer. All Mango Lassi instances are both client and server at the same time. Other hosts may enter or leave a running session at any time.
  • No need to open X11 up for the network.
  • You have a 50% chance that for your setup you don’t need any configuration
    at all. In the case of the other 50% you might need to swap the order of your
    screens manually in a simple dialog, because Mango Lassi didn’t guess correctly which
    screen is left and which screen is right.
  • libnotify integration so that it tells you whenever a desktop joins or leaves your session.
  • Shows a nice OSD on your screen when your screen’s input is currently being redirected to another screen.
  • Uses all those nifty GNOME APIs, like D-Bus-over-TCP, Avahi, libnotify, Gtk, …
  • Supports both the X11 clipboard and the selection, supporting all content types, and not just simple text — i.e. you can copy and paste image data between Gimp on your screens.
  • Lots of bugs and useless debug output, since this is basically the work of just three weekends.
  • Tray icon

And here’s a list of missing features:

  • Drag’n’drop between screens. (I figured out how this could work, it’s just
    a matter of actually implementing this, which is probably considerable work,
    because this would require some UI work, to show a download dialog and
    suchlike.)
  • Integration with Matthias’ GTK+ window migration patches, which would allow dragging GTK+ windows between screens. The migration code for GTK+ basically works. It’s just a matter of getting them merged in GTK+ proper, and hooking them up properly with Mango Lassi, which probably needs some kind of special support in Metacity so that we get notified when a window drag is happening and the pointer comes near the edges of the screens.
  • Encryption, authentication: the best solution would probably be for D-Bus to get native TLS support, which we could then make use of.
  • Support for legacy operating systems like Windows/MacOS. I personally don’t
    care much about this. However, Zeroconf implementations and D-Bus are available on
    Windows/MacOS too, and the exposed D-Bus interfaces are not too X11-centric, so
    this should be doable without too much work.
  • UI love: actually hooking up the desktop order changing buttons, and saving and restoring the order automatically.
  • MPX support (this would *rock*)

And finally, here’s where you can get it:

git clone http://git.0pointer.de/repos/mango-lassi.git/

gitweb

Oh, and I don’t take feature wishlist requests for this project. If you need
a feature, implement it yourself. It’s Free Software after all! I’d be happy if
someone would be willing to work on Mango Lassi in a way that it can become a
really good GNOME citizen and maybe even a proper part of it. But personally
I’ll probably only work on it to a level where it does all I need to work with
my Laptop and my Desktop PC on my desk in a sane way. I am almost 100% busy
with PulseAudio these days, and thus
unable to give Mango Lassi the love it could use. So, stand up now, if you want
to take over maintainership!

Hmm, Mango Lassi could use some good artwork, starting with an icon. I am
quite sure that someone with better graphic skills than me could easily create
a delicious icon, perhaps featuring a glass of fresh, juicy Mango
Lassi. I’d be very thankful for every icon submission!

User-Empowered Security via encfs

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2007/04/10/encfs.html

One of my biggest worries in using a laptop is that data
can suddenly become available to anyone in the world if a laptop is
lost or stolen. I was reminded of this during the mainstream
media coverage
1 of this issue last year.

There’s the old security through obscurity perception of running
GNU/Linux systems. Proponents of this theory argue that most thieves
(or impromptu thieves, who find a lost laptop but decide not to return
it to its owner) aren’t likely to know how to use a GNU/Linux system,
and will probably wipe the drive before selling it or using it.
However, with the popularity of Free Software rising, this old standby
(which never should have been a standby anyway, of course) doesn’t
even give an illusion of security anymore.

I have been known as a computer security paranoid in my time, and I
keep a rather strict regimen of protocols for my own personal
computer security. But, I don’t like to inflict new onerous security
procedures on the otherwise unwilling. Generally, people will find
methods around security procedures when they aren’t fully convinced
they are necessary, and you’re often left with a situation just as bad
or worse than when you started implementing your new procedures.

My solution for the lost/stolen laptop security problem was therefore
two-fold: (a) education among the userbase about how common it is to
have a laptop lost or stolen, and (b) providing a simple user-space
mechanism for encrypting sensitive data on the laptop. Since (a) is
somewhat obvious, I’ll talk about (b) in detail.

I was fortunate that, in parallel, my friend Paul and one of my
coworkers discovered how easy it is to use encfs and
told me about it. encfs uses the Filesystem in
Userspace (FUSE) to store encrypted data right in a user’s own home
directory. And, it is trivially easy to set up! I used Paul’s tutorial
myself, but there are many published all over the Internet.

My favorite part of this solution is that rather than an onerous
mandated procedure, encfs turns security into user
empowerment. My colleague James wrote up a tutorial for our internal
Wiki, and I’ve simply encouraged users to take a look and consider
encrypting their confidential data. Even though not everyone has
taken it up yet, many already have. When a new security measure
requires substantial change in behavior of the user, the measure works
best when users are given an opportunity to adopt it at their own
pace. FUSE deserves a lot of credit in this regard, since it lets
users switch their filesystem to encryption in pieces (unlike other
cryptographic filesystems that require some planning ahead). For my
part, I’ve been slowly moving parts of my filesystem into an encrypted
area as I move aside old habits gradually.

I should note that this solution isn’t completely without cost. First,
there is no metadata encryption, but I am really not worried about
interlopers finding out how big our nameless files and directories are
and who created them (anyway, with an SVN checkout, the interesting
metadata is in .svn, so it’s encrypted in this case).
Second, we’ve found that I/O intensive file operations take
approximately twice as long (both under ext3 and XFS) when using
encfs. I haven’t moved my email archives to my encrypted
area yet because of the latter drawback. However, for all my other
sensitive data (confidential text documents, IRC chat logs, financial
records, ~/.mozilla, etc.), I don’t really notice the
slow-down using a 1.6 GHz CPU with ample free RAM. YMMV.
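
If you want to check the rough factor-of-two figure on your own hardware, a quick-and-dirty timing sketch like the following Python script will do; both directory paths are placeholders for an ordinary directory and an encfs mount point you have set up yourself.

import os
import time

def timed_write(path, mb=64):
    # Write `mb` megabytes of random data and force it to disk; return seconds taken.
    start = time.time()
    with open(os.path.join(path, "encfs-bench.tmp"), "wb") as f:
        for _ in range(mb):
            f.write(os.urandom(1024 * 1024))
        f.flush()
        os.fsync(f.fileno())
    return time.time() - start

print("plain :", timed_write("/tmp/plain-dir"))      # placeholder: unencrypted directory
print("encfs :", timed_write("/home/user/secure"))   # placeholder: encfs mount point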

1
BTW, I’m skeptical about the FBI’s claim in that
old Washington Post article which states

“review of the equipment by computer forensic teams has
determined that the data base remains intact and has not been
accessed since it was stolen”. I am mostly clueless about
computer forensics; however, barring any sort of physical seal on
the laptop or hard drive casing, could a forensics expert tell if
someone had pulled out the drive, put it in another computer, did a
dd if=/dev/hdb of=/dev/hda, and then put it back as it
was found?

Es ist vollbracht!

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/2.6.19.html

Yes, finally Linux
2.6.19
has been released. So you wonder why is this something to blog about? — Because
it is the first Linux version that contains my super-cool MSI
Laptop driver
, one of the most impressive attainments of mankind, only
excelled perhaps by KRYPTOCHEF,
the only tool in existence which does fullbit encryption.

Announcing SECCURE

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/seccure.html

Yesterday my brother released his second Free Software package, the SECCURE Elliptic Curve Crypto Utility for Reliable Encryption. (Recursive acronyms, yay!)

The seccure toolset implements a selection of asymmetric algorithms based on elliptic curve cryptography (ECC). In particular, it offers public key encryption / decryption and signature generation / verification. ECC schemes offer a much better key size to security ratio than classical systems (RSA, DSA). Keys are short enough to make direct specification of keys on the command line possible (sometimes this is more convenient than the management of PGP-like key rings). seccure builds on this feature and therefore is the tool of choice whenever lightweight asymmetric cryptography — independent of key servers, revocation certificates, the Web of Trust, or even configuration files — is required.
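
seccure itself is a command-line tool, but for readers who want to poke at the same kind of ECC primitives from code, here is a small sketch using Python and the pyca/cryptography package (my own example, not part of seccure); it signs and verifies a message with ECDSA on a 256-bit curve and prints the compact public key that illustrates the key-size advantage mentioned above.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# A 256-bit curve gives security comparable to roughly 3072-bit RSA.
private_key = ec.generate_private_key(ec.SECP256R1())
message = b"short keys, strong crypto"

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
# verify() raises InvalidSignature if the message or signature was tampered with.
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))

print(private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
).decode())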

Anyone willing to work on the Debian RFP?

(His first Free Software package is ssss, an implementation of Shamir’s secret sharing scheme.)
