Tag Archives: Skype

Skype Log Viewer Download – View Logs on Windows

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/11/skype-log-viewer-download/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Skype Log Viewer allows you to download and view Skype history and log files on Windows without downloading the Skype client itself.

What does Skype Log Viewer do?

This program allows you to view all of your Skype chat logs and then easily export them as text files.

It correctly organizes them by conversation and makes sure that group conversations do not get jumbled with one on one chats.

Concerns About The Blockchain Technology

Post Syndicated from Bozho original https://techblog.bozho.net/concerns-blockchain-technology/

The so-called (and marketing-branded) “blockchain technology” is promised to revolutionize every industry. Everything, they say, will become decentralized, free from middlemen and government control. Services will thrive on various incarnations of the blockchain, and smart contracts will automatically enforce any logic related to the particular domain.

I don’t mind having another technological leap (after the internet), and given that I’m technically familiar with the blockchain, I may even be part of it. But I’m not convinced it will happen, and I’m not convinced it’s going to be the next internet.

If we strip away the hype, the technology behind Bitcoin is indeed a technical masterpiece. It combines existing techniques (like hash chains and Merkle trees) with a very good proof-of-work-based consensus algorithm. And it creates a digital currency, which, on top of being worth billions now, is simply cool.

But will this technology be mass-adopted, and will mass adoption allow it to retain the technological benefits it has?

First, I’d like to nitpick a little bit – if anyone speaks about “decentralized software” when referring to “the blockchain”, be suspicious. Bitcoin and other peer-to-peer overlay networks are in fact “distributed” (see the pictures here). “Decentralized” means having multiple providers, but doesn’t mean each user runs a full-featured node on the network. This nitpicking is actually part of another argument, but we’ll get to that.

If blockchain-based applications want to reach mass adoption, they have to be user-friendly. I know I’m being captain obvious here (and fortunately some of the people in the area have realized that), but with the current state of the technology, it’s impossible for end users to even get it, let alone use it.

My first serious concern is usability. To begin with, you need to download the whole blockchain to your machine. When I got my first bitcoin several years ago (when it was still worth 10 euro), the blockchain was fairly small and I didn’t notice that problem. Nowadays both the Bitcoin and Ethereum blockchains take ages to download. I still haven’t managed to download the Ethereum one – after several bugs and reinstalls of the client, I’m still at 15%. And we are just at the beginning. A user simply will not wait for days to download something in order to start using a piece of technology.

I recently proposed that downloading snapshots of the blockchain via BitTorrent be included in the Ethereum protocol itself. I know that snapshots of the Bitcoin blockchain have been distributed that way, but it has been a manual process. If a client can quickly download the huge file up to a recent point, and then only download the latest blocks in the traditional way, starting up may be easier. Of course, the whole chain would have to be verified, but maybe that can be a background process that doesn’t stop you from using whatever is built on top of the particular blockchain. (I’m not sure that would be secure enough, and that, say, potential Sybil attacks on the BitTorrent part wouldn’t make it undesirable; it’s just an idea.)

But even if such an approach works and is adopted, that would still mean that for every service you’d have to download a separate blockchain. Of course, projects like Ethereum may seem like the “one stop shop” for cool blockchain-based applications, but fragmentation is already happening – there are alt-coins bundled with various services like file storage, DNS, etc. That will not be workable for end users. And it’s certainly not an option for mobile, which is the dominant client now. If, instead of downloading the entire chain, something like consistent hashing is used to distribute the content in small portions among clients, it might be workable. But how trust would work in that case, I don’t know. Maybe it’s possible, maybe not.

And yes, I know that you don’t necessarily have to install a wallet/client in order to make use of a given blockchain – you can just have a cloud-based wallet. Which is fairly convenient, but that gets me to my nitpicking from a few paragraphs above and to my second concern – this effectively turns a distributed system into a decentralized one – a limited number of cloud providers hold most of the data (just as a limited number of miners hold most of the processing power). And then, even though the underlying technology allows for a distributed deployment, we’ll end up again with a simply decentralized or even de facto centralized system, if mergers and acquisitions lead us there (and they probably will). And in order to be able to access our wallets/accounts from multiple devices, we’d use a convenient cloud service where we’d log in with our username and password (because the private key is just too technical and hard for regular users). And that seems to defeat the whole idea.

Not only that, but there is an inevitable centralization of decisions (who decides on the size of the block, who has commit rights to the client repository) as well as a hidden centralization of power – how much GPU power do the Chinese mining “farms” control, and can they influence the network significantly? And will the average user ever know or care (just as they don’t care that Google is centralized)? I think that, overall, distributed technologies will follow the power law, and the majority of data/processing power/decision power will be controlled by a minority of actors. And so our distributed utopia will not happen in the pure form we dream of.

My third concern is incentive. Distributed technologies that have been successful so far have a pretty narrow set of incentives. The internet was promoted by large public institutions, including government agencies and big universities. BitTorrent was successful mainly because it allowed free movies and songs with two clicks of the mouse. And Bitcoin was successful because it offered financial benefits. I’m oversimplifying of course, but “government effort”, “free & easy” and “source of more money” seem to have been the successful incentives. On the other side of the fence there are dozens of failed distributed technologies. I’ve tried many of them – alternative search engines, alternative file storage, alternative ride-sharing, alternative social networks, even alternative “internets”. None have gained traction. Because they are not easier to use than their free competitors, you can’t make money out of them, and no government bothers promoting them.

Will blockchain-based services have sufficient incentives to draw customers? Will centralized competitors simply crush the distributed alternatives by being cheaper, more user-friendly, and having sales departments that can target more than the hardcore geeks who have no problem syncing their blockchain via the command line? The utopian slogans sound very cool to idealists and futurists, but they don’t sell. “Free from centralized control, full control over your data” – we’d have to go through a long process of cultural change before these things make sense to more than a handful of people.

Speaking of services, examples often include “the sharing economy”, where one stranger offers a service to another stranger. Blockchain technology seems like a good fit here indeed – the services are by nature distributed, so why should the technology be centralized? Here comes my fourth concern – identity. While for the cryptocurrencies it’s actually beneficial to be anonymous, for most of the real-world services (i.e. the industries that ought to be revolutionized) this is not an option. You can’t just get into the car of publicKey=5389BC989A342…. “But there are already distributed reputation systems”, you may say. Yes, and they are based on technical, not real-world, identities. That doesn’t build trust. I don’t trust that publicKey=5389BC989A342… is the same person that earned the high reputation. There may be five people behind that private key. The private key may have been stolen (e.g. in a cloud-provider breach).

The value of companies like Uber and Airbnb is that they serve as trust brokers. They verify and vouch for their drivers and hosts (and passengers and guests). They verify identities through government-issued documents, Skype calls, and selfies; they compare pictures to documents and get access to government databases, credit records, etc. Can a fully distributed service do that? No. You’d need a centralized provider to do it. And how would the blockchain make any difference then? Well, I may not be entirely correct here. I’ve actually been thinking quite a lot about decentralized identity – e.g. a way to predictably generate a private key based on, say, biometrics + password + government-issued documents, and use the corresponding public key as your identifier, which is then fed into reputation schemes and ultimately into real-world services. But we’re not there yet.
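For illustration only, here is a rough Node.js sketch of what such deterministic key derivation could look like; every input and parameter below is made up, and a real scheme would need serious cryptographic review (fuzzy extractors for biometrics, revocation, key-stretching parameters, and so on):

const crypto = require('crypto');

// Derive a stable secret from identity inputs, so the same person can
// regenerate it on any device. All inputs here are hypothetical.
function deriveIdentitySecret(biometricTemplateHash, password, documentId) {
  const salt = crypto.createHash('sha256').update(documentId).digest();
  // scrypt is deliberately memory-hard, which slows brute-forcing the inputs.
  return crypto.scryptSync(biometricTemplateHash + ':' + password, salt, 32);
}

const secret = deriveIdentitySecret(
  '9f2c...fingerprint-template-digest',  // hypothetical biometric digest
  'correct horse battery staple',
  'BG-ID-0123456789'                     // hypothetical document number
);

// A public identifier derived from the secret could then be fed into
// reputation schemes and, ultimately, real-world services.
const identifier = crypto.createHash('sha256').update(secret).digest('hex');
console.log(identifier);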

And that is part of my fifth concern – the technology itself. We are not there yet. There are bugs, there are thefts and leaks. There are hard forks. There isn’t sufficient understanding of the technology (I confess I don’t fully grasp all the implementation details, and they are always the key). Often the technology is advertised as “just working”, but it isn’t. The other day I read an article (I’ve lost the link) that clarifies a common misconception about smart contracts – they cannot interact with the outside world: they can’t call APIs (e.g. stock market prices, bank APIs), and they can’t push or fetch data from anywhere but the blockchain. That mandates the need, again, for a centralized service that pushes the relevant information onto the blockchain before smart contracts can pick it up. I’m pretty sure the cool-sounding applications are not possible without extensive research. And even if/when they are, writing distributed code is hard. Debugging a smart contract is hard. Yes, hard is cool, but it doesn’t drive economic value.

I have mostly been referring to public blockchains so far. Private blockchains may have their practical applications, but there’s one catch – they are not exactly the cool distributed technology that Bitcoin uses. They may be called “blockchains” because they…chain blocks, but they usually centralize trust. For example, the Hyperledger project uses PKI, with all its benefits and risks. In these cases, a centralized authority issues the identity “tokens”, and then the nodes communicate and form a shared ledger. That’s a somewhat easier problem to solve, and the nodes would usually sit on actual servers in real datacenters, not on your uncle’s Windows XP.

That said, hash chaining has been around for quite a long time. I researched the matter for a side project of mine, and providing a tamper-proof/tamper-evident log/database on semi-trusted machines has been discussed in computer science papers since the 90s. That alone is not “the magic blockchain” that will solve all of our problems, no matter what gossip protocols you sprinkle on top. I’m not saying that’s bad; on the contrary, any variation and combination of the blockchain’s building blocks (the hash chain, the consensus algorithm, proof of work (or stake), possibly smart contracts) has potential for making useful products.
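To make the hash-chain building block concrete, here is a minimal Node.js sketch of a tamper-evident log; it illustrates the idea only and is in no way production code:

const crypto = require('crypto');

function sha256(data) {
  return crypto.createHash('sha256').update(data).digest('hex');
}

// Append-only log where every entry commits to the previous entry's hash.
function appendEntry(chain, payload) {
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : '0'.repeat(64);
  chain.push({ payload, prevHash, hash: sha256(prevHash + payload) });
}

// Re-derive every hash; any edit to an earlier entry breaks the chain.
function verifyChain(chain) {
  let prevHash = '0'.repeat(64);
  for (const entry of chain) {
    if (entry.prevHash !== prevHash || sha256(prevHash + entry.payload) !== entry.hash) {
      return false;
    }
    prevHash = entry.hash;
  }
  return true;
}

const log = [];
appendEntry(log, 'alice pays bob 5');
appendEntry(log, 'bob pays carol 2');
console.log(verifyChain(log)); // true
log[0].payload = 'alice pays bob 500'; // tamper with history
console.log(verifyChain(log)); // false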

I know I sound like a naysayer here, but I hope I’ve pointed out particular issues rather than aimlessly ranting at the hype (though that’s tempting as well). I’m confident that blockchain-like technologies will have their practical applications, and we will see some successful, widely adopted services and solutions based on them, just as pointed out in this detailed report. But I’m not convinced it will be revolutionizing.

I hope I’m proven wrong, though, because watching a revolutionizing technology closely and even being part of it would be quite cool.

The post Concerns About The Blockchain Technology appeared first on Bozho's tech blog.

Defending anti-netneutrality arguments

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/defending-anti-netneutrality-arguments.html

Last week, activists proclaimed a “NetNeutrality Day”, trying to convince the FCC to regulate NetNeutrality. As a libertarian, I tweeted many reasons why NetNeutrality is stupid. NetNeutrality is exactly the sort of government regulation Libertarians hate most. Somebody tweeted the following challenge, which I thought I’d address here.

The links point to two separate cases.

  • the Comcast BitTorrent throttling case
  • a lawsuit against Time Warner for poor service
The tone of the tweet suggests that my anti-NetNeutrality stance cannot be defended in light of these cases. But of course this is wrong. The short answers are:

  • the Comcast BitTorrent throttling benefits customers
  • poor service has nothing to do with NetNeutrality

The long answers are below.

The Comcast BitTorrent Throttling

The presumption is that any sort of packet filtering is automatically evil and against the customer’s interests. That’s not true.
Take GoGoInflight’s internet service for airplanes. They block access to video sites like Netflix. That’s because they often have as little as 1 Mbps for the entire plane, which is enough to support many people checking email and browsing Facebook, but a single person trying to watch video will overload the connection for everyone. Therefore, their Internet service won’t work unless they filter video sites.
GoGoInflight breaks a lot of other NetNeutrality rules, such as providing free access to Amazon.com or promotional deals where users of a particular phone get free Internet access that everyone else pays for. And all of this is allowed by the FCC because it’s clearly in the customers’ interest.
Comcast’s throttling of BitTorrent is likewise clearly in the customer interest. Until the FCC stopped the throttling, BitTorrent users were allowed unlimited downloads. Afterwards, Comcast imposed a 300-gigabyte/month bandwidth cap.
Internet access is a series of tradeoffs. BitTorrent causes congestion during prime time (6pm to 10pm). Comcast has to solve it somehow — not solving it wasn’t an option. Their options were:
  • Charge all customers more, so that the 99% not using BitTorrent subsidizes the 1% who do.
  • Impose a bandwidth cap, preventing heavy BitTorrent usage.
  • Throttle BitTorrent packets during prime-time hours when the network is congested.
Option 3 is clearly the best. BitTorrent downloads take hours, days, and sometimes weeks. BitTorrent users don’t mind throttling during prime-time congested hours. That’s preferable to the other option, bandwidth caps.
I’m a BitTorrent user and a heavy downloader (I scan the Internet on a regular basis from cloud machines, then download the results to home, which can often be 100 gigabytes in size for a single scan). I want prime-time BitTorrent throttling rather than bandwidth caps. The EFF/FCC action that prevented BitTorrent throttling forced me to move to Comcast Business Class, which doesn’t have bandwidth caps, charging me $100 more a month. It’s why I don’t contribute to the EFF — if they had not agitated for this, taking such choices away from customers, I’d have $1200 more per year to donate to worthy causes.
Ask any user of BitTorrent which they prefer: a 300-gigabyte monthly bandwidth cap, or BitTorrent throttling during prime-time congested hours (6pm to 10pm). The FCC’s action did not help Comcast’s customers; it hurt them. Packet filtering would’ve been a good thing, not a bad thing.
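To make the policy concrete, here is a toy JavaScript sketch of what option 3 boils down to; the packet fields and the check are invented for illustration and bear no relation to Comcast’s actual traffic-management code:

// Toy model of option 3: deprioritize BitTorrent only while the network
// is congested during prime time (6pm-10pm). Field names are hypothetical.
function shouldThrottle(packet, now = new Date()) {
  const hour = now.getHours();
  const isPrimeTime = hour >= 18 && hour < 22; // 6pm to 10pm
  return isPrimeTime && packet.protocol === 'bittorrent';
}

console.log(shouldThrottle({ protocol: 'bittorrent' }, new Date('2017-07-20T19:30:00'))); // true
console.log(shouldThrottle({ protocol: 'https' }, new Date('2017-07-20T19:30:00')));      // false
console.log(shouldThrottle({ protocol: 'bittorrent' }, new Date('2017-07-20T03:00:00'))); // false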

The Time-Warner Case
First of all, no matter how you define the case, it has nothing to do with NetNeutrality. NetNeutrality is about filtering packets, giving some priority over others. This case is about providing slow service for everyone.
Secondly, it’s not true. Time Warner provided the same access speeds as everyone else. Just because they promise 10 Mbps download speeds doesn’t mean you get 10 Mbps to Netflix. That’s not how the Internet works — that’s not how any of this works.
To prove this, look at Netflix’s connection speed graphs. They show Time Warner Cable is average for the industry. It had the same congestion problems most ISPs had in 2014, and it has the same inability to provide more than 3 Mbps during prime time (6pm-10pm) that all ISPs have today.

The YouTube video quality diagnostic pages show Time Warner Cable to be similar to other providers around the country. They also show the prime-time bump between 6pm and 10pm.
Congestion is an essential part of the Internet’s design. When an ISP like Time Warner promises you 10 Mbps bandwidth, that’s only “best effort”. There’s no way they can promise a 10 Mbps stream to everybody on the Internet, especially not to a site like Netflix that gets overloaded during prime time.
Indeed, it’s the defining feature of the Internet compared to the old “telecommunications” network. The old phone system guaranteed you a steady 64-kbps stream between any two points in the phone network, but it cost a lot of money. Today’s Internet provides a multi-megabit stream for free video calls (Skype, FaceTime) around the world — but with occasional dropped packets because of congestion.
Whatever lawsuit money-hungry lawyers come up with isn’t about how an ISP like Time Warner works. It’s only about how they describe the technology. Time Warner works no differently than any other ISP.
Conclusion

The short answer to the above questions is this: Comcast’s BitTorrent throttling benefits customers, and the Time Warner issue has nothing to do with NetNeutrality at all.

The tweet demonstrates what NetNeutrality really means. It has nothing to do with the facts of any case, especially given how frequently people point to ISP ills that have nothing actually to do with NetNeutrality. Instead, what NetNeutrality is really about is socialism. People are convinced corporations are evil and want the government to run the Internet. The Comcast/BitTorrent case is a prime example of why this is a bad idea: the government’s definition of what customers want is far different from what customers actually want.

The Consumer Protection Commission asks why SMS messages in Cyrillic cost more than in Latin script

Post Syndicated from Delian Delchev original http://feedproxy.google.com/~r/delian/~3/rUccfhWEzQw/blog-post_9.html

The Consumer Protection Commission (CPC) wants an explanation for why it is illiterate and doesn’t understand how code tables (not encryption tables) work. The operators are struggling to answer in language fit for print and without recommending a return to school.
For everyone else unfamiliar with the details, here is the explanation.
A single SMS carries 160 bytes of data. Why 160 bytes is another question, but it isn’t important here.
An SMS can transport anything, from text messages to system information and even data. One of the early implementations of the WAP protocol ran over SMS.
What the sender and receiver exchange over SMS makes no difference to the operator. The operator doesn’t inspect the content of the message, nor does it change how the message is transmitted depending on the content. Doing so would require infrastructure that would multiply the cost of processing an SMS, and therefore the price for end users.
Only the end devices (the terminals/phones) choose how to encode the message and what its content is.
If the end devices use Concatenated SMS/EMS (extra information attached to the text of your SMS that says how the text is encoded and whether several small SMS messages are linked into one), you can send text containing any characters from the Unicode code tables, messages longer than 160 bytes, and even files and pictures. They are simply split into 160-byte messages, each given a header that says how the characters and data inside are encoded and how the several SMS parts are linked together.
This is not the operator’s choice; again, it is configuration and a choice made by the phone.
The phone decodes the bytes in the SMS into text according to its default code table.
So when you send something in Cyrillic, a UTF-8 code table is typically used (again, this is the phones’ choice, not the operators’; but for compatibility both communicating phones must make the same choice, so if you change the settings the recipient may not understand you). That is, Latin characters are transmitted with 1 byte, while Cyrillic, Greek, and extended Latin characters, Chinese characters, mathematical symbols, and so on are transmitted with 2 bytes.
So if you write an SMS using only Latin characters, you can fit 160 characters, but if you write only in Cyrillic letters, you get 80 characters per message.
If you add EMS/Concatenated SMS (which kicks in automatically the moment you write a longer text: over 160 characters in Latin script, or around 80 in Cyrillic), it splits your message into small SMS parts and adds an extra header to each one, identifying, for example, that these parts make up one message and in what order they go.
So if you write a text of 160 Latin characters, it takes exactly one SMS to transport.
But if you write a text of 160 Cyrillic characters, it takes 3 (THREE!) SMS messages to transport. The reason is that you need at least an 8-byte header for Concatenated SMS, so you can carry only 76 Cyrillic characters in one SMS: one SMS for the first 76 characters, one for the second 76, and one for the remaining 8.
So you actually use 3 SMS messages, and accordingly you pay for 3.
As for why Cyrillic is encoded with 2 bytes instead of 1, blame the people who invented computers for not knowing and using Cyrillic every day.
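As a side note, the arithmetic above can be captured in a short JavaScript sketch, under the post’s simplified assumptions (160 bytes of payload per SMS, an 8-byte concatenation header per part, 1 byte per Latin character and 2 bytes per Cyrillic character):

// Rough estimate of how many SMS parts a message needs, using the
// post's simplified numbers; real SMS encoding rules are more involved.
function smsPartsNeeded(text) {
  const bytesPerChar = /[\u0400-\u04FF]/.test(text) ? 2 : 1; // any Cyrillic?
  const totalBytes = text.length * bytesPerChar;
  if (totalBytes <= 160) return 1;  // fits into a single, headerless SMS
  const payloadPerPart = 160 - 8;   // each part loses 8 bytes to the header
  return Math.ceil(totalBytes / payloadPerPart);
}

console.log(smsPartsNeeded('a'.repeat(160))); // 1
console.log(smsPartsNeeded('я'.repeat(160))); // 3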

I don’t even want to get into why the CPC busies itself with nonsense that brings citizens no technical benefit. Today over 80% of mobile-service contracts in the country effectively include an unlimited number of SMS messages, and yet the average number of SMS messages citizens send per month shrinks by the day and is now stably below 30-40, half of which are adverts and system notifications. For comparison, ten years ago the average number of SMS messages per subscriber was 3-4 times higher. SMS is a dead service, because it has been replaced by more efficient services running over data (Skype, Viber, Facebook Messenger, Yahoo Messenger, WhatsApp, Snapchat, Google Hangouts/Duo/Allo, etc.) that carry much larger and richer multimedia content.
Because of this low usage, there are discussions at the European level about taking SMS out of the set of mandatory system services (i.e., it would no longer be required, for example, for receiving your roaming SMS). And again because of its low usage, operators bill it at a flat fee, at which point the price of a single SMS no longer matters (it is zero), nor does how many SMS messages it takes to write a swearword in Bulgarian.
And the CPC’s effort to raise patriotic noise at election time around a pointless activity that hardly anyone stands to benefit from is remarkably interesting.

A few words about the new Skype

Post Syndicated from Delian Delchev original http://feedproxy.google.com/~r/delian/~3/oiRuHXEumRI/skype.html

The new Skype is very interesting. In essence it is written entirely in JavaScript on top of Electron (much like Viber on some platforms) and uses XMPP for communication (probably over WebSockets, wss). Audio and video are implemented via WebRTC using codecs that are standard for Electron, and it is literally a web application. The only addition is a conference-mixer plugin for Electron. In fact, if you use Skype for Web, the two almost certainly share most of their code. The network behind it is Microsoft’s CDN, and the backend probably uses a simple messaging bus plus some storage for the data (I suspect the standard Azure offerings, so that it can use the CDN).

It no longer has anything in common with the original Skype, neither in architecture nor in infrastructure. The connection to the old network probably goes through a bridge somewhere, but that is gradually being shut down. Skype for Mac and Skype for Windows already use only the new model (as of this week the old clients are no longer supported on those platforms), as do the Skype clients for all mobile platforms. Only Skype for Linux (the one that is not the new alpha) still uses the old network, but the bridge is in such terrible shape that many messages never arrive and there are plenty of problems.

The architecture of the new Skype is very similar to that of Lync back in the day (except that Lync was based on IE). When Microsoft said years ago that they would merge Skype and Lync, I thought they would kill Lync and migrate its users to Skype. Apparently they meant the opposite.

AWS Week in Review – October 24, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-24-2016/

Another busy week in AWS-land! Today’s post included submissions from 21 internal and external contributors, along with material from my RSS feeds, my inbox, and other things that come my way. To join in the fun, create (or find) some awesome AWS-related content and submit a pull request!

New & Notable Open Source

  • aws-git-backed-static-website is a Git-backed static website generator powered entirely by AWS.
  • rds-pgbadger fetches log files from an Amazon RDS for PostgreSQL instance and generates a beautiful pgBadger report.
  • aws-lambda-redshift-copy is an AWS Lambda function that automates the copy command in Redshift.
  • VarnishAutoScalingCluster contains code and instructions for setting up a shared, horizontally scalable Varnish cluster that scales up and down using Auto Scaling groups.
  • aws-base-setup contains starter templates for developing AWS CloudFormation-based AWS stacks.
  • terraform_f5 contains Terraform scripts to instantiate an F5 BIG-IP in AWS.
  • claudia-bot-builder creates chat bots for Facebook, Slack, Skype, Telegram, GroupMe, Kik, and Twilio and deploys them to AWS Lambda in minutes.
  • aws-iam-ssh-auth is a set of scripts used to authenticate users connecting to EC2 via SSH with IAM.
  • go-serverless sets up a go.cd server for serverless application deployment in AWS.
  • awsq is a helper script to run batch jobs on AWS using SQS.
  • respawn generates CloudFormation templates from YAML specifications.

New Customer Success Stories

  • AbemaTV – AbemaTV is an Internet media-services company that operates one of Japan’s leading streaming platforms, FRESH! by AbemaTV. The company built its microservices platform on Amazon EC2 Container Service and uses an Amazon Aurora data store for its write-intensive microservices—such as timelines and chat—and a MySQL database on Amazon RDS for the remaining microservices APIs. By using AWS, AbemaTV has been able to quickly deploy its new platform at scale with minimal engineering effort.
  • Celgene – Celgene uses AWS to enable secure collaboration between internal and external researchers, allow individual scientists to launch hundreds of compute nodes, and reduce the time it takes to do computational jobs from weeks or months to less than a day. Celgene is a global biopharmaceutical company that creates drugs that fight cancer and other diseases and disorders. Celgene runs its high-performance computing research clusters, as well as its research collaboration environment, on AWS.
  • Under Armour – Under Armour can scale its Connected Fitness apps to meet the demands of more than 180 million global users, innovate and deliver new products and features more quickly, and expand internationally by taking advantage of the reliability and high availability of AWS. The company is a global leader in performance footwear, apparel, and equipment. Under Armour runs its growing Connected Fitness app platform on the AWS Cloud.

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Eavesdropping on Typing Over Voice-Over-IP

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/eavesdropping_o_6.html

Interesting research: “Don’t Skype & Type! Acoustic Eavesdropping in Voice-Over-IP”:

Abstract: Acoustic emanations of computer keyboards represent a serious privacy issue. As demonstrated in prior work, spectral and temporal properties of keystroke sounds might reveal what a user is typing. However, previous attacks assumed relatively strong adversary models that are not very practical in many real-world settings. Such strong models assume: (i) adversary’s physical proximity to the victim, (ii) precise profiling of the victim’s typing style and keyboard, and/or (iii) significant amount of victim’s typed information (and its corresponding sounds) available to the adversary.

In this paper, we investigate a new and practical keyboard acoustic eavesdropping attack, called Skype & Type (S&T), which is based on Voice-over-IP (VoIP). S&T relaxes prior strong adversary assumptions. Our work is motivated by the simple observation that people often engage in secondary activities (including typing) while participating in VoIP calls. VoIP software can acquire acoustic emanations of pressed keystrokes (which might include passwords and other sensitive information) and transmit them to others involved in the call. In fact, we show that very popular VoIP software (Skype) conveys enough audio information to reconstruct the victim’s input: keystrokes typed on the remote keyboard. In particular, our results demonstrate that, given some knowledge on the victim’s typing style and the keyboard, the attacker attains top-5 accuracy of 91.7% in guessing a random key pressed by the victim. (The accuracy goes down to a still alarming 41.89% if the attacker is oblivious to both the typing style and the keyboard.) Finally, we provide evidence that the Skype & Type attack is robust to various VoIP issues (e.g., Internet bandwidth fluctuations and presence of voice over keystrokes), thus confirming the feasibility of this attack.

News article.

Call me Ishmael

Post Syndicated from Lorna Lynch original https://www.raspberrypi.org/blog/call-me-ishmael/

“I write this sitting in the kitchen sink”. “It was the best of times, it was the worst of times”. “When Gregor Samsa woke one morning from troubled dreams, he found himself transformed right there in his bed into some sort of monstrous insect”. “It was the day my grandmother exploded”. The opening line of a novel can catch our attention powerfully, and can stay with us long after the book itself is finished. A memorable first line is endlessly quotable, and lends itself to parody (“It is a truth universally acknowledged that a zombie in possession of brains must be in want of more brains”). Sometimes, a really cracking first line can even inspire a group of talented people to create a unique and beautiful art object, with a certain tiny computer at its heart. 

Stephanie Kent demonstrates the Call Me Ishmael Phone at ALA 2016

If you read the roundup of our trip to ALA 2016, you will already have caught a glimpse of this unusual Pi-powered project: the Call Me Ishmael Phone. The idea originated back in 2014 when founders Logan Smalley and Stephanie Kent were discussing their favourite opening lines of books: they were both struck by Herman Melville’s laconic phrase in Moby Dick, and began wondering, “What if Ishmael had a phone number? What if you actually could call him?” Their Call Me Ishmael project began with a phone number (people outside the US can Skype Ishmael instead), an answering machine, and an invitation to readers to tell Ishmael a story about a book they love, and how it has shaped their life. The most interesting, funny, and poignant stories are transcribed by Stephanie on a manual typewriter and shared on social media. Here’s a playlist of some of the team’s favourites: 

Having created Ishmael’s virtual world, Stephanie and Logan collaborated with artist and maker Ayodamola Okunseinde to build the physical Call Me Ishmael Phone. Ayo took a commercially available retro-style telephone and turned it into an interactive book-recommendation device. For the prototype, he used a Raspberry Pi 2 Model B, but the production model of the phone uses the latest Pi 3. He explains, “we have a USB stick drive connected to the Pi that holds audio files, configuration, and identification data for each unit. We also have a small USB-powered speaker that amplifies the audio output from the Pi”. The Pis are controlled by a Python script written by programmer Andy Cavatorta.

Stephanie, Andy, and Ayo in the workshop. 

The phone can be installed in a library, bookshop, or another public space. It is loaded with a number of book reviews, some mapped to individual buttons on the phone and some selected at random. When a person presses the dial buttons on the phone, the GPIO pins detect the input, which triggers an audio file to play. If another button is pressed during playback, the Pi switches audio output to the file associated with that button. Hanging up the phone stops the playing audio file. The system consists of several units in different locations that have audio and data files pushed to them daily from a control server. There is also an app that allows users to push and pull content from individual Pis, as well as trigger a particular phone to ring.

The finished unit installed in a bookshop.

The Call Me Ishmael Phone is a thoughtful project which uses the Raspberry Pi in a very unusual way: it’s not often that programming and literature intersect like this. We’re delighted to see it, and we can’t wait to see what ways the makers might come up with to use the Raspberry Pi in future. And if you have a book which has changed your life, why not call Ishmael and share your story?

The post Call me Ishmael appeared first on Raspberry Pi.

Create and Deploy a Chat Bot to AWS Lambda in Five Minutes

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/create-and-deploy-a-chat-bot-to-aws-lambda-in-five-minutes/

This is a guest post by Gojko Adzic, creator of ClaudiaJS

 

The new Claudia.JS Bot Builder project helps JavaScript developers to easily create chat-bots for Facebook, Telegram, Skype, and Slack, and deploy them to AWS Lambda and Amazon API Gateway in minutes.

The key idea behind this project is to remove all the boilerplate code and common infrastructure tasks, so you can focus on writing the really important part of the bot — your business workflows. Everything else is handled by the Claudia Bot Builder.

The Claudia Bot Builder library simplifies messaging workflows, automatically sets up the correct web hooks, and guides you through configuration steps, so you don’t have to research individual implementation protocols. It automatically converts the incoming messages from various platforms into a common format, so you can handle them easily. It also automatically packages the responses into the correct templates, so you do not have to worry about different message response formats. This means that you can write and deploy a single bot with just a few lines of code, and operate it on various bot platforms using AWS Lambda. Check out the two-minute video Create chat-bots easily using Claudia Bot Builder to see how easy it is to set up a bot on AWS using the new tool.

Here’s a simple example:

Prerequisites

The Claudia Bot Builder works with the Node.JS 4.3.2 AWS Lambda installation. It requires using the Claudia.JS deployment tool, which you can install using NPM:

npm install claudia -g

If you already have Claudia installed, make sure it’s up to date. The Claudia Bot Builder support requires version 1.4.0 or later.

Creating a simple text bot

First, create an empty folder, and a new NPM project inside it. Make sure to give it a descriptive name:

npm init

Then, add the claudia-bot-builder library as a project dependency:

npm install claudia-bot-builder -S

For this particular bot, generate some dynamic content using the huh excuse generator. Add that as a project dependency:

npm install huh -S

Now create the bot. Create a file called bot.js and paste the following content:

var botBuilder = require('claudia-bot-builder'),
    excuse = require('huh');

module.exports = botBuilder(function (request) {
  return 'Thanks for sending ' + request.text  + 
      '. Your message is very important to us, but ' + 
      excuse.get();
});

That’s pretty much it. You can now deploy the bot to AWS and configure it for Facebook Messenger, by using Claudia:

claudia create --region us-east-1 --api-module bot --configure-fb-bot

Now would be a good time to configure a new Facebook page and a messenger application, as explained in the Facebook Messenger Getting Started Guide. The bot installer prints the web hook URL and the verification token, which you can copy to your Facebook Messenger configuration page. You can then generate the page access token from Facebook. Copy that back to Claudia when asked, and you’re almost done.

In a few moments, your bot will be live, and you can talk to it from the page you created. That was easy, wasn’t it?

If you’d like other Facebook users to talk to it as well, submit it for application review from the Facebook App Developer page.

Deploying to other platforms

The Claudia Bot Builder can also help you set up this bot for all the other platforms. Just run claudia update and provide the additional configuration option:

  • For Slack slash commands, use --configure-slack-slash-command
  • For Skype, use --configure-skype-bot
  • For Telegram, use --configure-telegram-bot

More complex workflows

The example bot just responds with silly excuses, so for homework, do something more interesting with it.

The request object passed into the message handling function contains the entire message in the text field, but it also has some other pieces of data for more complex work. The sender field identifies the user sending the message, so you can create threads of continuity and sessions. The type field contains the identifier of the bot endpoint that received the message (for example, skype or facebook) so you can respond differently to different bot systems. The originalRequest field contains the entire unparsed original message, so you can handle platform-specific requests and go beyond simple text.
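For instance, a handler along these lines (a sketch based on the fields described above, not an official example) could vary its reply depending on the platform and the sender:

var botBuilder = require('claudia-bot-builder');

// Use request.type to branch per platform and request.sender to address
// the user; the platform identifiers follow the examples in this post.
module.exports = botBuilder(function (request) {
  if (request.type === 'skype') {
    return 'Hello, Skype user ' + request.sender + '!';
  }
  if (request.type === 'facebook') {
    return 'Hello from Messenger! You said: ' + request.text;
  }
  return 'Thanks for your message: ' + request.text;
});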

For examples, check out:

  • Fact Bot, which looks up facts about topics on WikiData and creates Facebook Messenger menus.
  • Space Explorer Bot, a small Facebook Messenger chat bot using the NASA API

Although it’s enough just to return a string value for simple cases, and the Bot Builder packages it correctly for individual bot engines, you can return a more complex object and get platform-specific features, for example, Facebook buttons. In that case, make sure to use the type field of the request to decide on additional features.

For asynchronous workflows, send back a Promise object, and resolve it with the response later. The convention is the same: if the promise gets resolved with a string, the Claudia Bot Builder automatically packages it into the correct template based on the bot endpoint that received a message. Reply with an object instead of a string, and the Bot Builder will not do any specific parsing, letting you take advantage of more advanced bot features for individual platforms. Remember to configure your Lambda function for longer execution if you plan to use asynchronous replies; by default, AWS limits this to 3 seconds.
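Here is a minimal sketch of such an asynchronous reply, with a timer standing in for a slow lookup; remember that the Lambda timeout must be raised above the 3-second default for this to work:

var botBuilder = require('claudia-bot-builder');

// Resolve the promise with a string, and the Bot Builder packages it
// into the correct template for the platform that sent the message.
module.exports = botBuilder(function (request) {
  return new Promise(function (resolve) {
    // Stand-in for a slow operation (database lookup, HTTP call, etc.)
    setTimeout(function () {
      resolve('Sorry for the wait! Here is your answer to: ' + request.text);
    }, 2000);
  });
});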

Try it out live

You can see this bot in action and play with it live from the GitHub Claudia Examples repository.

More information

For more information on the Claudia Bot Builder, and some nice example projects, check out the Claudia Bot Builder GitHub project repository. For questions and suggestions, visit the Claudia project chat room on Gitter.

Signal

Post Syndicated from Йовко Ламбрев original http://yovko.net/signal/

I have decided to cut down my communication channels, above all the assorted messengers for direct messages.
I always prefer email for primary communication, since I can order (or ignore) by priority the messages that deserve attention and an eventual reply, and I have end-to-end encryption when needed. My current PGP key is here. And if you don’t know any of my email addresses, you can always use this method.
I check my mail at least once or twice a day, except when I am on vacation, without Internet, or working on an urgent problem or project. But I don’t get notifications for it on my smartphone – that is terribly distracting and counterproductive. My “favorite” is when someone calls me on the phone to say: I just sent you an email. Did you see it?
For direct messages I will from now on mainly use Signal by Open Whisper Systems, as a decent foundation for an open and secure platform that deserves to be used, popularized, and supported by its users. Temporarily, as a fallback, I am also keeping WhatsApp for a few close friends who prefer habit and don’t recognize the need for secure communication, so convincing them will take a while.
"I don't need privacy, I've nothing to hide" argues "I don't need free speech, I've nothing to say." Rights = Power https://t.co/AOMc79DIOS
— Edward Snowden (@Snowden) November 4, 2015

Okay, if you are on an iPhone or a Mac you can also send me an iMessage as another fallback option, keeping in mind that its security depends on Apple.
As a rule I don’t use Skype except by prior arrangement for a specific call. Nor Viber or Facebook Messenger (I never even managed to like them). I am also dropping Hangouts and Telegram, along with everything else, because they are all too much for me.
Try Signal – a simple, lightweight application for encrypted text messages and calls. Besides being free and open source, it costs nothing. It is available for iPhone and Android, and soon for the web. Even Snowden has blessed it 😉
I use Signal every day. #notesforFBI (Spoiler: they already know) https://t.co/KNy0xppsN0
— Edward Snowden (@Snowden) November 2, 2015

The LinkedIn Lawsuit Is a Step Forward But Doesn’t Go Far Enough

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/09/22/linkedin.html

Years ago, I wrote a blog post about how
I don’t
use Google Plus, Google Hangouts, Facebook, Twitter, Skype, LinkedIn or
other proprietary network services
. I talked in that post about how
I’m under constant and immense social pressure to use these services.
(It’s often worse than the peer pressure one experiences as a teenager.)

I discovered a few months ago, however, that one form of this peer
pressure was actually a product of nefarious practices by one of the
vendors — namely Linked In. Today, I learned a lawsuit is now
proceeding against Linked In on behalf of the users whose contacts
were spammed repeatedly by Linked In’s clandestine use of people’s
address books.

For my part, I suppose I should be glad that I’m “well connected”,
but that means I get multiple emails from Linked In almost every
single day, and indeed, as the article (linked to above) states, each
person’s spam arrives three times over a period of weeks. I was
initially furious at people whom I’d met for selling my contact
information to Linked In (which of course, they did), but many of
them indeed told me they were never informed by Linked In that such
spam generation would occur once they’d completed the sale of all
their contact data to Linked In.

This is just yet another example of proprietary software companies
mistreating users. If we had a truly federated Linked-In-like
service, we’d be able to configure our own settings in this regard.
But, we don’t have that. (I don’t think anyone is even writing one.)
This is precisely why it’s important to boycott these proprietary
solutions, so at the very least, we don’t complacently forget that
they’re proprietary, or inadvertently mistreat our colleagues who
don’t use those services in the interim.

Finally, the lawsuit seems to focus solely on the harm caused to
Linked In users who were embarrassed professionally. (I can say that
indeed I was pretty angry at many of my contacts for a while when I
thought they were choosing to spam me three times each, so that harm
is surely real.) But the violation of the CAN-SPAM Act by Linked In
should also not be ignored, and I hope someone will take action on
that point, too.

Be Sure to Comment on FCC’s NPRM 14-28

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/06/04/fcc-14-28.html

I remind everyone today, particularly USA citizens, to be sure to
comment on the FCC’s Notice of Proposed Rulemaking (NPRM) 14-28.
They even did a sane thing and provided an email address you can
write to rather than using their poorly designed web forums, but PC
Magazine published relatively complete instructions for other ways.
The deadline isn’t for a while yet, but it’s worth getting it done
so you don’t forget. Below is my letter in case anyone is interested.

Dear FCC Commissioners,

I am writing in response to NPRM 14-28 — your request for comments regarding
the “Open Internet”.

I am a trained computer scientist and I work in the technology industry.
(I’m a software developer and software freedom activist.) I have subscribed
to home network services since 1989, starting with the Prodigy service, and
switching to Internet service in 1991. Initially, I used a PSTN single-pair
modem and eventually upgraded to DSL in 1999. I still have a DSL line, but
it’s sadly not much faster than the one I had in 1999, and I explain below
why.

In fact, I’ve watched the situation get progressively worse, not
better, since the Telecommunications Act of 1996. While my download
speeds are a little bit faster than they were in the late 1990s, I
now pay substantially more for only small increases in upload speeds,
even in major urban markets. In short, it’s become increasingly
difficult to actually purchase true Internet connectivity service
anywhere in the USA. But first, let me explain what I mean by “true
Internet connectivity”.

The Internet was created as a peer-to-peer medium where all nodes were
equal. In the original design of the Internet, every device has its own
IP address and, if the user wanted, that device could be addressed
directly and fully by any other device on the Internet. For its part,
the network between the two nodes was intended to merely move the
packets between those nodes as quickly as possible — treating all those
packets the same way, and analyzing those packets only with publicly
available algorithms that everyone agreed were correct and fair.

Of course, the companies who typically appeal to (or even fight) the FCC
want the true Internet to simply die. They seek to turn the promise of
a truly peer-to-peer network of equality into a traditional broadcast
medium that they control. They frankly want to manipulate the Internet
into a mere television broadcast system (with the only improvement to
that being “more stations”).

Because of this, the following three features of the Internet —
inherent in its design — are now extremely difficult for individual
home users to purchase at reasonable cost from so-called
“Internet providers” like Time Warner, Verizon, and Comcast:

A static IP address, which allows the user to be a true, equal node on
the Internet. (And, related: IPv6 addresses, which could end the claim
that static IP addresses are a precious resource.)

An unfiltered connection that allows the user to run their own
webserver, email server and the like. (Most of these companies block TCP
ports 80 and 25 at the least, and usually many more ports, too).

Reasonable choices between the upload/download speed tradeoff.

For example, in New York, I currently pay nearly $150/month to an
independent ISP just to have a static, unfiltered IP address with 10
Mbps down and 2 Mbps up. I work from home and the 2 Mbps up is
incredibly slow for modern usage. However, I’m still stuck at this
speed, because upload speeds greater than that are prohibitively
priced by every provider.

In other words, these carriers have designed their networks to
prioritize all downloading over all uploading, and to purposely place
the user behind many levels of Network Address Translation and network
filtering. In this environment, many Internet applications simply do
not work (or require complex work-arounds that disable key features).
As an example: true diversity in VoIP accessibility and service has
almost entirely been superseded by proprietary single-company services
(such as Skype) because SIP, designed by the IETF (in part) for VoIP
applications, did not fully anticipate that nearly every user would be
behind NAT and unable to use SIP without complex work-arounds.

I believe this disastrous situation centers around problems with the
Telecommunications Act of 1996. While
the
ILECs

are theoretically required to license network infrastructure fairly at bulk
rates to

CLEC
s,
I’ve frequently seen — both professional and personally — wars
waged against
CLECs by
ILECs. CLECs
simply can’t offer their own types of services that merely “use”
the ILECs’ connectivity. The technical restrictions placed by ILECs force
CLECs to offer the same style of service the ILEC offers, and at a higher
price (to cover their additional overhead in dealing with the CLECs)! It’s
no wonder there are hardly any CLECs left.

Indeed, in my 25 year career as a technologist, I’ve seen many nasty
tricks by Verizon here in NYC, such as purposeful work-slowdowns in
resolution of outages and Verizon technicians outright lying to me and
to CLEC technicians about the state of their network. For my part, I
stick with one of the last independent ISPs in NYC, but I suspect they
won’t be able to keep their business going for long. Verizon either (a)
buys up any CLEC that looks too powerful, or, (b) if Verizon can’t buy
them, Verizon slowly squeezes them out of business with dirty tricks.

The end result is that we don’t have real options for true Internet
connectivity for home nor on-site business use. I’m already priced
out of getting a 10 Mbps upload with a static IP and all ports usable.
I suspect within 5 years, I’ll be priced out of my current 2 Mbps upload
with a static IP and all ports usable.

I realize the problems that most users are concerned about on this issue
relate to their ability to download bytes from third-party companies
like Netflix. Therefore, it’s all too easy for Verizon to play out this
argument as if it’s big companies vs. big companies.

However, the real fallout from the current system is that the cost for
personal Internet connectivity that allows individuals equal existence
on the network is so high that few bother. The consequence, thus, is
that only those who are heavily involved in the technology industry even
know what types of applications would be available if everyone had a
static IP with all ports usable and equal upload and download speeds
of 10 Mbps or higher.

Yet, that’s the exact promise of network connectivity that I was taught
about as an undergraduate in Computer Science in the early 1990s. What
I see today is the dystopian version of the promise. My generation of
computer scientists have been forced to constrain their designs of
Internet-enabled applications to fit a model that the network carriers
dictate.

I realize you can’t possibly fix all these social ills in the network
connectivity industry with one rule-making, but I hope my comments have
perhaps given a slightly different perspective of what you’ll hear from
most of the other commenters on this issue. I thank you for reading my
comments and would be delighted to talk further with any of your staff
about these issues at your convenience.

Sincerely,

Bradley M. Kuhn,
a citizen of the USA since birth, currently living in New York, NY.

No, You Won’t See Me on Twitter, Facebook, Linkedin, Google Plus, Google Hangouts, nor Skype

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/11/24/google-plus.html

Most folks outside of technology fields and the software freedom
movement can’t grok why I’m not on Facebook. Facebook’s marketing has
reached most of the USA’s non-technical Internet users. On the upside,
Facebook gave the masses access to something akin to blogging. But, as
with most technology controlled by for-profit companies, Facebook is
proprietary software. Facebook, as a software application, is written
in a mix of server-side software that no one besides Facebook
employees can study, modify and share. On the client-side, Facebook is
an obfuscated, proprietary software Javascript application, which is
distributed to the user’s browser when they access facebook.com. Thus,
in my view, using Facebook is no different than installing a proprietary
binary program on my GNU/Linux desktop.

Most of the press critical of Facebook has focused on privacy, data
mining of users’ data on behalf of advertisers, and other types of data
autonomy concerns. Such concerns remain incredibly important too.
Nevertheless, since the advent of the software freedom community’s
concerns about network services a few years ago, I’ve maintained this
simple principle, that I still find correct: While I can agree that
merely liberating all software for an online application is not a
sufficient condition to treat the online users well, the
liberation of the software is certainly a necessary condition
for the freedom of the users. Releasing freely all code for the online
application is the first step for freedom, autonomy, and privacy of the
users. Therefore, I certainly don’t give in myself to running
proprietary software on my
FaiF desktops. I simply
refuse to use Facebook.

Meanwhile, when Google Plus was announced, I didn’t see any fundamental
difference from Facebook. Of course, there are differences on the
subtle edges: for example, I do expect that Google will respect data
portability more than Facebook. However, I expect data mining for
advertisers’ behalf will be roughly the same, although Google will
likely be more subtle with advertising tie-in than Facebook, and thus
users will not notice it as much.

But, since I’m firstly a software freedom activist, on the primary
issue of my concern, there is absolutely no difference between Facebook
and Google Plus. Google Plus’ software is a mix of server-side
trade-secret software that only Google employees can study, share, and
modify, and a client-side proprietary Javascript application downloaded
into the users’ browsers when they access the website.

Yet, in a matter of just a few months, much of the online conversation
in the software freedom community has moved to Google Plus, and I’ve
heard very few people lament this situation. It’s not that I believe
we’ll succeed against proprietary software tomorrow, and I understand
fully that (unlike me) most people in the software freedom community
have important reasons to interact regularly with those outside of our
community. It’s not that I chastise software freedom developers and
activists for maintaining a minimal presence on these services to
interact with those who aren’t committed to our cause.

My actual complaint here is that Google Plus is becoming the default
location for discussion of software freedom issues. I’ve noticed
because I’ve recently discovered that I’ve missed a lot of community
conversations that are only occurring on Google Plus. (I’ve similarly
noticed that many of my Free Software contacts spam me to join Linkedin,
so I assume something similar is occurring there as well.)

What’s more, I’ve received more pressure than ever before to sign up
for not only Google Plus, but for Twitter, Linkedin, Google Hangout, Skype and other
socially-oriented online communication services. Indeed, just in the
last ten days, I’ve had three different software freedom development
projects and/or organizations request that I sign up for a proprietary
online communication service merely to attend a meeting or conference
call. (Update on 2013-02-16: I still get such requests on a monthly basis.) Of course, I refused, but I’ve not felt peer pressure this strong
since I was a teenager.

Indeed, the advent of proprietary social networking software adds a new
challenge to those of us who want to stand firm and resist proprietary
software. As adoption of services like Facebook, Twitter, Google Plus,
Skype, Linkedin and Google Hangouts increases, those of us who resist using proprietary
software will come under ever-increasing peer pressure. Disturbingly,
I’ve found that peer pressure comes not only from folks outside
our community, but also from those who have, for
years, otherwise been supporters of the software freedom
movement.

When I point out that I use only Free Software, some respond that
Skype, Facebook, and Google Plus are convenient and do things that can’t
be done easily with Free Software currently. I don’t argue that point.
It’s easy to resist Microsoft Windows, or Internet Explorer, or any
other proprietary software that is substandard and works poorly. But
proprietary software developers aren’t necessarily stupid, nor
untalented. In fact, proprietary software developers are highly paid to
write easy-to-use, beautiful and enticing software (cross-reference
Apple, BTW). The challenge the software freedom community faces is not
merely to provide alternatives to the worst proprietary software, but to
also replace the most enticing proprietary software available. Yet, if
FaiF Software developers settle into being users of that enticing
proprietary software, the key inspiration for development
disappears.

The best motivator to write great new software is to solve a problem
that’s not yet solved. To inspire ourselves as FaiF Software
developers, we can’t complacently settle into use of proprietary
software applications as part of our daily workflow. That’s why you
won’t find me on Google Plus, Google Hangout, Facebook, Skype, Linkedin, Twitter or
any other proprietary software network service. You can phone me via
SIP, you can read my blog and identi.ca feed, and you can chat with me
on IRC and XMPP; those are the only places I'll be until there are
Free Software replacements for those other services. I sometimes kid
myself into believing that I’m leading by example, but sadly few in the
software freedom community seem to be following.

Denouncing vs. Advocating: In Defense of the Occasional Denouncement

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2009/10/11/denouncing-v-advocating.html

For the last decade, I’ve regularly seen complaints when we harder-core
software freedom advocates spend some time criticizing proprietary
software in addition to our normal work preserving, protecting and
promoting software freedom. While I think entire campaigns focused on
criticism are warranted in only extreme cases, I do believe that
denouncement of certain threatening proprietary technologies is a
necessary part of the software freedom movement, when done sparingly.

Denouncements are, of course, negative, and in general, negative
tactics are never as valuable as positive ones. Negative campaigns
alienate some people, and it’s always better to talk about the advantages
of software freedom than focus on the negative of proprietary
software.

The place where negative campaigns that denounce are simply necessary,
in my view, is when the practice either (a) will somehow completely
impede the creation of FLOSS or (b) has become, or is becoming,
widespread among people who are otherwise supportive of software
freedom.

I can think quickly of two historical examples of the first type: UCITA
and DRM. UCITA was a State/Commonwealth-level law in the USA that was
proposed to make local laws more consistent regarding software
distribution. Because the implications were so bad for software freedom
(details of which are beyond the scope of this post but can be learned
at the link), and because it was so unlikely that we could get the
UCITA drafts changed, it was necessary to publicly denounce the law and
hope that it didn't pass. (Fortunately, it only ever passed in my home
state of Maryland and in Virginia. I am still, probably pointlessly,
careful never to distribute software when I visit my hometown. 🙂)

DRM, for its part, posed an even greater threat to software freedom
because its widespread adoption would require proprietarization of all
software that touched any television, movie, music, or book media. There
was also a concerted widespread pro-DRM campaign from USA corporations.
Therefore, grassroots campaigns denouncing DRM are extremely necessary,
even though they are primarily negative in operation.

The second common need for denouncement is when use of a proprietary
software package has become acceptable in the software freedom community.
The most common examples are specific proprietary software programs
that have become (or seem about to become) an &#8220;all but
standard&#8221; part of the toolset for Free Software developers and
advocates.

Historically, this category included Java, and that’s why there were
anti-Java campaigns in the Free Software community that ran concurrently
with Free Software Java development efforts. The need for the former is
now gone, of course, because the latter efforts were so successful and we
have a fully FaiF Java system. Similarly, denouncement of Bitkeeper was
historically necessary, but is also now moot because of the advent and
widespread popularity of Mercurial, Git, and Bazaar.

Today, there are still a few proprietary programs that quickly rose to
the ranks of &#8220;must install on my GNU/Linux system&#8221; for all but the
hardest-core Free Software advocates. The key examples are Adobe Flash
and Skype. Indeed, much to my chagrin, nearly all of my co-workers at
SFLC insist on using Adobe Flash, and nearly every Free Software developer
I meet at conferences uses it too. And, despite excellent VoIP technology
available as Free Software, Skype has sadly become widely used in our
community as well.

When a proprietary system becomes as pervasive in our community as
these have (or looks like it might), it’s absolutely time for
denouncement. It’s often very easy to forget that we’re relying more and
more heavily on proprietary software. When a proprietary system
effectively becomes the “default” for use on software freedom
systems, it means fewer people will be inspired to write a
replacement. (BTW, contribute to Gnash!) It means that Free Software
advocates will, in direct contradiction of their primary mission, start to
advocate that users install that proprietary software, because it
seems to make the FaiF platform “more useful”.

Hopefully, by now, most of us in the software freedom community agree
that proprietary software is a long term trap that we want to avoid.
However, in the short term, there is always some new shiny thing.
Something that appeals to our prurient desire for software that
“does something cool”. Something that just seems so
convenient that we convince ourselves we cannot live without it, so we
install it. Over time, the short term becomes the long term, and suddenly we
have gaping holes in the Free Software infrastructure that only the very
few notice because the rest just install the proprietary thing. For
example, how many of us bother to install Linux Libre,
even long enough to at least know which of our hardware
components needs proprietary software? Even I have to admit I don’t do
this, and probably should.

An old adage of software development is that software is always better
if the developers of it actually have to use the thing from day to day.
If we agree that our goal is ultimately convincing everyone to run only
Free Software (and for that Free Software to fit their needs), then we
have to trailblaze by avoiding running proprietary software ourselves. If
you do run proprietary software, I hope you won’t celebrate the fact or
encourage others to do so. Skype is particularly insidious here, because
it’s a community application. Encouraging people to call you on Skype is
the same as emailing someone a Microsoft Word document: it’s encouraging
someone to install a proprietary application just to work with you.

Finally, I think the only answer to the FLOSS community
celebrating the arrival of some new proprietary program for
GNU/Linux is to denounce it, as a counterbalance to the fervor that such
an announcement causes. My podcast co-host Karen
often calls me the canary in the software coalmine because I am
usually the first to notice something that is bad for the advancement of
software freedom before anyone else does. In playing this role, I often
end up denouncing a few things here and there, although I can still count
on my two hands the times I’ve done so. I agree that advocacy should be
the norm, but the occasional denouncement is also a necessary part of the
picture.

(Note: this blog is part of an ongoing public discussion of a software
program that is not too popular yet, but was heralded widely as a win for
Free Software in the USA. I didn’t mention it by name mainly because I
don't want to give it more press than it's already gotten, as it is one of
those programs that is becoming a standard GNU/Linux user
application (at least in the USA), but hasn't yet risen to the level of
ubiquity of the other examples I give above. Here’s to hoping that it
doesn’t.)

Skype

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/skype.html

A quick update on Skype: the next Skype version will include native
PulseAudio support. And not only that, but they even tag their audio
streams properly. This enables PulseAudio to do fancy stuff like
automatically pausing your audio playback when you have a phone call.
Good job!

In some ways they are now doing a better job of integrating into the modern
audio landscape than some Free Software telephony applications!
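
For the curious: this stream tagging is done through PulseAudio's
property-list API. Here is a rough sketch, with a hypothetical
application name, of how any client, free or proprietary, can mark its
stream as a phone call so that PulseAudio can treat it specially:

    /* Sketch: create a PulseAudio context whose streams carry
     * media.role=phone, so modules can e.g. pause music players during
     * a call. "MyPhoneApp" is a placeholder name. Build with -lpulse. */
    #include <pulse/pulseaudio.h>

    pa_context *make_phone_context(pa_mainloop_api *api)
    {
        pa_proplist *props = pa_proplist_new();
        /* PA_PROP_MEDIA_ROLE expands to "media.role" */
        pa_proplist_sets(props, PA_PROP_MEDIA_ROLE, "phone");
        pa_context *ctx = pa_context_new_with_proplist(api, "MyPhoneApp", props);
        pa_proplist_free(props);
        return ctx;
    }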

Unfortunately they didn’t fix the biggest bug though: it’s still not Free
Software!


Stop Obsessing and Just Do It: VoIP Encryption Is Easier than You Think

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2008/06/20/voip-encryption-easy.html

Ian Sullivan showed me an article that he read about eavesdropping on
Internet telephony calls. I'm baffled at the obsession about this issue
on two fronts. First, I am amazed that people want to hand their phone
calls over to yet another proprietary vendor (aka Skype) that uses
unpublished, undocumented, non-standard protocols and that respects
your privacy even less than the traditional PSTN vendors. Second, I
don't understand why cryptography experts believe we need to develop
complicated new technology to solve this problem in the medium term.

At SFLC, I set up the telephony system as VoIP with encryption on
every possible leg. While SFLC sometimes uses Skype, I don't, of course,
because it is (a) proprietary software, (b) based on an undocumented
protocol, and (c) controlled by a company that has less respect for
users' privacy than the PSTN companies themselves. Indeed, security was
actually last on our list of reasons to reject Skype, because we
already had a simple
solution for encrypting our telephony traffic: All calls are made
through a VPN.

Specifically, at SFLC, I set up a system whereby all users have an
OpenVPN connection back to the home office. From there, they can
register a SIP client to an internal Asterisk server living inside the
VPN network. Using that SIP phone, they can call any SFLC employee,
fully encrypted. That call continues either on the internal secured
network, or back out over the same VPN to the other SIP client. Users
can also dial out from there to any PSTN DID.

Of course, when calling the PSTN, the encryption ends at SFLC’s office, but that’s the PSTN’s fault, not ours. No technological solution — save using a modem to turn that traffic digital — can easily solve that. However,
with minimal effort, and using existing encryption subsystems, we have
end-to-end encryption for all employee-to-employee calls.
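
To make that concrete, here is a minimal sketch of the client-side
pieces of such a setup. Every hostname, certificate name, and secret
below is a hypothetical placeholder, not our actual configuration:

    # client.ovpn -- hypothetical OpenVPN profile for an employee laptop
    client
    dev tun
    proto udp                  # one UDP tunnel; SIP and RTP both ride inside it
    remote vpn.example.org 1194
    nobind
    persist-key
    persist-tun
    ca ca.crt                  # office CA plus a per-employee cert/key pair
    cert alice.crt
    key alice.key

    ; sip.conf fragment -- the same employee's account on the internal Asterisk
    [alice]
    type=friend
    secret=placeholder-password
    host=dynamic               ; registers from whatever VPN address she gets
    context=internal           ; dial plan: other employees, or out to the PSTN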

And it could go even further with a day’s effort of work! I have a
pretty simple idea on how to have an encrypted call to anyone
who happens to have a SIP client and an OpenVPN client. My plan is to
make a public OpenVPN server that accepts connections from any host at
all, which would then allow encrypted &#8220;phone the office&#8221; calls to any
SFLC phone from any SIP client anywhere on
the Internet. In this way, anyone wishing end-to-end phone encryption
to the SFLC need only connect to that publicly accessible OpenVPN and
dial our extensions with their SIP client over that line. This solution
even has the added bonus that it avoids the common firewall and NAT
related SIP problems, since all traffic gets tunneled through the
OpenVPN: if OpenVPN (which is, unlike SIP, a single-port UDP/IP protocol)
works, SIP automatically does!
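
A sketch of what that publicly accessible OpenVPN endpoint might look
like (again, purely hypothetical values, and only one of several
plausible ways to let arbitrary callers connect):

    # server.conf -- hypothetical public "phone the office" OpenVPN endpoint
    port 1194
    proto udp                        # the single UDP port the whole scheme needs
    dev tun
    server 10.99.0.0 255.255.255.0   # address pool handed out to outside callers
    ca ca.crt
    cert server.crt
    key server.key
    dh dh.pem
    duplicate-cn                     # let many callers share one published client cert
    push "route 10.10.0.0 255.255.255.0"   # expose only the internal phone subnet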

The main criticism of this technique regards the silliness of two
employees at a conference in San Francisco bouncing all the way through
our NYC offices just to make a call to each other. While the Bandwidth
Wasting Police might show up at my door someday, I don’t actually find
this to be a serious problem. The last mile is always the problem in
Internet telephony, so a call that goes mostly across a single set of
last mile infrastructure in a particular municipality is neither worse
nor better than one that takes a long-haul round trip. Very occasionally,
there is a half second of delay when you have a few VPN-based users on a
conference call together, but that has a nice social side effect of
stopping people from trying to interrupt each other.

Finally, the article linked above talks about the issue of variable bit
rate compression changing packet size such that even encrypted packets
yield possible speech information, since some sounds need larger packets
than others. This problem is solved simply for us in two ways: (a) we
use µ-law, a very old, constant-bit-rate codec, and (b) a tiny bit of entropy
is added to our packets by default, because the encryption is occurring
for all traffic across the VPN connection, not just the phone
call itself. Remember: all the traffic is going together across the one
OpenVPN UDP port, so an eavesdropper would need to detangle the VoIP
traffic from everything else. Indeed, I could easily make (b) even
stronger by simply having the SIP client open another connection back to
the asterisk host and exchange payloads generated
from /dev/random back and forth while the phone call is
going on.
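
That stronger variant of (b) would be nearly trivial to implement. A
minimal sketch, assuming a hypothetical VPN-internal Asterisk address
and an arbitrary placeholder port (and reading /dev/urandom rather than
/dev/random so the reads never block):

    /* Sketch: pump random-payload UDP datagrams through the same VPN
     * while a call is up, so an eavesdropper watching the single
     * OpenVPN port sees the VoIP packets drowned in cover traffic. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[160];        /* same size as a 20 ms mu-law payload */
        int rnd = open("/dev/urandom", O_RDONLY);
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(4569);    /* placeholder port on the Asterisk host */
        inet_pton(AF_INET, "10.10.0.1", &dst.sin_addr);  /* VPN-internal address */

        for (;;) {                     /* run for the duration of the call */
            read(rnd, buf, sizeof buf);
            sendto(sock, buf, sizeof buf, 0, (struct sockaddr *)&dst, sizeof dst);
            usleep(20000);             /* one datagram every 20 ms, like the codec */
        }
    }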

This is really one of those cases where the simpler the solution, the
more secure it is. Trying to focus on “encryption of VoIP and VoIP only” is
what leads us to the kinds of vulnerabilities described in that article.
VoIP isn’t like email, where you always need an encryption-unaware
delivery mechanism between Alice and Bob. I
believe I've described a simple mechanism that can allow anyone with an
Asterisk box, an OpenVPN server, and an Internet connection to publish
to the world easy instructions for phoning them securely with merely a
SIP client plus an OpenVPN client. Why don't
we just take the easy and more secure route and do our VoIP this
way?

Back from LCA

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/lca2008.html

After coming back from my somewhat extended linux.conf.au trip I spent the
whole day grepping through email. Only 263 unprocessed emails left in my inbox.
Yay.

PRTPILU

Thanks to the LCA guys, video footage is now available of all the
talks, including my talk Practical Real-Time Programming in Linux
Userspace (Theora, Slides). In my endless modesty I have to recommend:
go watch it, it contains some really good stuff (including me not being
able to divide 1 by 1000). Right now, the real-time features of the
Linux kernel are seldom used on the desktop, for a couple of reasons:
among them the general difficulty and unsafety of using them, but
predominantly it's probably just unawareness. There are a couple of
situations, however, where scheduling desktop processes as RT makes a
lot of sense (think of video playback, mouse cursor feedback, etc.), to
decouple the execution (scheduling) latency from the system load. This
talk focussed mostly on non-trivial technical stuff and all the
limitations RT on Linux still has. To fully grok what's going on, you
thus need some insight into concurrent programming and stuff.
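
For those who just want a taste of the mechanics, here is a minimal
sketch (not code from the talk) of how a process opts into RT
scheduling; it fails with EPERM unless the process has the privilege,
e.g. via RLIMIT_RTPRIO:

    /* Ask the kernel to schedule the calling process with the
     * real-time round-robin policy at a modest priority. */
    #include <sched.h>
    #include <stdio.h>

    int make_me_realtime(void)
    {
        struct sched_param p = { .sched_priority = 10 };
        if (sched_setscheduler(0, SCHED_RR, &p) < 0) {
            perror("sched_setscheduler");  /* usually EPERM for normal users */
            return -1;
        }
        return 0;
    }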

My plan is to submit a related talk to GUADEC which will focus more on
actually building RT apps for the desktop, in the hope that we will
eventually be able to ship a desktop with audio and video that never
skips, and where user feedback is still snappy and quick even if we do
the most complicated IO-intensive processing in lots of different
processes in the background on slow hardware.

I didn't have time to go through all my slides (which I intended that
way, and which is perfectly OK), so you might want to browse through
them even if you saw the whole clip. The slides, however, are not
particularly verbose.

Rumors

Regarding all those rumors that have been spread while I — the
maintainer of PulseAudio — was in the middle of the Australian outback,
fist-fighting with kangaroos near Uluru: I am not really asking anyone
to port their apps to the native PulseAudio API right now. While I do
think the API is quite powerful and not redundant, I also acknowledge
that it is very difficult to use properly (and very easy to misuse),
(mostly) due to its fully asynchronous nature. The mysterious libsydney
project is supposed to fix this and a lot more. libsydney is mostly the
Duke Nukem Forever of audio APIs right now, but in contrast to DNF I
didn't really announce it publicly yet, so it doesn't really count. 😉
Suffice it to say, the current situation of audio APIs is a big big
mess. We are working on cleaning it up. For now: stick to the
well-established and least-broken APIs, which boils down to ALSA. Stop
using the OSS API now! Don't program against the ESD API (except for
event sounds). But, most importantly: please stop misusing the existing
APIs. I am doing my best to allow all current APIs to run without
hassles on top of PA, but due to the sometimes blatant misuses, or even
brutal violations, of those APIs, it is very hard to get that working
for all applications (yes, that means you, Adobe, and you, Skype).
Don't expect that mmap is available on all audio devices — it's not,
and especially not on PA. Don't use /proc/asound/pcm as an API for
enumerating audio devices; it's totally unsuitable for that. Don't
hard-code device strings; use default as the device string. Don't make
assumptions that are not and cannot be true for non-hardware devices.
Don't fiddle around with period settings unless you fully grok them and
know what you are doing. In short: be a better citizen; write code you
don't need to be ashamed of. ALSA has its limitations, and my
compatibility code certainly does as well, but this is not an excuse
for working around them by writing code that makes little children cry.
If you have a good ALSA backend for your program, then this will not
only fix your issues with PA but also with Bluetooth, and you will have
less code to maintain, and code that is much easier to maintain.

Or even shorter: Fix. Your. Broken. ALSA. Client. Code. Thank you.
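
To illustrate what non-broken means in practice, here is a minimal
sketch of a well-behaved ALSA playback client along the lines argued
above: it opens default instead of a hard-coded device, uses plain
interleaved read/write access instead of assuming mmap, and lets
snd_pcm_set_params() pick sane buffering instead of hand-tuning periods:

    /* A "good citizen" ALSA client sketch. Build with -lasound. */
    #include <alsa/asoundlib.h>

    int play_silence(void)
    {
        snd_pcm_t *pcm;
        short frame_buf[2 * 1024] = { 0 };  /* 1024 stereo S16 frames of silence */

        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return -1;

        /* format, access, channels, rate, allow resampling, 100 ms latency */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 44100, 1, 100000) < 0) {
            snd_pcm_close(pcm);
            return -1;
        }

        snd_pcm_writei(pcm, frame_buf, 1024);  /* one buffer of interleaved frames */
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }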

Oh, if you have questions regarding PA, just ping me on IRC (if I am
around) or write me an email, like everyone else. Mysterious, blogged
pseudo-invitations to rumored meetings are not the best way to contact me.
