Tag Archives: torrent

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one-hand clapping: you rarely hear the competing side, and thus the side you do hear can sound more attractive than it deserves. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like Dropbox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it was the government that created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power supposedly grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, only to regulate against the conspiracy-theory abuses. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5-mbps to 25-mbps. That’s because while most consumers have multiple choices at 5-mbps, fewer consumers have multiple choices at 25-mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC actions when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to a slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.
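
The contrast described above can be made concrete with a toy rate-control loop. This is a minimal sketch of the general behavior, not Comcast's or Netflix's actual algorithm; the loss threshold and adjustment factors are made-up numbers.

def adaptive_sender(rate_mbps, loss_rate):
    """Roughly how streaming video behaves: back off when congestion (loss) appears."""
    if loss_rate > 0.01:                     # congestion signal: more than 1% packet loss
        return max(0.5, rate_mbps * 0.7)     # drop to a lower bitrate, floor at 0.5 Mbps
    return rate_mbps + 0.1                   # otherwise probe slowly upward

def swarm_sender(rate_mbps, loss_rate):
    """Roughly how a BitTorrent swarm behaves in aggregate: individual TCP streams
    back off, but many parallel streams keep total demand pinned at the link limit."""
    return rate_mbps

video, torrent = 5.0, 5.0                    # both start at 5 Mbps on a congested link
for _ in range(5):
    video = adaptive_sender(video, loss_rate=0.03)
    torrent = swarm_sender(torrent, loss_rate=0.03)
print(round(video, 2), torrent)              # video has backed off; swarm demand has not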

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC permission first. And under the new heavy-handed FCC rules, the agency has become hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity come from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet” order, or with the vague things people mean by the slogan “net neutrality”.

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/740605/rss

Security updates have been issued by Arch Linux (cacti, curl, exim, lib32-curl, lib32-libcurl-compat, lib32-libcurl-gnutls, lib32-libxcursor, libcurl-compat, libcurl-gnutls, libofx, libxcursor, procmail, samba, shadowsocks-libev, and thunderbird), Debian (tor), Fedora (kernel, moodle, mupdf, python-sanic, qbittorrent, qpid-cpp, and rb_libtorrent), Mageia (git, lame, memcached, nagios, perl-Catalyst-Plugin-Static-Simple, php-phpmailer, shadowsocks-libev, and varnish), openSUSE (binutils, libressl, lynx, openssl, tor, wireshark, and xen), Red Hat (thunderbird), Scientific Linux (kernel, qemu-kvm, and thunderbird), SUSE (kernel, ncurses, openvpn-openssl1, and xen), and Ubuntu (curl, evince, and firefox).

Concerns About The Blockchain Technology

Post Syndicated from Bozho original https://techblog.bozho.net/concerns-blockchain-technology/

The so-called (and marketing-branded) “blockchain technology” is promised to revolutionize every industry. Anything, they say, will become decentralized, free from middlemen or government control. Services will thrive on various incarnations of the blockchain, and smart contracts will automatically enforce any logic that is related to the particular domain.

I don’t mind having another technological leap (after the internet), and given that I’m technically familiar with the blockchain, I may even be part of it. But I’m not convinced it will happen, and I’m not convinced it’s going to be the next internet.

If we strip the hype, the technology behind Bitcoin is indeed a technical masterpiece. It combines existing techniques (like hash chains and Merkle trees) with a very good proof-of-work based consensus algorithm. And it creates a digital currency, which, on top of being worth billions now, is simply cool.
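
For readers unfamiliar with one of those building blocks, here is a minimal sketch of a Merkle root computation in Python. It is illustrative only; Bitcoin's real Merkle trees use double SHA-256 and particular byte-order conventions that are omitted here.

import hashlib

def merkle_root(leaves):
    """Hash the leaves, then collapse the list pairwise until one root remains."""
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())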

But will this technology be mass-adopted, and will mass adoption allow it to retain the technological benefits it has?

First, I’d like to nitpick a little bit – if anyone is speaking about “decentralized software” when referring to “the blockchain”, be suspicious. Bitcoin and other peer-to-peer overlay networks are in fact “distributed” (see the pictures here). “Decentralized” means having multiple providers, but doesn’t mean every user runs a full-featured node on the network. This nitpicking is actually part of another argument, but we’ll get to that.

If blockchain-based applications want to reach mass adoption, they have to be user-friendly. I know I’m being captain obvious here (and fortunately some of the people in the area have realized that), but with the current state of the technology, it’s impossible for end users to even get it, let alone use it.

My first serious concern is usability. To begin with, you need to download the whole blockchain on your machine. When I got my first bitcoin several years ago (when it was still 10 euro), the blockchain was kind of small and I didn’t notice that problem. Nowadays both the Bitcoin and Ethereum blockchains take ages to download. I still haven’t managed to download the Ethereum one – after several bugs and reinstalls of the client, I’m still at 15%. And we are just at the beginning. A user just will not wait for days to download something in order to be able to start using a piece of technology.

I recently proposed that downloading snapshots of the blockchain via BitTorrent be included in the Ethereum protocol itself. I know that snapshots of the Bitcoin blockchain have been distributed that way, but it has been a manual process. If a client can quickly download the huge file up to a recent point, and then only download the latest blocks in the traditional way, starting up may be easier. Of course, the whole chain would have to be verified, but maybe that can be a background process that doesn’t stop you from using whatever is built on top of the particular blockchain. (I’m not sure whether that would be secure enough, and whether, say, potential Sybil attacks on the BitTorrent part wouldn’t make it undesirable; it’s just an idea.)

But even if such an approach works and is adopted, that would still mean that for every service you’d have to download a separate blockchain. Of course, projects like Ethereum may seem like the “one stop shop” for cool blockchain-based applications, but fragmentation is already happening – there are alt-coins bundled with various services like file storage, DNS, etc. That will not be workable for end users. And it’s certainly not an option for mobile, which is the dominant client now. If instead of downloading the entire chain, something like consistent hashing is used to distribute the content in small portions among clients, it might be workable. But how trust would work in that case, I don’t know. Maybe it’s possible, maybe not.
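
Consistent hashing, mentioned above as one way of spreading chain data across clients, can be sketched in a few lines. This is a toy ring with no virtual nodes or replication, purely to show the idea; it is not taken from any blockchain client.

import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: a key maps to the first node at or after its hash."""
    def __init__(self, nodes):
        self._ring = sorted((self._h(n), n) for n in nodes)
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _h(value):
        return int(hashlib.sha256(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        idx = bisect.bisect(self._hashes, self._h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["client-a", "client-b", "client-c"])
print(ring.node_for("chunk-001234"))             # which client would hold this chunk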

And yes, I know that you don’t necessarily have to install a wallet/client in order to make use of a given blockchain – you can just have a cloud-based wallet. Which is fairly convenient, but that gets me to my nitpicking from a few paragraphs above and to my second concern – this effectively turns a distributed system into a decentralized one – a limited number of cloud providers hold most of the data (just as a limited number of miners hold most of the processing power). And then, even though the underlying technology allows for a distributed deployment, we’ll end up again with a merely decentralized or even de facto centralized system, if mergers and acquisitions lead us there (and they probably will). And in order to be able to access our wallets/accounts from multiple devices, we’d use a convenient cloud service where we’d log in with our username and password (because the private key is just too technical and hard for regular users). And that seems to defeat the whole idea.

Not only that, but there is an inevitable centralization of decisions (who decides on the size of the block, who has commit rights to the client repository) as well as a hidden centralization of power – how much GPU power do the Chinese mining “farms” control, and can they influence the network significantly? And will the average user ever know that or care (as they don’t care that Google is centralized)? I think that overall, distributed technologies will follow the power law, and the majority of data/processing power/decision power will be controlled by a minority of actors. And so our distributed utopia will not happen in the pure form we dream of.

My third concern is incentive. Distributed technologies that have been successful so far have a pretty narrow set of incentives. The internet was promoted by large public institutions, including government agencies and big universities. BitTorrent was successful mainly because it allowed free movies and songs with 2 clicks of the mouse. And Bitcoin was successful because it offered financial benefits. I’m oversimplifying of course, but “government effort”, “free & easy” and “source of more money” seem to have been the successful incentives. On the other side of the fence there are dozens of failed distributed technologies. I’ve tried many of them – alternative search engines, alternative file storage, alternative ride-sharing services, alternative social networks, even alternative “internets”. None have gained traction. Because they are not easier to use than their free competitors and you can’t make money out of them (and no government bothers promoting them).

Will blockchain-based services have sufficient incentives to drive customers? Will centralized competitors just easily crush the distributed alternatives by being cheaper and more user-friendly, and by having sales departments that can target more than hardcore geeks who have no problem syncing their blockchain via the command line? The utopian slogans seem very cool to idealists and futurists, but they don’t sell. “Free from centralized control, full control over your data” – we’d have to go through a long process of cultural change before these things make sense to more than a handful of people.

Speaking of services, the examples often include “the sharing economy”, where one stranger offers a service to another stranger. Blockchain technology seems like a good fit here indeed – the services are by nature distributed, so why should the technology be centralized? Here comes my fourth concern – identity. While for the cryptocurrencies it’s actually beneficial to be anonymous, for most real-world services (i.e. the industries that ought to be revolutionized) this is not an option. You can’t just get into the car of publicKey=5389BC989A342…. “But there are already distributed reputation systems”, you may say. Yes, and they are based on technical, not real-world identities. That doesn’t build trust. I don’t trust that publicKey=5389BC989A342… is the same person that got the high reputation. There may be five people behind that private key. The private key may have been stolen (e.g. in a cloud-provider breach).

The value of companies like Uber and Airbnb is that they serve as trust brokers. They verify and vouch for their drivers and hosts (and passengers and guests). They verify their identity through government-issued documents, Skype calls, selfies, comparing pictures to documents, access to government databases, credit records, etc. Can a fully distributed service do that? No. You’d need a centralized provider to do it. And how would the blockchain make any difference then? Well, I may not be entirely correct here. I’ve actually been thinking quite a lot about decentralized identity. E.g. a way to predictably generate a private key based on, say, biometrics+password+government-issued-documents, and use the corresponding public key as your identifier, which is then fed into reputation schemes and ultimately – real-world services. But we’re not there yet.
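
A very rough sketch of the "predictably generate a private key" idea: feed the stable inputs into a slow key-derivation function and use the output as a seed. Real biometric readings are noisy and would need fuzzy extractors, and the parameters below are illustrative rather than a vetted scheme.

import hashlib

def identity_seed(biometric_template, password, document_id):
    """Derive a reproducible 32-byte seed from stable personal inputs. The same
    inputs always yield the same seed, which could in turn seed an Ed25519 keypair
    (e.g. via PyNaCl) whose public key becomes the person's identifier."""
    salt = hashlib.sha256(document_id.encode()).digest()     # document number as salt
    material = biometric_template + password.encode()
    return hashlib.scrypt(material, salt=salt, n=2**14, r=8, p=1, dklen=32)

seed = identity_seed(b"...biometric template bytes...", "correct horse battery staple", "ID-1234567")
print(seed.hex())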

And that is part of my fifth concern – the technology itself. We are not there yet. There are bugs, there are thefts and leaks. There are hard-forks. There isn’t sufficient understanding of the technology (I confess I don’t fully grasp all the implementation details, and they are always the key). Often the technology is advertised as “just working”, but it isn’t. The other day I read an article (lost the link) that clarifies a common misconception about smart contracts – they cannot interact with the outside world – they can’t call APIs (e.g. stock market prices, bank APIs), they can’t push or fetch data from anywhere but the blockchain. That mandates the need, again, for a centralized service that pushes the relevant information before smart contracts can pick it up. I’m pretty sure that all cool-sounding applications are not possible without extensive research. And even if/when they are, writing distributed code is hard. Debugging a smart contract is hard. Yes, hard is cool, but that doesn’t drive economic value.
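
The "centralized service that pushes the relevant information" is what the blockchain world calls an oracle. Below is a hedged sketch of the off-chain half of such a service; the data URL and the submit_to_chain function are placeholders I invented, not a real API.

import json
import urllib.request

PRICE_URL = "https://example.com/api/eurusd"     # placeholder data source

def fetch_price():
    """Off-chain step: read a value that the smart contract cannot reach on its own."""
    with urllib.request.urlopen(PRICE_URL, timeout=10) as resp:
        return float(json.load(resp)["price"])

def submit_to_chain(value):
    """Placeholder for the on-chain step: in practice this would sign and send a
    transaction that calls the contract's update function (e.g. via web3.py)."""
    print("would submit", value, "to the oracle contract")

if __name__ == "__main__":
    submit_to_chain(fetch_price())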

I have mostly been referring to public blockchains so far. Private blockchains may have their practical applications, but there’s one catch – they are not exactly the cool distributed technology that Bitcoin uses. They may be called “blockchains” because they…chain blocks, but they usually centralize trust. For example the Hyperledger project uses PKI, with all its benefits and risks. In these cases, a centralized authority issues the identity “tokens”, and then nodes communicate and form a shared ledger. That’s a somewhat easier problem to solve, and the nodes would usually be on actual servers in real datacenters, and not on your uncle’s Windows XP.

That said, hash chaining has been around for quite a long time. I did research on the matter because of a side project of mine, and it seems providing a tamper-proof/tamper-evident log/database on semi-trusted machines has been discussed in many computer science papers since the 90s. That alone is not “the magic blockchain” that will solve all of our problems, no matter what gossip protocols you sprinkle on top. I’m not saying that’s bad, on the contrary – any variation and combination of the building blocks of the blockchain (the hash chain, the consensus algorithm, the proof-of-work (or stake), possibly smart contracts) has potential for making useful products.
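
As a concrete illustration of that hash-chaining building block, the toy append-only log below makes any retroactive edit detectable, because every entry commits to the hash of the previous one. It is a sketch of the general technique from those papers, not of any particular product.

import hashlib
import json

def append(log, payload):
    """Append an entry whose hash covers the payload plus the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "payload": payload, "hash": digest})

def verify(log):
    """Recompute the chain; any tampered entry breaks every link after it."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["payload"], sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"event": "created", "actor": "alice"})
append(log, {"event": "updated", "actor": "bob"})
log[0]["payload"]["actor"] = "mallory"           # tamper with history
print(verify(log))                               # False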

I know I sound like a naysayer here, but I hope I’ve pointed out particular issues, rather than aimlessly ranting at the hype (though that’s tempting as well). I’m confident that blockchain-like technologies will have their practical applications, and we will see some successful, widely adopted services and solutions based on them, just as pointed out in this detailed report. But I’m not convinced it will be revolutionizing.

I hope I’m proven wrong, though, because watching a revolutionizing technology closely and even being part of it would be quite cool.

The post Concerns About The Blockchain Technology appeared first on Bozho's tech blog.

Defending anti-netneutrality arguments

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/defending-anti-netneutrality-arguments.html

Last week, activists proclaimed a “NetNeutrality Day”, trying to convince the FCC to regulate NetNeutrality. As a libertarian, I tweeted many reasons why NetNeutrality is stupid. NetNeutrality is exactly the sort of government regulation Libertarians hate most. Somebody tweeted the following challenge, which I thought I’d address here.

The links point to two separate cases.

  • the Comcast BitTorrent throttling case
  • a lawsuit against Time Warner for poor service
The tone of the tweet suggests that my anti-NetNeutrality stance cannot be defended in light of these cases. But of course this is wrong. The short answers are:

  • the Comcast BitTorrent throttling benefits customers
  • poor service has nothing to do with NetNeutrality

The long answers are below.

The Comcast BitTorrent Throttling

The presumption is that any sort of packet-filtering is automatically evil, and against the customer’s interests. That’s not true.
Take GoGoInflight’s internet service for airplanes. They block access to video sites like Netflix. That’s because they often have as little as 1-mbps for the entire plane, which is enough to support many people checking email and browsing Facebook, but a single person trying to watch video will overload the internet connection for everyone. Therefore, their Internet service won’t work unless they filter video sites.
GoGoInflight breaks a lot of other NetNeutrality rules, such as providing free access to Amazon.com or promotion deals where users of a particular phone get free Internet access that everyone else pays for. And all this is allowed by the FCC, because it’s clearly in the customer interest.
Comcast’s throttling of BitTorrent is likewise clearly in the customer interest. Until the FCC stopped them, BitTorrent users were allowed unlimited downloads. Afterwards, Comcast imposed a 300-gigabyte/month bandwidth cap.
Internet access is a series of tradeoffs. BitTorrent causes congestion during prime time (6pm to 10pm). Comcast has to solve it somehow — not solving it wasn’t an option. Their options were:
  • Charge all customers more, so that the 99% not using BitTorrent subsidizes the 1% who do.
  • Impose a bandwidth cap, preventing heavy BitTorrent usage.
  • Throttle BitTorrent packets during prime-time hours when the network is congested.
Option 3 is clearly the best. BitTorrent downloads take hours, days, and sometimes weeks. BitTorrent users don’t mind throttling during prime-time congested hours. That’s preferable to the other option, bandwidth caps.
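
In implementation terms, a policy like option 3 is just a time-of-day check applied to a traffic class. A minimal sketch follows; the hours and rates are made-up numbers, not Comcast's actual settings.

from datetime import time

PRIME_TIME = (time(18, 0), time(22, 0))          # 6pm-10pm, the congested window
THROTTLED_CLASSES = {"bittorrent"}

def allowed_rate_kbps(traffic_class, now, full_rate_kbps=10_000):
    """Full speed off-peak; bulk traffic classes get a reduced cap during prime time."""
    start, end = PRIME_TIME
    if traffic_class in THROTTLED_CLASSES and start <= now <= end:
        return full_rate_kbps // 4
    return full_rate_kbps

print(allowed_rate_kbps("bittorrent", time(20, 30)))   # 2500 during prime time
print(allowed_rate_kbps("bittorrent", time(3, 0)))     # 10000 overnight
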
I’m a BitTorrent user, and a heavy downloader (I scan the Internet on a regular basis from cloud machines, then download the results to home, which can often be 100-gigabytes in size for a single scan). I want prime-time BitTorrent throttling rather than bandwidth caps. The EFF/FCC’s action that prevented BitTorrent throttling forced me to move to Comcast Business Class, which doesn’t have bandwidth caps, charging me $100 more a month. It’s why I don’t contribute to the EFF — if they had not agitated for this, taking such choices away from customers, I’d have $1200 more per year to donate to worthy causes.
Ask any user of BitTorrent which they prefer: 300gig monthly bandwidth cap or BitTorrent throttling during prime-time congested hours (6pm to 10pm). The FCC’s action did not help Comcast’s customers, it hurt them. Packet-filtering would’ve been a good thing, not a bad thing.

The Time-Warner Case
First of all, no matter how you define the case, it has nothing to do with NetNeutrality. NetNeutrality is about filtering packets, giving some priority over others. This case is about providing slow service for everyone.
Secondly, it’s not true. Time Warner provided the same access speeds as everyone else. Just because they promise 10mbps download speeds doesn’t mean you get 10mbps to Netflix. That’s not how the Internet works — that’s not how any of this works.
To prove this, look at Netflix’s connection speed graphs. They show Time Warner Cable is average for the industry. It had the same congestion problems most ISPs had in 2014, and it has the same inability to provide more than 3mbps during prime-time (6pm-10pm) that all ISPs have today.

The YouTube video quality diagnostic pages show Time Warner Cable to be similar to other providers around the country. They also show the prime-time bump between 6pm and 10pm.
Congestion is an essential part of the Internet’s design. When an ISP like Time Warner promises you 10mbps bandwidth, that’s only “best effort”. There’s no way they can promise a 10mbps stream to everybody on the Internet, especially not to a site like Netflix that gets overloaded during prime-time.
Indeed, it’s the defining feature of the Internet compared to the old “telecommunications” network. The old phone system guaranteed you a steady 64-kbps stream between any two points in the phone network, but it cost a lot of money. Today’s Internet provides a multi-megabit stream for free video calls (Skype, Facetime) around the world — but with the occasional dropped packets because of congestion.
Whatever lawsuit money-hungry lawyers come up with isn’t about how an ISP like Time Warner works. It’s only about how they describe the technology. They work no differently than every other ISP.
Conclusion

The short answer to the above questions is this: Comcast’s BitTorrent throttling benefits customers, and the Time Warner issue has nothing to do with NetNeutrality at all.

The tweet demonstrates what NetNeutrality really means. It has nothing to do with the facts of any case, especially given how often people point to ISP ills that have nothing actually to do with NetNeutrality. Instead, what NetNeutrality is really about is socialism. People are convinced corporations are evil and want the government to run the Internet. The Comcast/BitTorrent case is a prime example of why this is a bad idea: the government’s definition of what customers want is actually far different from what customers actually want.

A Raspbian desktop update with some new programming tools

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/a-raspbian-desktop-update-with-some-new-programming-tools/

Today we’ve released another update to the Raspbian desktop. In addition to the usual small tweaks and bug fixes, the big new changes are the inclusion of an offline version of Scratch 2.0, and of Thonny (a user-friendly IDE for Python which is excellent for beginners). We’ll look at all the changes in this post, but let’s start with the biggest…

Scratch 2.0 for Raspbian

Scratch is one of the most popular pieces of software on Raspberry Pi. This is largely due to the way it makes programming accessible – while it is simple to learn, it covers many of the concepts that are used in more advanced languages. Scratch really does provide a great introduction to programming for all ages.

Raspbian ships with the original version of Scratch, which is now at version 1.4. A few years ago, though, the Scratch team at the MIT Media Lab introduced the new and improved Scratch version 2.0, and ever since we’ve had numerous requests to offer it on the Pi.

There was, however, a problem with this. The original version of Scratch was written in a language called Squeak, which could run on the Pi in a Squeak interpreter. Scratch 2.0, however, was written in Flash, and was designed to run from a remote site in a web browser. While this made Scratch 2.0 a cross-platform application, which you could run without installing any Scratch software, it also meant that you had to be able to run Flash on your computer, and that you needed to be connected to the internet to program in Scratch.

We worked with Adobe to include the Pepper Flash plugin in Raspbian, which enables Flash sites to run in the Chromium browser. This addressed the first of these problems, so the Scratch 2.0 website has been available on Pi for a while. However, it still needed an internet connection to run, which wasn’t ideal in many circumstances. We’ve been working with the Scratch team to get an offline version of Scratch 2.0 running on Pi.

Screenshot of Scratch on Raspbian

The Scratch team had created a website to enable developers to create hardware and software extensions for Scratch 2.0; this provided a version of the Flash code for the Scratch editor which could be modified to run locally rather than over the internet. We combined this with a program called Electron, which effectively wraps up a local web page into a standalone application. We ended up with the Scratch 2.0 application that you can find in the Programming section of the main menu.

Physical computing with Scratch 2.0

We didn’t stop there though. We know that people want to use Scratch for physical computing, and it has always been a bit awkward to access GPIO pins from Scratch. In our Scratch 2.0 application, therefore, there is a custom extension which allows the user to control the Pi’s GPIO pins without difficulty. Simply click on ‘More Blocks’, choose ‘Add an Extension’, and select ‘Pi GPIO’. This loads two new blocks, one to read and one to write the state of a GPIO pin.

Screenshot of new Raspbian iteration of Scratch 2, featuring GPIO pin control blocks.
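
For readers who want to compare, the same read-a-pin / write-a-pin idea looks like this in Python with the gpiozero library that Raspbian includes; the pin numbers below are just examples.

from gpiozero import LED, Button
from time import sleep

led = LED(17)            # equivalent of the 'write' block: drive GPIO 17
button = Button(2)       # equivalent of the 'read' block: read GPIO 2

while True:
    if button.is_pressed:
        led.on()
    else:
        led.off()
    sleep(0.1)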

The Scratch team kindly allowed us to include all the sprites, backdrops, and sounds from the online version of Scratch 2.0. You can also use the Raspberry Pi Camera Module to create new sprites and backgrounds.

This first release works well, although it can be slow for some operations; this is largely unavoidable for Flash code running under Electron. Bear in mind that you will need to have the Pepper Flash plugin installed (which it is by default on standard Raspbian images). As Pepper Flash is only compatible with the processor in the Pi 2 and Pi 3, it is unfortunately not possible to run Scratch 2.0 on the Pi Zero or the original models of the Pi.

We hope that this makes Scratch 2.0 a more practical proposition for many users than it has been to date. Do let us know if you hit any problems, though!

Thonny: a more user-friendly IDE for Python

One of the paths from Scratch to ‘real’ programming is through Python. We know that the transition can be awkward, and this isn’t helped by the tools available for learning Python. It’s fair to say that IDLE, the Python IDE, isn’t the most popular piece of software ever written…

Earlier this year, we reviewed every Python IDE that we could find that would run on a Raspberry Pi, in an attempt to see if there was something better out there than IDLE. We wanted to find something that was easier for beginners to use but still useful for experienced Python programmers. We found one program, Thonny, which stood head and shoulders above all the rest. It’s a really user-friendly IDE, which still offers useful professional features like single-stepping of code and inspection of variables.

Screenshot of Thonny IDE in Raspbian

Thonny was created at the University of Tartu in Estonia; we’ve been working with Aivar Annamaa, the lead developer, on getting it into Raspbian. The original version of Thonny works well on the Pi, but because the GUI is written using Python’s default GUI toolkit, Tkinter, the appearance clashes with the rest of the Raspbian desktop, most of which is written using the GTK toolkit. We made some changes to bring things like fonts and graphics into line with the appearance of our other apps, and Aivar very kindly took that work and converted it into a theme package that could be applied to Thonny.

Due to the limitations of working within Tkinter, the result isn’t exactly like a native GTK application, but it’s pretty close. It’s probably good enough for anyone who isn’t a picky UI obsessive like me, anyway! Have a look at the Thonny webpage to see some more details of all the cool features it offers. We hope that having a more usable environment will help to ease the transition from graphical languages like Scratch into ‘proper’ languages like Python.

New icons

Other than these two new packages, this release is mostly bug fixes and small version bumps. One thing you might notice, though, is that we’ve made some tweaks to our custom icon set. We wondered if the icons might look better with slightly thinner outlines. We tried it, and they did: we hope you prefer them too.

Downloading the new image

You can either download a new image from the Downloads page, or you can use apt to update:

sudo apt-get update
sudo apt-get dist-upgrade

To install Scratch 2.0:

sudo apt-get install scratch2

To install Thonny:

sudo apt-get install python3-thonny

One more thing…

Before Christmas, we released an experimental version of the desktop running on Debian for x86-based computers. We were slightly taken aback by how popular it turned out to be! This made us realise that this was something we were going to need to support going forward. We’ve decided we’re going to try to make all new desktop releases for both Pi and x86 from now on.

The version of this we released last year was a live image that could run from a USB stick. Many people asked if we could make it permanently installable, so this version includes an installer. This uses the standard Debian install process, so it ought to work on most machines. I should stress, though, that we haven’t been able to test on every type of hardware, so there may be issues on some computers. Please be sure to back up your hard drive before installing it. Unlike the live image, this will erase and reformat your hard drive, and you will lose anything that is already on it!

You can still boot the image as a live image if you don’t want to install it, and it will create a persistence partition on the USB stick so you can save data. Just select ‘Run with persistence’ from the boot menu. To install, choose either ‘Install’ or ‘Graphical install’ from the same menu. The Debian installer will then walk you through the install process.

You can download the latest x86 image (which includes both Scratch 2.0 and Thonny) from here or here for a torrent file.

One final thing

This version of the desktop is based on Debian Jessie. Some of you will be aware that a new stable version of Debian (called Stretch) was released last week. Rest assured – we have been working on porting everything across to Stretch for some time now, and we will have a Stretch release ready some time over the summer.

The post A Raspbian desktop update with some new programming tools appeared first on Raspberry Pi.

Court of Justice of the EU: on access to The Pirate Bay

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/06/15/the-pirate-bay-5/

Yesterday the Court of Justice of the EU published its judgment in Case C‑610/15, Stichting Brein v Ziggo BV and XS4ALL Internet BV.

The judgment concerns the operation of and access to The Pirate Bay.

The dispute

9      Ziggo and XS4ALL are internet access providers. A significant proportion of their subscribers use the online sharing platform TPB, an indexer of BitTorrent files. BitTorrent is a protocol through which users (called “peers”) can share files. The essential characteristic of BitTorrent is that the files to be shared are divided into small segments, removing the need for a central server to store those files, which lightens the load on individual servers during the sharing process. To be able to share files, users must first download special software called a “BitTorrent client”, which is not offered by the online sharing platform TPB. This BitTorrent client is software that allows torrent files to be created.

10      Users (called “seeders”) who wish to make a file on their computer available to other users (called “leechers”) must create a torrent file using their BitTorrent client. Torrent files point to a central server (called a “tracker”) which identifies the users available to share a particular torrent file and the media file associated with it. These torrent files are uploaded by the seeders to an online sharing platform such as TPB, which then indexes them so that they can be found by the users of the platform and so that the works to which those torrent files point can be downloaded onto those users’ computers in separate segments through their BitTorrent client.

11      Magnet links are often used instead of torrents. These links identify the content of the torrent and point to it via a digital fingerprint.

12      The great majority of the torrent files offered on the online sharing platform TPB point to works protected by copyright, without the rightholders having given the administrators and users of that platform permission to carry out the acts of sharing.

13      Stichting Brein’s principal request before the national court is for Ziggo and XS4ALL to be ordered to block the domain names and IP addresses of the online sharing platform TPB, in order to prevent the services of those internet access providers from being used to infringe the copyright and related rights of the rightholders whose interests Stichting Brein protects.

The questions

 In those circumstances, the Hoge Raad der Nederlanden (Supreme Court of the Netherlands) decided to stay the proceedings and refer the following questions to the Court for a preliminary ruling:

“1)      Is there a communication to the public within the meaning of Article 3(1) of Directive 2001/29 by the operator of a website, if no protected works are available on that website, but a system exists […] by means of which metadata on protected works present on the users’ computers is indexed and categorised for users, so that the users can trace, upload and download the protected works on that basis?

2)      If Question 1 is answered in the negative:

Do Article 8(3) of Directive 2001/29 and Article 11 of Directive 2004/48 provide a basis for issuing an injunction against an intermediary within the meaning of those provisions, where that intermediary facilitates infringements by third parties in the manner described in Question 1?”

We already have the Opinion of Advocate General Szpunar, according to which

the fact that the operator of a website indexes files containing copyright-protected works that are offered for sharing on a peer-to-peer network, and provides a search engine allowing those files to be located, constitutes a communication to the public within the meaning of Article 3(1) of Directive 2001/29, where the operator knows that a work has been made available on the network without the consent of the copyright holders but does not take action to block access to that work.

The judgment

The concept of “communication to the public” combines two cumulative elements, namely an “act of communication” of a work and the communication of that work to a “public” (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 29 and the case-law cited). In order to assess whether a user makes an act of communication to the public within the meaning of Article 3(1) of Directive 2001/29, several complementary criteria must be taken into account, which are not autonomous and are interdependent.

  • the essential role played by the user and the deliberate nature of its intervention. The user makes an act of communication when it intervenes, in full knowledge of the consequences of its action, to give its customers access to a protected work, particularly where, in the absence of that intervention, those customers would not, in principle, be able to enjoy the broadcast work (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 31 and the case-law cited).
  • the concept of “public” refers to an indeterminate number of potential recipients and, moreover, implies a fairly large number of persons (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 32 and the case-law cited).
  • the protected work must be communicated using specific technical means, different from those previously used or, failing that, to a “new public”, that is to say, a public that was not already taken into account by the copyright holders when they authorised the initial communication of their work to the public (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 33 and the case-law cited).
  • whether the communication to the public is carried out for profit (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 34 and the case-law cited).

42      In the present case, it is apparent from the order for reference that a significant proportion of Ziggo’s and XS4ALL’s subscribers have downloaded media files via the online sharing platform TPB. As is also clear from the observations submitted to the Court, that platform is used by a considerable number of persons, the TPB administrators reporting, on their online sharing platform, tens of millions of “users”. In that regard, the communication at issue in the main proceedings covers, at the very least, all of that platform’s users. Those users can access the protected works shared by means of that platform at any time and simultaneously. Therefore, that communication is aimed at an indeterminate number of potential recipients and involves a large number of persons (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 45 and the case-law cited).

43      It follows that, by a communication such as that at issue in the main proceedings, protected works are indeed communicated to a “public” within the meaning of Article 3(1) of Directive 2001/29.

44      Furthermore, as regards the question whether those works have been communicated to a “new” public within the meaning of the case-law cited in paragraph 28 of the present judgment, it should be noted that, in its judgment of 13 February 2014, Svensson and Others (C‑466/12, EU:C:2014:76, paragraphs 24 and 31), and in its order of 21 October 2014, BestWater International (C‑348/13, EU:C:2014:2315, paragraph 14), the Court held that such a public is one that the copyright holders did not take into account when they authorised the initial communication.

45      In the present case, it is apparent from the observations submitted to the Court that, on the one hand, the administrators of the online sharing platform TPB knew that the platform, which they make available to users and which they manage, provides access to works published without the authorisation of the rightholders and, on the other hand, that the same administrators expressly state, on the blogs and forums of that platform, their aim of making protected works available to users and encourage the latter to make copies of those works. In any event, it is clear from the order for reference that the administrators of the TPB online platform could not be unaware that the platform provides access to works published without the authorisation of the rightholders, given the fact, expressly highlighted by the referring court, that a very large proportion of the torrent files on the online sharing platform TPB point to works published without the authorisation of the rightholders. In those circumstances, it must be held that there is communication to a “new public” (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 50).

46      Moreover, it cannot be disputed that the making available and management of an online sharing platform such as that at issue in the main proceedings is carried out with the purpose of obtaining a profit, since that platform generates, as is apparent from the observations submitted to the Court, considerable advertising revenue.

47      Consequently, it must be held that the making available and management of an online sharing platform such as that at issue in the main proceedings constitutes a “communication to the public” within the meaning of Article 3(1) of Directive 2001/29.

48      In the light of all the foregoing considerations, the answer to the first question is that the concept of “communication to the public” must be interpreted as covering the making available and management, on the internet, of a sharing platform which, by means of indexation of metadata relating to protected works and the provision of a search engine, allows users of that platform to locate those works and to share them in the context of a peer-to-peer network.

The prevailing commentary is that the judgment strengthens the position of organisations seeking blocking orders.

Filed under: Digital, EU Law, Media Law Tagged: Court of Justice of the EU

John Oliver is wrong about Net Neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/john-oliver-is-wrong-about-net.html

People keep linking to John Oliver bits. We should stop doing this. This is comedy, but people are confused into thinking Oliver is engaging in rational political debate:
Enlightened people know that reasonable people disagree, that there are two sides to any debate. John Oliver’s bit erodes that belief, making one side (your side) sound smart, and the other side sound unreasonable.
The #1 thing you should know about Net Neutrality is that reasonable people disagree. It doesn’t mean they are right, only that they are reasonable. They aren’t stupid. They aren’t shills for the telecom lobby, or confused by the telecom lobby. Indeed, those opposed to Net Neutrality are the tech experts who know how packets are routed, whereas the supporters tend only to be lawyers, academics, and activists. If you think that the anti-NetNeutrality crowd is unreasonable, then you are in a dangerous filter bubble.
Most everything in John Oliver’s piece is incorrect.
For example, he says that without Net Neutrality, Comcast can prefer original shows it produces, and slow down competing original shows by Netflix. This is silly: Comcast already does that, even with NetNeutrality rules.
Comcast owns NBC, which produces a lot of original shows. During prime time (8pm to 11pm), Comcast delivers those shows at 6-mbps to its customers, while Netflix is throttled to around 3-mbps. Because of this, Comcast original shows are seen at higher quality than Netflix shows.
Comcast can do this, even with NetNeutrality rules, because it separates its cables into “channels”. One channel carries public Internet traffic, like Netflix. The other channels carry private Internet traffic, for broadcast TV shows and pay-per-view.
All NetNeutrality means is that if Comcast wants to give preference to its own content/services, it has to do so using separate channels on the wire, rather than pushing everything over the same channel. This is a detail nobody tells you because NetNeutrality proponents aren’t techies. They are lawyers and academics. They maximize moral outrage, while ignoring technical details.
Another example in Oliver’s show is whether search engines like Google or the (hypothetical) Bing can pay to get faster access to customers. They already do that. The average distance a packet travels on the web is less than 100-miles. That’s because the biggest companies (Google, Facebook, Netflix, etc.) pay to put servers in your city close to you. Smaller companies, such as search engine DuckDuckGo.com, also pay third-party companies like Akamai or Amazon Web Services to get closer to you. The smallest companies, however, get poor performance, being a thousand miles away.
You can test this out for yourself. Run a packet-sniffer on your home network for a week, then for each address, use mapping tools like ping and traceroute to figure out how far away things are.
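
A rough way to script that measurement, shelling out to the system ping and treating round-trip time as a proxy for distance (the -c flag is the Linux/macOS form, and the hosts are arbitrary examples):

import re
import subprocess

def avg_rtt_ms(host, count=4):
    """Ping a host and return the average round-trip time in milliseconds, or None."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)    # the min/avg/max summary line
    return float(match.group(1)) if match else None

for host in ["www.google.com", "duckduckgo.com"]:
    print(host, avg_rtt_ms(host), "ms")
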
The Oliver bit mentioned how Verizon banned Google Wallet. Again, technical details are important here. It had nothing to do with Net Neutrality issues blocking network packets, but only had to do with Verizon-branded phones blocking access to the encrypted enclave. You could use Google Wallet on unlocked phones you bought separately. Moreover, market forces won in the end, with Google Wallet (aka. Android Wallet) now the preferred wallet on their network. In other words, this incident shows that the “free market” fixes things in the long run without the heavy hand of government.
Oliver shows a piece where FCC chief Ajit Pai points out that Internet companies didn’t do evil without Net Neutrality rules, and thus NetNeutrality rules were unneeded. Oliver claimed this was a “disingenuous” argument. No, it’s not “disingenuous”, it’s entirely the point of why Net Neutrality is bad. It’s chasing the theoretical possibility of abuse, not the real thing. Sure, Internet companies will occasionally go down misguided paths. If it’s truly bad, customers will rebel. In some cases, it’s not actually a bad thing, and will end up being a benefit to customers (e.g. throttling BitTorrent during primetime would benefit most BitTorrent users). It’s the pro-NetNeutrality side that’s being disingenuous, knowingly trumping up things as problems that really aren’t.
The point is this. The argument here is a complicated one, between reasonable sides. For humor, John Oliver has created a one-sided debate that falls apart under any serious analysis. Those like the EFF should not mistake such humor for intelligent technical debate.

Hacker dumps, magnet links, and you

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/hacker-dumps-magnet-links-and-you.html

In an excellent post pointing out Wikileaks deserves none of the credit given them in the #MacronLeaks, the author erroneously stated that after Archive.org took down the files, Wikileaks provided links to a second archive. This is not true. Instead, Wikileaks simply pointed to what’s known as “magnet links” of the first archive. Understanding magnet links is critical to understanding all these links and dumps, so I thought I’d describe them.

The tl;dr version is this: anything published via BitTorrent has a matching “magnet link” address, and the contents can still be reached via magnet links when the original publisher goes away.

In this case, the leaker uploaded to “archive.org”, a popular Internet archiving resource. This website allows you to either download files directly, which is slow, or via peer-to-peer using BitTorrent, which is fast. As you know, BitTorrent works by all the downloaders exchanging pieces with each other, rather than getting them from the server. I give you a piece you don’t have, in exchange for a piece I don’t have.

BitTorrent, though, still requires a “torrent” (a ~30k file that lists all the pieces) and a “tracker” (http://bt1.archive.org:6969/announce) that keeps a list of all the peers so they can find each other. The tracker also makes sure that every piece is available from at least one peer.

When “archive.org” realized what was happening, they deleted the leaked files and the torrent, and stopped tracking it.

However, BitTorrent has another feature called “magnet links”. This is simply the “hash” of the “torrent” file contents, which looks something like “06724742e86176c0ec82e294d299fba4aa28901a“. (This isn’t a hash of the entire file, but just the important parts, such as the filenames and sizes).
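
To make the relationship concrete: the value in a magnet link is the SHA-1 digest of the bencoded "info" dictionary inside the .torrent file. A minimal sketch in Python, assuming you have already extracted that raw bencoded section (a full bencode parser is omitted, and the example bytes are placeholders):

import hashlib
from urllib.parse import quote

def magnet_link(bencoded_info, display_name=""):
    """Build a magnet URI from the raw bencoded 'info' dictionary of a torrent."""
    infohash = hashlib.sha1(bencoded_info).hexdigest()   # 20 bytes, shown as 40 hex chars
    uri = "magnet:?xt=urn:btih:" + infohash
    if display_name:
        uri += "&dn=" + quote(display_name)
    return uri

print(magnet_link(b"d4:name15:langannerch.rar...e", "langannerch.rar"))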

Along with downloading files, the BitTorrent software on your computer also participates in a “distributed hash” network. When using a torrent file to download, your BitTorrent software still tells other random BitTorrent clients about the hash. Knowledge of this hash thus spreads throughout the BitTorrent world. It’s only 20 bytes in size, so the average BitTorrent client can keep track of millions of such hashes while consuming very little memory or bandwidth.

If somebody decides they want to download the BitTorrent with that hash, they broadcast that request throughout this “distributed hash” network until they find one or more people with the full torrent. They then get the torrent description file from them, and also a list of peers in the “swarm” who are downloading the file.

Thus, when the original torrent description file, the tracker, and the original copy go away, you can still locate the swarm of downloaders through this hash. As long as all the individual pieces exist somewhere in the swarm, you can still successfully download the original file.

In this case, one of the leaked documents was a 2.3 gigabyte file called “langannerch.rar”. The torrent description file, called “langanerch_archive.torrent”, is 26 kilobytes in size. The hash (magnet link) is 20 bytes in size, written “magnet:?xt=urn:btih:06724742e86176c0ec82e294d299fba4aa28901a”. If you’ve got BitTorrent software installed and click on the link, you’ll join the swarm and start downloading the file, even though the original torrent/tracker/files have gone away.
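
If you’re curious how that magnet link is derived, here’s a minimal sketch assuming the third-party bencodepy package (pip install bencode.py): it re-encodes the torrent’s “info” section, takes the SHA-1, and wraps the result in a magnet URI. The filename is just the one from this example.

    import hashlib
    import urllib.parse

    import bencodepy   # third-party; pip install bencode.py

    def magnet_from_torrent(path):
        # A .torrent file is a bencoded dictionary; its "info" sub-dictionary
        # holds the filenames, sizes, and per-piece checksums.
        with open(path, "rb") as f:
            meta = bencodepy.decode(f.read())
        info = meta[b"info"]
        # The 20-byte infohash is the SHA-1 of the re-encoded "info" dictionary.
        infohash = hashlib.sha1(bencodepy.encode(info)).hexdigest()
        name = urllib.parse.quote(info.get(b"name", b"").decode(errors="replace"))
        return f"magnet:?xt=urn:btih:{infohash}&dn={name}"

    print(magnet_from_torrent("langanerch_archive.torrent"))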

According to my BitTorrent client, there are currently 108 people in the swarm downloading this file world-wide. I’m currently connected to 11 of them. Most of them appear to be located in France.

Looking at the General tab, I see that “availability” is 2.95. That means there exist 2.95 complete copies of the download spread across the swarm. In other words, if there are 20 pieces, every piece is held by at least 2 peers, but one piece is held by only 2, while the other 19 have 3 or more copies. This is dangerously small: if those two people leave the network, then a complete copy of the dump will no longer exist in the swarm, and it’ll be impossible to download it all.
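
For the curious, here’s roughly how a client arrives at a number like 2.95. The piece counts below are made up to match the 20-piece example, and real clients may compute the figure slightly differently:

    def availability(counts):
        # One common way clients compute "distributed copies": the integer part
        # is the number of complete copies every piece contributes to; the
        # fraction is the share of pieces with at least one copy beyond that.
        floor = min(counts)
        extra = sum(1 for c in counts if c > floor) / len(counts)
        return floor + extra

    # 20 pieces: one piece held by only 2 peers, the other 19 by 3 or more.
    counts = [3] * 19 + [2]
    print(availability(counts))   # 2.95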

Such dumps can remain popular enough for years after the original tracker/torrent has disappeared, but at some point, a critical piece disappears, and it becomes impossible for anybody to download more than 99.95%, with everyone in the swarm waiting for that last piece. If you read this blogpost 6 months from now, you are likely to see 10 people in the swarm, all stuck at 99.95% complete.

Conclusion


The upshot of this is that it’s hard to censor BitTorrent, because all torrents also exist as magnet links. It took only a couple of hours for Archive.org to take down the tracker/torrents/files, but once complete downloads were out in the swarm, all anybody needed was the hash of the original torrent to create a magnet link to the data. Those magnet links had already been published by many people; the Wikileaks tweet that linked to them was, all things considered, fairly late, since other people had already published them.

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/720203/rss

Security updates have been issued by Debian (libosip2, openoffice.org-dictionaries, and qbittorrent), Fedora (kernel, libpng12, libsndfile, libtiff, mediawiki, mupdf, qt5-qtwebengine, samba, xen, xorgxrdp, and xrdp), Mageia (mediawiki, ming, python-django, unshield, and webkit2), and openSUSE (postgresql93).

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/718385/rss

Security updates have been issued by CentOS (icoutils and openjpeg), Debian (eject, graphicsmagick, libytnef, and tnef), Fedora (drupal8, firefox, kernel, ntp, qbittorrent, texlive, and webkitgtk4), Oracle (bash, coreutils, glibc, gnutls, kernel, libguestfs, ocaml, openssh, qemu-kvm, quagga, samba, samba4, tigervnc, and wireshark), Red Hat (curl), Slackware (mariadb), SUSE (samba), and Ubuntu (apparmor).

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/718262/rss

Security updates have been issued by Debian (eject, gst-plugins-bad1.0, gst-plugins-base1.0, gst-plugins-good1.0, gst-plugins-ugly1.0, gstreamer1.0, php5, and tiff), Fedora (kernel), Gentoo (curl, deluge, libtasn1, and xen-tools), Mageia (mbedtls, putty, and roundcubemail), openSUSE (dbus-1, gegl, mxml, open-vm-tools, partclone, qbittorrent, tcpreplay, and xtrabackup), and Ubuntu (eject, gst-plugins-base0.10, gst-plugins-base1.0, and gst-plugins-good0.10, gst-plugins-good1.0).

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/717368/rss

Security updates have been issued by CentOS (thunderbird), Fedora (ettercap, jasper, qbittorrent, and tcpreplay), Oracle (tomcat6), Red Hat (rabbitmq-server), Slackware (pidgin), SUSE (flash-player), and Ubuntu (libxml2, linux, linux-aws, linux-gke, linux-raspi2, linux-snapdragon, and linux-lts-xenial).

Security advisories for Monday

Post Syndicated from ris original http://lwn.net/Articles/701915/rss

Debian has updated imagemagick (code execution), libarchive (three vulnerabilities), openssl (regression in previous update), and unadf (two vulnerabilities).

Debian-LTS has updated dropbear (two vulnerabilities), dwarfutils (two vulnerabilities), mactelnet (code execution), openssl (multiple vulnerabilities), and policycoreutils (sandbox escape).

Fedora has updated bash (F24; F23: code execution) and firefox (F24; F23: multiple vulnerabilities).

Gentoo has updated bundler (installs malicious gem files) and qemu (multiple vulnerabilities).

Mageia has updated gdk-pixbuf2.0 (denial of service), golang (denial of service), libarchive (file overwrite), libtorrent-rasterbar (denial of service), php (multiple vulnerabilities), and wireshark (multiple vulnerabilities).

openSUSE has updated curl (Leap42.1: multiple vulnerabilities), flash-player (13.1: multiple vulnerabilities), gd (Leap42.1: multiple vulnerabilities), gtk2 (Leap42.1; 13.2: code execution), firefox, nss (Leap42.1, 13.2: multiple vulnerabilities), samba (Leap42.1: crypto downgrade), thunderbird (13.1: multiple vulnerabilities), tiff (13.1: multiple vulnerabilities), and wpa_supplicant (Leap42.1: multiple vulnerabilities).

Slackware has updated php (multiple vulnerabilities).

Ubuntu has updated openssl (regression in previous update).

Security advisories for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/700646/rss

Arch Linux has updated libtorrent-rasterbar (denial of service) and powerdns (denial of service).

Debian has updated mysql-5.5 (SQL injection/privilege escalation).

Fedora has updated gnupg (F23: flawed random number generation), gnutls (F24; F23: certificate verification vulnerability), openjpeg2 (F24: denial of service), thunderbird (F24: unspecified vulnerabilities), and xen (F24: three vulnerabilities).

openSUSE has updated mysql-connector-java (Leap42.1: information disclosure).

Red Hat has updated flash-plugin (RHEL5,6: multiple vulnerabilities).

Slackware has updated mariadb (SQL injection/privilege escalation).

Ubuntu has updated mysql-5.5, mysql-5.7 (SQL injection/privilege escalation) and webkit2gtk (16.04: multiple vulnerabilities).

Security advisories for Monday

Post Syndicated from ris original http://lwn.net/Articles/700383/rss

Arch Linux has updated file-roller (file deletion), graphicsmagick (denial of service), and tomcat8 (redirect HTTP traffic).

Debian has updated openjpeg2 (multiple vulnerabilities) and pdns (multiple denial of service flaws).

Debian-LTS has updated libarchive (two vulnerabilities), qemu (directory/path traversal), and qemu-kvm (directory/path traversal).

Fedora has updated chromium (F24: multiple vulnerabilities), elog (F24; F23: unauthorized posts), phpMyAdmin (F23: multiple vulnerabilities), python-jwcrypto (F24; F23: information disclosure), and slock (F24; F23: screen locking bypass).

openSUSE has updated libtorrent-rasterbar (Leap42.1: denial of service), kernel (Leap42.1: multiple vulnerabilities), and wget (13.2: race condition).

Slackware has updated gnutls (denial of service).

SUSE has updated java-1_7_0-ibm (SOSC5, SMP2.1, SM2.1, SLES11-SP2,3: three vulnerabilities).

Now Available – IPv6 Support for Amazon S3

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-ipv6-support-for-amazon-s3/

As you probably know, every server and device that is connected to the Internet must have a unique IP address. Way back in 1981, RFC 791 (“Internet Protocol”) defined an IP address as a 32-bit entity, with three distinct network and subnet sizes (Classes A, B, and C – essentially large, medium, and small) designed for organizations that needed different numbers of IP addresses. In time, this format came to be seen as wasteful, and the more flexible CIDR (Classless Inter-Domain Routing) format was standardized and put into use. The 32-bit entity (commonly known as an IPv4 address) has served the world well, but the continued growth of the Internet means that all available IPv4 addresses will ultimately be assigned and put to use.

In order to accommodate this growth and to pave the way for future developments, networks, devices, and service providers are now in the process of moving to IPv6. With 128 bits per IP address, IPv6 has plenty of address space (according to my rough calculation, 128 bits is enough to give 3.5 billion IP addresses to every one of the 100 octillion or so stars in the universe). While the huge address space is the most obvious benefit of IPv6, there are other more subtle benefits as well. These include extensibility, better support for dynamic address allocation, and additional built-in support for security.
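
The arithmetic behind that aside is easy to check:

    # Back-of-the-envelope check of the "3.5 billion addresses per star" aside.
    total_addresses = 2 ** 128          # size of the IPv6 address space
    stars = 100 * 10 ** 27              # "100 octillion or so" stars
    print(total_addresses / stars)      # roughly 3.4e9, a few billion per star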

Today I am happy to announce that objects in Amazon S3 buckets are now accessible via IPv6 addresses via new “dual-stack” endpoints. When a DNS lookup is performed on an endpoint of this type, it returns an “A” record with an IPv4 address and an “AAAA” record with an IPv6 address. In most cases the network stack in the client environment will automatically prefer the AAAA record and make a connection using the IPv6 address.

Accessing S3 Content via IPv6
In order to start accessing your content via IPv6, you need to switch to new dual-stack endpoints that look like this:

http://BUCKET.s3.dualstack.REGION.amazonaws.com

or this:

http://s3.dualstack.REGION.amazonaws.com/BUCKET
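
One quick way to see the dual-stack behavior for yourself is to resolve one of these endpoints and look at which address families come back. Here’s a small sketch; the region is only an example:

    import socket

    # Path-style dual-stack host for a region; us-east-1 here is just an example.
    host = "s3.dualstack.us-east-1.amazonaws.com"
    results = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    for family, _type, _proto, _canon, sockaddr in results:
        label = "AAAA (IPv6)" if family == socket.AF_INET6 else "A (IPv4)"
        print(label, sockaddr[0])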

If you are using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell you can use the --enabledualstack flag to switch to the dual-stack endpoints.

We are currently updating the AWS SDKs to support the use_dualstack_endpoint setting and expect to push them out to production by the middle of next week. Until then, refer to the developer guide for your SDK to learn how to enable this feature.
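
Once your SDK picks up the setting, enabling it should look something like this sketch for boto3 (the bucket name is a placeholder, and the exact option spelling may differ by SDK):

    import boto3
    from botocore.config import Config

    # The S3-specific "use_dualstack_endpoint" option is the setting referred to
    # above; an SDK that predates it will reject this config.
    s3 = boto3.client(
        "s3",
        region_name="us-east-1",
        config=Config(s3={"use_dualstack_endpoint": True}),
    )
    print(s3.list_objects_v2(Bucket="BUCKET").get("KeyCount", 0))  # placeholder bucket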

Things to Know
Here are some things that you need to know in order to make a smooth transition to IPv6:

Bucket and IAM Policies – If you use policies to grant or restrict access via IP address, update them to include the desired IPv6 ranges before you switch to the new endpoints. If you don’t do this, clients may incorrectly gain or lose access to the AWS resources. Update any policies that exclude access from certain IPv4 addresses by adding the corresponding IPv6 addresses.
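
For example, a policy that restricts access by source address needs an IPv6 range alongside its IPv4 one. Here is a hedged sketch; the bucket name and CIDR blocks are placeholders:

    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowFromKnownRanges",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET/*",   # placeholder bucket
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "192.0.2.0/24",    # existing IPv4 range
                        "2001:db8::/32",   # corresponding IPv6 range, added
                    ]
                }
            },
        }],
    }

    boto3.client("s3").put_bucket_policy(Bucket="BUCKET", Policy=json.dumps(policy))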

IPv6 Connectivity – Because the network stack will prefer an IPv6 address to an IPv4 address, an unusual situation can arise under certain circumstances. The client system can be configured for IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Be sure to test for end-to-end connectivity before you switch to the dual-stack endpoints.

Log Entries – Log entries will include the IPv4 or IPv6 address, as appropriate. If you analyze your log files using internal or third-party applications, you should ensure that they are able to recognize and process entries that include an IPv6 address.
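
A tiny sketch of the kind of check a log-processing script needs, using the standard-library ipaddress module; the sample addresses are documentation placeholders:

    import ipaddress

    def addr_version(field):
        # Returns 4 or 6 for the remote-address field of a log entry,
        # or None if the field isn't an IP address at all.
        try:
            return ipaddress.ip_address(field).version
        except ValueError:
            return None

    print(addr_version("203.0.113.7"))   # 4
    print(addr_version("2001:db8::7"))   # 6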

S3 Feature Support – IPv6 support is available for all S3 features with the exception of Website Hosting, S3 Transfer Acceleration, and access via BitTorrent.

Region Support – IPv6 support is available in all commercial AWS Regions and in AWS GovCloud (US). It is not available in the China (Beijing) Region.


Jeff;

Tracking the Owner of Kickass Torrents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/tracking_the_ow.html

Here’s the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains: kickasstorrents.com, kat.cr, kickass.to, kat.ph, kastatic.com, thekat.tv and kickass.cr. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally gain a copy of the server’s access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of kickasstorrents.biz, that was Artem Vaulin from Kharkiv, Ukraine.
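
Neither lookup requires anything exotic. Here is a rough sketch of both, using only the standard library; the domain and WHOIS server are generic placeholders, not the ones from the case:

    import socket

    domain = "example.com"

    # Forward lookup, then a reverse (PTR) lookup on the resulting address.
    ip = socket.gethostbyname(domain)
    try:
        print(ip, "->", socket.gethostbyaddr(ip)[0])
    except socket.herror:
        print(ip, "-> no PTR record")

    # WHOIS is just a line of text sent over TCP port 43 (RFC 3912).
    with socket.create_connection(("whois.verisign-grs.com", 43), timeout=10) as s:
        s.sendall((domain + "\r\n").encode())
        response = b""
        while chunk := s.recv(4096):
            response += chunk
    print(response.decode(errors="replace"))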

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It’s this Apple account that appears to tie all of the pieces of Vaulin’s alleged involvement together.

On July 31st 2015, records provided by Apple show that the me.com account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin’s name. That Bitcoin wallet was registered with the same me.com email address.

Another article.

Security advisories for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/693573/rss

Arch Linux has updated libarchive (code execution), libreoffice-fresh (code execution), and xerces-c (denial of service).

Debian-LTS has updated sqlite3 (information leak).

Fedora has updated mingw-xerces-c (F23; F22: three vulnerabilities) and xerces-c (F23; F22: two vulnerabilities).

Mageia has updated gimp (use-after-free), iperf (denial of service), libarchive (multiple vulnerabilities), libgd (multiple vulnerabilities), libtorrent-rasterbar (denial of service), php (multiple vulnerabilities), phpmyadmin (multiple vulnerabilities), pidgin (multiple vulnerabilities), squidguard (cross-site scripting), and xerces-c (denial of service).

openSUSE has updated cronic (Leap42.1, 13.2: predictable temporary files), libircclient (Leap42.1; 13.2: insecure cipher suites), and xerces-c (13.2: code execution).

SUSE has updated xen (SLE11-SP3: multiple vulnerabilities – some from 2013).

Ubuntu has updated gimp (15.10, 14.04, 12.04: use-after-free), libimobiledevice (16.04, 15.10, 14.04: sockets listening on INADDR_ANY), libusbmuxd (16.04, 15.10: sockets listening on INADDR_ANY), and tomcat6, tomcat7 (multiple vulnerabilities).