
ISP Questions Impartiality of Judges in Copyright Troll Cases

Post Syndicated from Andy original https://torrentfreak.com/isp-questions-impartiality-of-judges-in-copyright-troll-cases-180602/

Following in the footsteps of similar operations around the world, two years ago the copyright trolling movement landed on Swedish shores.

The pattern was a familiar one, with trolls harvesting IP addresses from BitTorrent swarms and tracing them back to Internet service providers. Then, after presenting evidence to a judge, the trolls obtained orders that compelled ISPs to hand over their customers’ details. From there, the trolls demanded cash payments to make supposed lawsuits disappear.

It’s a controversial business model that rarely receives outside praise. Many ISPs have tried to slow down the flood but most eventually grow tired of battling to protect their customers. The same cannot be said of Swedish ISP Bahnhof.

The ISP, which is also a strong defender of privacy, has become known for fighting back against copyright trolls. Indeed, to thwart them at the very first step, the company deletes IP address logs after just 24 hours, which prevents its customers from being targeted.

Bahnhof says that the copyright business appeared “dirty and corrupt” right from the get-go, so it now operates Utpressningskollen.se, a web portal where the ISP publishes data on Swedish legal cases in which copyright owners demand customer data from ISPs through the Patent and Market Courts.

Over the past two years, Bahnhof says it has documented 76 cases, of which six are still ongoing, 11 have been waived, and the majority, 59, have been decided in favor of copyright owners, mainly movie companies. Bahnhof says that when it discovered that 59 out of the 76 cases benefited one party, it felt a need to investigate.

In a detailed report compiled by Bahnhof Communicator Carolina Lindahl and sent to TF, the ISP reveals that it examined the individual decision-makers in the cases before the Courts and found five judges with “questionable impartiality.”

“One of the judges, we can call them Judge 1, has closed 12 of the cases, of which two have been waived and the other 10 have benefitted the copyright owner, mostly movie companies,” Lindahl notes.

“Judge 1 apparently has written several articles in the magazine NIR – Nordiskt Immateriellt Rättsskydd (Nordic Intellectual Property Protection) – which is mainly supported by Svenska Föreningen för Upphovsrätt, the Swedish Association for Copyright (SFU).

“SFU is a member-financed group centered around copyright that publishes articles, hands out scholarships, arranges symposiums, etc. On their website they have a public calendar where Judge 1 appears regularly.”

Bahnhof says that the financiers of the SFU are Sveriges Television AB (Sweden’s national public TV broadcaster), Filmproducenternas Rättsförening (a legally-oriented association for film producers), BMG Chrysalis Scandinavia (a media giant) and Fackförbundet för Film och Mediabranschen (a union for the movie and media industry).

“This means that Judge 1 is involved in a copyright association sponsored by the film and media industry, while also judging in copyright cases with the film industry as one of the parties,” the ISP says.

Bahnhof also has criticism for Judge 2, who participated as an event speaker for the Swedish Association for Copyright, and Judge 3, who has written for the SFU-supported magazine NIR. According to Lindahl, Judge 4 worked for a bureau that is partly owned by a board member of SFU, who also defended media companies in a “high-profile” Swedish piracy case.

That leaves Judge 5, who handled 10 of the copyright troll cases documented by Bahnhof, waiving one and deciding the remaining nine in favor of a movie company plaintiff.

“Judge 5 has been questioned before and even been accused of bias while judging a high-profile piracy case almost ten years ago. The accusations of bias were motivated by the judge’s membership of SFU and the Swedish Association for Intellectual Property Rights (SFIR), an association with several important individuals of the Swedish copyright community as members, who all defend, represent, or sympathize with the media industry,” Lindahl says.

Bahnhof hasn’t named any of the judges nor has it provided additional details on the “high-profile” case. However, anyone who remembers the infamous trial of ‘The Pirate Bay Four’ a decade ago might recall complaints from the defense (1,2,3) that several judges involved in the case were members of pro-copyright groups.

While there were plenty of calls to consider them biased, in May 2010 the Supreme Court ruled otherwise, a fact Bahnhof recognizes.

“Judge 5 was never sentenced for bias by the court, but regardless of the court’s decision this is still a judge who shares values and has personal connections with [the media industry], and as if that weren’t enough, the judge has induced an additional financial aspect by participating in events paid for by said party,” Lindahl writes.

“The judge has parties and interest holders in their personal network, a private engagement in the subject and a financial connection to one party – textbook characteristics of bias which would make anyone suspicious.”

The decision-makers of the Patent and Market Court and their relations.

The ISP notes that all five judges have connections to the media industry in the cases they judge, which isn’t a great starting point for returning “objective and impartial” results. In its summary, however, the ISP is scathing of the overall system, one in which court cases “almost looked rigged” and appear to be decided in favor of the movie company even before reaching court.

More generally, Bahnhof says that the processes show a lack of individual attention, such as the court blindly accepting questionable IP address evidence supplied by infamous anti-piracy outfit MaverickEye.

“The court never bothers to control the media company’s only evidence (lists generated by MaverickMonitor, which has proven to be an unreliable software), the court documents contain several typos of varying severity, and the same standard texts are reused in several different cases,” the ISP says.

“The court documents show a lack of care and control, something that can easily be taken advantage of by individuals with shady motives. The findings and discoveries of this investigation are strengthened by the pure numbers mentioned in the beginning which clearly show how one party almost always wins.

“If this is caused by bias, cheating, partiality, bribes, political agenda, conspiracy or pure coincidence we can’t say for sure, but the fact that this process has mainly generated money for the film industry, while citizens have been robbed of their personal integrity and legal certainty, indicates what forces lie behind this machinery,” Bahnhof’s Lindahl concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

AskRob: Does Tor let government peek at vuln info?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/askrob-does-tor-let-government-peek-at.html

On Twitter, somebody asked this question:

The question is about a blog post that claims Tor privately tips off the government about vulnerabilities, using as proof a “vulnerability” from October 2007 that wasn’t made public until 2011.
The tl;dr is that it’s bunk. There was no vulnerability, it was a feature request. The details were already public. And there was no spy agency involved: the agency in question is the one that runs Voice of America, and which tries to protect activists under repressive foreign regimes.

Discussion

The issue is that Tor traffic looks like Tor traffic, making it easy to block/censor, or worse, identify users. Over the years, Tor has added features to make it look more and more like normal traffic, like the encrypted traffic used by Facebook, Google, and Apple. Tor improves this bit by bit over time, but short of actually piggybacking on website traffic, it will always leave some telltale signature.
An example showing how we can distinguish Tor traffic is the packet below, from the latest version of the Tor server:
Had this been Google or Facebook, the names would be something like “www.google.com” or “facebook.com”. Or, had this been a normal “self-signed” certificate, the names would still be recognizable. But Tor creates randomized names, with letters and numbers, making it distinctive. It’s hard to automate detection of this, because it’s only probably Tor (other self-signed certificates look like this, too), which means you’ll have occasional “false-positives”. But still, if you compare this to the pattern of traffic, you can reliably detect that Tor is happening on your network.
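As a rough illustration of the kind of heuristic this implies, here is a toy Python sketch. It is not Tor’s actual naming scheme or any real IDS signature: the hostname pattern, entropy threshold, and sample names are all assumptions made for demonstration. And, as noted above, such a check is only probabilistic and would produce false positives on other self-signed certificates.

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_like_tor_cert_name(name: str, threshold: float = 3.0) -> bool:
    """Flag certificate hostnames shaped like randomized labels:
    'www.<letters/digits>.com|net' with a high-entropy middle label.
    Real sites tend to use dictionary-like, lower-entropy labels."""
    m = re.fullmatch(r"www\.([a-z0-9]{8,25})\.(com|net)", name)
    if m is None:
        return False
    return shannon_entropy(m.group(1)) >= threshold

# Illustrative names (made up for this sketch):
print(looks_like_tor_cert_name("www.x7rb2kqp4mzt.net"))  # True
print(looks_like_tor_cert_name("www.google.com"))        # False
```

A real detector would combine something like this with traffic-pattern features, since the certificate name alone is only weak evidence.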
This has always been a known issue, since the earliest days. Google the search term “detect tor traffic”, and set your advanced search dates to before 2007, and you’ll see lots of discussion about this, such as this post for writing intrusion-detection signatures for Tor.
Among the things you’ll find is this presentation from 2006 where its creator (Roger Dingledine) talks about how Tor can be identified on the network with its unique network fingerprint. For a “vulnerability” they supposedly kept private until 2011, they were awfully darn public about it.
The above blogpost claims Tor kept this vulnerability secret until 2011, citing this message. That’s because Levine doesn’t understand the terminology and is just blindly searching for an exact match for “TLS normalization”. Here’s an earlier proposed change toward the long-term goal to “make our connection handshake look closer to a regular HTTPS [TLS] connection”, from February 2007. Here is another proposal from October 2007 on changing TLS certificates, from days after the email discussion (after they shipped the feature, presumably).
What we see here is a known problem from the very beginning of the project, a long-term effort to fix that problem, and a slow dribble of features added over time to preserve backwards compatibility.
Now let’s talk about the original train of emails cited in the blogpost. It’s hard to see the full context here, but it sounds like BBG made a feature request to make Tor look even more like normal TLS, which is hinted at by the phrase “make our funders happy”. Of course the people giving Tor money are going to ask for improvements, and of course Tor would in turn discuss those improvements with the donor before implementing them. It’s common in project management: somebody sends you a feature request, you then send the proposal back to them to verify what you are building is what they asked for.
As for the subsequent salacious paragraph about “secrecy”, that too is normal. When fixing a problem, you don’t want to talk about the details until after you have a fix. But note that this is largely more for PR than anything else. The details on how to detect Tor are available to anybody who looks for them — they just aren’t readily accessible to the layman. For example, Tenable Networks had announced exactly this ability to detect Tor’s traffic the previous month, because the details were there for any techy who went looking. Indeed, Tenable’s announcement may have been the impetus for BBG’s request to Tor: “can you fix it so that this new Tenable feature no longer works”.
To be clear, there are zero secret “vulnerability details” here that some secret spy agency could use to detect Tor. They were already known, already in the Tenable product, and within the grasp of any techy who wanted to discover them. A spy agency could just buy Tenable, or copy it, instead of going through this intricate conspiracy.

Conclusion

The issue isn’t a “vulnerability”. Tor traffic is recognizable on the network, and over time, they make it less and less recognizable. Eventually they’ll just piggyback on true HTTPS and convince CloudFlare to host ingress nodes, or something, making it completely undetectable. In the meantime, it leaves behind fingerprints, as I showed above.
What we see in the email exchanges is the normal interaction of a donor asking for a feature, not a private “tip off”. It’s likely the donor is the one who tipped off Tor, pointing out Tenable’s product to detect Tor.
Whatever secrets Tor could have tipped off to the “secret spy agency” were no more than what Tenable was already doing in a shipping product.

Update: People are trying to make it look like Voice of America is some sort of intelligence agency. That’s a conspiracy theory. It’s not a member of the American intelligence community. You’d have to come up with a solid reason explaining why the United States is hiding VoA’s membership in the intelligence community, or you’d have to believe that everything in the U.S. government is really just some arm of the C.I.A.

Kim Dotcom Begins New Fight to Avoid Extradition to United States

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-begins-new-fight-to-avoid-extradition-to-united-states-180212/

More than six years ago in January 2012, file-hosting site Megaupload was shut down by the United States government and founder Kim Dotcom and his associates were arrested in New Zealand.

What followed was an epic legal battle to extradite Dotcom, Mathias Ortmann, Finn Batato, and Bram van der Kolk to the United States to face several counts including copyright infringement, racketeering, and money laundering. Dotcom has battled the US government every inch of the way.

The most significant matters include the validity of the search warrants used to raid Dotcom’s Coatesville home on January 20, 2012. Despite a prolonged trip through the legal system, in 2014 the Supreme Court dismissed Dotcom’s appeals that the search warrants weren’t valid.

In 2015, the District Court ruled that Dotcom and his associates were eligible for extradition. A subsequent appeal to the High Court failed in February 2017 when, despite a finding that communicating copyright-protected works to the public is not a criminal offense in New Zealand, a judge ruled in favor of extradition.

Of course, Dotcom and his associates immediately filed appeals and today in the Court of Appeal in Wellington, their hearing got underway.

Lawyer Grant Illingworth, representing Van der Kolk and Ortmann, told the Court that the case had “gone off the rails” during the initial 10-week extradition hearing in 2015, arguing that the case had merited “meaningful” consideration by a judge, something which failed to happen.

“It all went wrong. It went absolutely, totally wrong,” Mr. Illingworth said. “We were not heard.”

As expected, Illingworth underlined the belief that under New Zealand law, a person may only be extradited for an offense that could be tried in a criminal court locally. His clients’ cases do not meet that standard, the lawyer argued.

Turning back the clocks more than six years, Illingworth again raised the thorny issue of the warrants used to authorize the raids on the Megaupload defendants.

It had previously been established that New Zealand’s GCSB intelligence service had illegally spied on Dotcom and his associates in the lead up to their arrests. However, that fact was not disclosed to the District Court judge who authorized the raids.

“We say that there was misleading conduct at this stage because there was no reference to the fact that information had been gathered illegally by the GCSB,” he said.

But according to Justice Forrest Miller, even if this defense argument holds up, the High Court had already found there was a prima facie case to answer “with bells on”.

“The difficulty that you face here ultimately is whether the judicial process that has been followed in both of the courts below was meaningful, to use the Canadian standard,” Justice Miller said.

“You’re going to have to persuade us that what Justice Gilbert [in the High Court] ended up with, even assuming your interpretation of the legislation is correct, was wrong.”

Although the US seeks to extradite Dotcom and his associates on 13 charges, including racketeering, copyright infringement, money laundering and wire fraud, the Court of Appeal previously confirmed that extradition could be granted based on just some of the charges.

The stakes couldn’t be much higher. The FBI says that the “Megaupload Conspiracy” earned the quartet $175m and if extradited to the US, they could face decades in jail.

While Dotcom was not in court today, he has been active on Twitter.

“The court process went ‘off the rails’ when the only copyright expert Judge in NZ was >removed< from my case and replaced by a non-tech Judge who asked if Mega was ‘cow storage’. He then simply copy/pasted 85% of the US submissions into his judgment,” Dotcom wrote.

Dotcom also appeared to question the suitability of judges at both the High Court and Court of Appeal for the task in hand.

“Justice Miller and Justice Gilbert (he wrote that High Court judgment) were business partners at the law firm Chapman Tripp which represents the Hollywood Studios in my case. Both Judges are now at the Court of Appeal. Gilbert was promoted shortly after ruling against me,” Dotcom added.

Dotcom is currently suing the New Zealand government for billions of dollars in damages over the warrant which triggered his arrest and the demise of Megaupload.

The hearing is expected to last up to two-and-a-half weeks.


Denuvo Has Been Sold to Global Anti-Piracy Outfit Irdeto

Post Syndicated from Andy original https://torrentfreak.com/denuvo-has-been-sold-to-global-anti-piracy-outfit-irdeto-180123/

It’s fair to say that of all video game anti-piracy technologies, Denuvo is perhaps the most hated of recent times. That hatred unsurprisingly stems from both its success and its complexity.

Those with knowledge of the system say it’s fiendishly difficult to defeat but in recent times, cracks have been showing. In 2017, various iterations of the anti-tamper system were defeated by several cracking groups, much to the delight of the pirate masses.

Now, however, a new development has the potential to give the Austria-based anti-piracy company a new lease of life. A few moments ago it was revealed that the company has been bought by Irdeto, a global anti-piracy outfit with considerable heritage and resources.

“Irdeto has acquired Denuvo, the world leader in gaming security, to provide anti-piracy and anti-cheat solutions for games on desktop, mobile, console and VR devices,” Irdeto said in a statement.

“Denuvo provides technology and services for game publishers and platforms, independent software vendors, e-publishers and video publishers across the globe. Current Denuvo customers include Electronic Arts, UbiSoft, Warner Bros and Lionsgate Entertainment, with protection provided for games such as Star Wars Battlefront II, Football Manager, Injustice 2 and others.”

Irdeto says that Denuvo will “continue to operate as usual” with all of its staff retained – a total of 45 across Austria, Poland, the Czech Republic, and the US. Denuvo headquarters in Salzburg, Austria, will also remain intact along with its sales operations.

“The success of any game title is dependent upon the ability of the title to operate as the publisher intended,” says Irdeto CEO Doug Lowther.

“As a result, protection of both the game itself and the gaming experience for end users is critical. Our partnership brings together decades of security expertise under one roof to better address new and evolving security threats. We are looking forward to collaborating as a team on a number of initiatives to improve our core technology and services to better serve our customers.”

Denuvo was founded relatively recently, in 2013, and employs fewer than 50 people. In contrast, Irdeto’s roots go all the way back to 1969, and the company currently has almost 1,000 staff. It’s a subsidiary of South Africa-based Internet and media group Naspers, a corporate giant with dozens of notable companies under its control.

While Denuvo is perhaps best known for its anti-piracy technology, Irdeto is also placing emphasis on the company’s ability to hinder cheating in online multi-player gaming environments. This has become a hot topic recently, with several lawsuits filed in the US by companies including Blizzard and Epic.

Denuvo CEO Reinhard Blaukovitsch

“Hackers and cybercriminals in the gaming space are savvy, and always have been. It is critical to implement robust security strategies to combat the latest gaming threats and protect the investment in games. Much like the movie industry, it’s the only way to ensure that great games continue to get made,” says Denuvo CEO Reinhard Blaukovitsch.

“In joining with Irdeto, we are bringing together a unique combination of security expertise, technology and enhanced piracy services to aggressively address security challenges that customers and gamers face from hackers.”

While it seems likely that the companies have been in negotiations for some time, the timing of this announcement also coincides with negative news for Denuvo.

Yesterday it was revealed that the latest variant of its anti-tamper technology – Denuvo v4.8 – had been defeated by online cracking group CPY (Conspiracy). Version 4.8 had been protecting Sonic Forces since its release in early November 2017, but the game was leaked onto the Internet late Sunday with all protection neutralized.

Sonic Forces cracked by CPY

Irdeto has a long history of acquiring anti-piracy companies and technologies. They include Lockstream (DRM for content on mobile phones), Philips Cryptoworks (DVB conditional access system), Cloakware (various security), Entriq (media protection), BD+ (Blu-ray protection), and BayTSP (anti-piracy monitoring).

It’s also noteworthy that Irdeto supplied behind-the-scenes support in two of the largest IPTV provider raids of recent times, one focused on Spain in 2017 and more recently in Cyprus, Bulgaria, Greece and the Netherlands (1,2,3).


Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one hand clapping: you rarely hear the competing side, so the side you do hear may sound more attractive than it deserves. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it was government that created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, only to regulate against these imagined evils. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5Mbps to 25Mbps. That’s because while most consumers have multiple choices at 5Mbps, fewer consumers have multiple choices at 25Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC actions when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to a slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.
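A toy model makes the throttling-versus-caps distinction concrete. The sketch below (hypothetical numbers, not Comcast’s actual mechanism) is a minimal token-bucket shaper, the classic primitive behind this kind of throttling: it slows a greedy BitTorrent-like flow for the duration of the congestion, but unlike a monthly bandwidth cap it never limits total volume.

```python
class TokenBucket:
    """Toy token-bucket shaper: tokens refill at `rate` bytes/sec up to
    `burst`, and a packet is passed only if enough tokens are available.
    This limits instantaneous throughput, but imposes no monthly volume
    limit the way a bandwidth cap does."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # refill rate, bytes per second
        self.burst = burst    # bucket capacity, bytes
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float, nbytes: int) -> bool:
        # Refill for the elapsed time, then try to spend tokens.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Shape a greedy flow offering 150 kB/s down to roughly 100 kB/s.
# All numbers are made up for illustration.
bucket = TokenBucket(rate=100_000, burst=200_000)
sent = sum(1500 for t in range(1000) if bucket.allow(t * 0.01, 1500))
# Over the 10 s simulation, `sent` is bounded by burst + rate * elapsed time.
```

The design point: once the congested period ends, the shaped user has lost nothing for the rest of the month, whereas a cap penalizes total usage no matter when it occurred.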

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC for permission first. And with the new heavy-handed FCC rules, they’ve become hostile to this innovation. This attitude is highlighted by the statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

Sky: People Can’t Pirate Live Soccer in the UK Anymore

Post Syndicated from Andy original https://torrentfreak.com/sky-people-cant-pirate-live-soccer-in-the-uk-anymore-171108/

The commotion over the set-top box streaming phenomenon is showing no signs of dying down and if day one at the Cable and Satellite Broadcasting Association of Asia (CASBAA) Conference 2017 was anything to go by, things are only heating up.

Held at Studio City in Macau, the conference has a strong anti-piracy element and was opened by Joe Welch, CASBAA Board Chairman and SVP Public Affairs Asia, 21st Century Fox. He began Tuesday by noting the important recent launch of a brand new anti-piracy initiative.

“CASBAA recently launched the Coalition Against Piracy, funded by 18 of the region’s content players and distribution partners,” he said.

TF reported on the formation of the coalition mid-October. It includes heavyweights such as Disney, Fox, HBO, NBCUniversal and BBC Worldwide, and will have a strong focus on the illicit set-top box market.

Illegal streaming devices (or ISDs, as the industry calls them), were directly addressed in a segment yesterday afternoon titled Face To Face. Led by Dr. Ros Lynch, Director of Copyright & IP Enforcement at the UK Intellectual Property Office, the session detailed the “onslaught of online piracy” and the rise of ISDs that is apparently “shaking the market”.

Given the apparent gravity of those statements, the following will probably come as a surprise. According to Lynch, the UK IPO sought the opinion of UK-based rightsholders about the pirate box phenomenon a while back after being informed of their popularity in the East. The response was that pirate boxes weren’t an issue. It didn’t take long, however, for things to blow up.

“The UKIPO provides intelligence and evidence to industry and the Police Intellectual Property Crime Unit (PIPCU) in London who then take enforcement actions,” Lynch explained.

“We first heard about the issues with ISDs from [broadcaster] TVB in Hong Kong and we then consulted the UK rights holders who responded that it wasn’t a problem. Two years later the issue just exploded.”

The evidence of that in the UK isn’t difficult to find. In addition to millions of devices deployed with both free Kodi addons and subscription-based systems, the app market has bloomed too, offering free or nearly free content to all.

This caught the eye of the Premier League who this year obtained two pioneering injunctions (1,2) to tackle live streams of football games. Streams are blocked by local ISPs in real-time, making illicit online viewing a more painful experience than it ever has been. No doubt progress has been made on this front, with thousands of streams blocked, but according to broadcaster Sky, the results are unprecedented.

“Site-blocking has moved the goalposts significantly,” said Matthew Hibbert, head of litigation at Sky UK.

“In the UK you cannot watch pirated live Premier League content anymore,” he said.

While progress has been good, the statement is overly enthusiastic. TF sources have been monitoring the availability of pirate streams on around a dozen illicit sites and services every Saturday (when it is actually illegal to broadcast matches in the UK) and service has been steady on around half of them and intermittent at worst on the rest.

There are hundreds of other platforms available so while many are definitely affected by Premier League blocking, it’s safe to assume that live football piracy hasn’t been wiped out. Nevertheless, it would be wrong to suggest that no progress has been made, in this and other related areas.

Kevin Plumb, Director of Legal Services at The Premier League, said that pubs showing football from illegal streams had also massively dwindled in numbers.

“In the past 18 months the illegal broadcasting of live Premier League matches in pubs in the UK has been decimated,” he said.

This result is almost certainly down to prosecutions taken in tandem with the Federation Against Copyright Theft (FACT), that have seen several landlords landed with large fines. Indeed, both sides of the market have been tackled, with both licensed premises and IPTV device sellers being targeted.

“The most successful thing we’ve done to combat piracy has been to undertake criminal prosecutions against ISD piracy,” said FACT chief Kieron Sharp yesterday. “Everyone is pleading guilty to these offenses.”

Most if not all FACT-led prosecutions target device and subscription sellers under fraud legislation, but that could change in the future, Lynch of the Intellectual Property Office said.

“While the UK works to update its legislation, we can’t wait for the new legislation to take enforcement actions and we rely heavily on ‘conspiracy to defraud’ charges, and have successfully prosecuted a number of ISD retailers,” she said.

Finally, information provided yesterday by networking company Cisco shed light on what it costs to run a subscription-based pirate IPTV operation.

Director of Intelligence & Security Operations Avigail Gutman said a pirate IPTV server offering 1,000 channels to around 1,000 subscribers can cost as little as 2,000 euros per month to run but can generate 12,000 euros in revenue during the same period.

“In April of 2017, ten major paid TV and content providers had relinquished 3.09 million euros per month to 285 ISD-based streaming pirate syndicates,” she said.
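As a quick sanity check, the figures quoted above can be worked through (using only the numbers reported here):

```python
# Back-of-the-envelope check of the Cisco figures quoted above:
# ~2,000 EUR/month in costs, 12,000 EUR/month revenue, 1,000 subscribers.
cost_eur, revenue_eur, subscribers = 2_000, 12_000, 1_000

profit_eur = revenue_eur - cost_eur         # monthly profit
margin = profit_eur / revenue_eur           # profit margin
price_per_sub = revenue_eur / subscribers   # implied monthly fee per subscriber

print(profit_eur, round(margin, 2), price_per_sub)
# -> 10000 0.83 12.0
```

An 83% margin on an implied 12-euro monthly fee helps explain why the trade persists despite enforcement pressure.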

There’s little doubt that IPTV piracy, both paid and free, is here to stay. The big question is how it will be tackled short and long-term and whether any changes in legislation will have any unintended knock-on effects.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New UK IP Crime Report Reveals Continued Focus on ‘Pirate’ Kodi Boxes

Post Syndicated from Andy original https://torrentfreak.com/new-uk-ip-crime-report-reveals-continued-focus-on-pirate-kodi-boxes-170908/

The UK’s Intellectual Property Office has published its annual IP Crime Report, spanning the period 2016 to 2017.

It covers key events in the copyright and trademark arenas and is presented with input from the police and trading standards, plus private entities such as the BPI, Premier League, and Federation Against Copyright Theft, to name a few.

The report begins with an interesting statistic. Despite claims that many millions of UK citizens regularly engage in some kind of infringement, figures from the Ministry of Justice indicate that just 47 people were found guilty of offenses under the Copyright, Designs and Patents Act during 2016. That’s down from the 69 found guilty in the previous year.

Despite this low conviction rate, 15% of all internet users aged 12+ are reported to have consumed at least one item of illegal content between March and May 2017. Figures supplied by the Industry Trust for IP indicate that 19% of adults watch content via various IPTV devices – often referred to as set-top, streaming, Android, or Kodi boxes.

“At its cutting edge IP crime is innovative. It exploits technological loopholes before they become apparent. IP crime involves sophisticated hackers, criminal financial experts, international gangs and service delivery networks. Keeping pace with criminal innovation places a burden on IP crime prevention resources,” the report notes.

The report covers a broad range of IP crime, from counterfeit sportswear to foodstuffs, but our focus is obviously on Internet-based infringement. Various contributors cover various aspects of online activity as it affects them, including music industry group BPI.

“The main online piracy threats to the UK recorded music industry at present are from BitTorrent networks, linking/aggregator sites, stream-ripping sites, unauthorized streaming sites and cyberlockers,” the BPI notes.

The BPI’s website blocking efforts have been closely reported, with 63 infringing sites blocked to date via various court orders. However, the BPI reports that more than 700 related URLs, IP addresses, and proxy sites/aggregators have also been rendered inaccessible as part of the same action.

“Site blocking has proven to be a successful strategy as the longer the blocks are in place, the more effective they are. We have seen traffic to these sites reduce by an average of 70% or more,” the BPI reports.

While prosecutions against music pirates are a fairly rare event in the UK, the Crown Prosecution Service (CPS) Specialist Fraud Division highlights that their most significant prosecution of the past 12 months involved a prolific music uploader.

As first revealed here on TF, Wayne Evans was an uploader not only on KickassTorrents and The Pirate Bay, but also some of his own sites. Known online as OldSkoolScouse, Evans reportedly cost the UK’s Performing Rights Society more than £1m in a single year. He was sentenced in December 2016 to 12 months in prison.

While Evans has been free for some time already, the CPS places particular emphasis on the importance of the case, “since it provided sentencing guidance for the Copyright, Designs and Patents Act 1988, where before there was no definitive guideline.”

The CPS says the case was useful on a number of fronts. Despite illegal distribution of content being difficult to investigate and piracy losses proving tricky to quantify, the court found that deterrent sentences are appropriate for the kinds of offenses Evans was accused of.

The CPS notes that various factors affect the severity of such sentences, not least the length of time the unlawful activity has persisted and particularly if it has done so after the service of a cease and desist notice. Other factors include the profit made by defendants and/or the loss caused to copyright holders “so far as it can accurately be calculated.”

Importantly, however, the CPS says that beyond issues of personal mitigation and timely guilty pleas, a jail sentence is probably going to be the outcome for others engaging in this kind of activity in future. That’s something for torrent and streaming site operators and their content uploaders to consider.

“[U]nless the unlawful activity of this kind is very amateur, minor or short-lived, or in the absence of particularly compelling mitigation or other exceptional circumstances, an immediate custodial sentence is likely to be appropriate in cases of illegal distribution of copyright infringing articles,” the CPS concludes.

But while a music-related trial provided the highlight of the year for the CPS, the online infringement world is still dominated by the rise of streaming sites and the now omnipresent “fully-loaded Kodi Box” – set-top devices configured to receive copyright-infringing live TV and VOD.

In the IP Crime Report, the Intellectual Property Office references a former US Secretary of Defense to describe the emergence of the threat.

“The echoes of Donald Rumsfeld’s famous aphorism concerning ‘known knowns’ and ‘known unknowns’ reverberate across our landscape perhaps more than any other. The certainty we all share is that we must be ready to confront both ‘known unknowns’ and ‘unknown unknowns’,” the IPO writes.

“Not long ago illegal streaming through Kodi Boxes was an ‘unknown’. Now, this technology updates copyright infringement by empowering TV viewers with the technology they need to subvert copyright law at the flick of a remote control.”

While the set-top box threat has grown in recent times, the report highlights the important legal clarifications that emerged from the BREIN v Filmspeler case, which found itself before the European Court of Justice.

As widely reported, the ECJ determined that the selling of piracy-configured devices amounts to a communication to the public, something which renders their sale illegal. However, in a submission by PIPCU, the Police Intellectual Property Crime Unit, box sellers are said to cast a keen eye on the legal situation.

“Organised criminals, especially those in the UK who distribute set-top boxes, are aware of recent developments in the law and routinely exploit loopholes in it,” PIPCU reports.

“Given recent judgments on the sale of pre-programmed set-top boxes, it is now unlikely criminals would advertise the devices in a way which is clearly infringing by offering them pre-loaded or ‘fully loaded’ with apps and addons specifically designed to access subscription services for free.”

With sellers beginning to clean up their advertising, it seems likely that detection will become more difficult than when selling was considered a gray area. While that will present its own issues, PIPCU still sees problems on two fronts – a lack of clear legislation and a perception of support for ‘pirate’ devices among the public.

“There is no specific legislation currently in place for the prosecution of end users or sellers of set-top boxes. Indeed, the general public do not see the usage of these devices as potentially breaking the law,” the unit reports.

“PIPCU are currently having to try and ‘shoehorn’ existing legislation to fit the type of criminality being observed, such as conspiracy to defraud (common law) to tackle this problem. Cases are yet to be charged and results will be known by late 2017.”

Whether these prosecutions will be effective remains to be seen, but PIPCU’s comments suggest an air of caution set to a backdrop of box-sellers’ tendency to adapt to legal challenges.

“Due to the complexity of these cases it is difficult to substantiate charges under the Fraud Act (2006). PIPCU have convicted one person under the Serious Crime Act (2015) (encouraging or assisting s11 of the Fraud Act). However, this would not be applicable unless the suspect had made obvious attempts to encourage users to use the boxes to watch subscription only content,” PIPCU notes, adding;

“The selling community is close knit and adapts constantly to allow itself to operate in the gray area where current legislation is unclear and where they feel they can continue to sell ‘under the radar’.”

More generally, pirate sites as a whole are still seen as a threat. As reported last month, the current anti-piracy narrative is that pirate sites represent a danger to their users. As a result, efforts are underway to paint torrent and streaming sites as risky places to visit, with users allegedly exposed to malware and other malicious content. The scare strategy is supported by PIPCU.

“Unlike the purchase of counterfeit physical goods, consumers who buy unlicensed content online are not taking a risk. Faulty copyright doesn’t explode, burn or break. For this reason the message as to why the public should avoid copyright fraud needs to be re-focused.

“A more concerted attempt to push out a message relating to malware on pirate websites, the clear criminality and the links to organized crime of those behind the sites are crucial if public opinion is to be changed,” the unit advises.

But while the changing of attitudes is desirable for pro-copyright entities, PIPCU says that winning over the public may not prove to be an easy battle. It was given a small taste of backlash itself, after taking action against the operator of a pirate site.

“The scale of the problem regarding public opinion of online copyright crime is evidenced by our own experience. After PIPCU executed a warrant against the owner of a streaming website, a tweet about the event (read by 200,000 people) produced a reaction heavily weighted against PIPCU’s legitimate enforcement action,” PIPCU concludes.

In summary, it seems likely that more effort will be expended during the next 12 months to target the set-top box threat, but there doesn’t appear to be an abundance of confidence in existing legislation to tackle all but the most egregious offenders. That being said, a line has now been drawn in the sand – if the public is prepared to respect it.

The full IP Crime Report 2016-2017 is available here (pdf)


BitTorrent Users Form The World’s Largest Criminal Enterprise, Lawyer Says

Post Syndicated from Andy original https://torrentfreak.com/bittorrent-users-form-the-worlds-largest-criminal-enterprise-lawyer-says-170731/

As the sharing of copyrighted material on the Internet continues, so do the waves of lawsuits which claim compensation for alleged damage caused.

Run by so-called ‘copyright trolls’, these legal efforts are often painted as the only way for rightsholders to send a tough message to deter infringement. In reality, however, these schemes are often the basis for a separate revenue stream, one in which file-sharers are forced to pay large cash sums to make supposed jury trials disappear.

Courts around the United States are becoming familiar with these ‘settlement factories’ and sometimes choose to make life more difficult for the trolls. With this potential for friction, the language deployed in lawsuits is often amped up to paint copyright holders as fighting for their very existence. Meanwhile, alleged infringers are described as hardened criminals intent on wreaking havoc on the entertainment industries.

While this polarization is nothing new, a court filing spotted by the troll-fighters over at Fight Copyright Trolls sees the demonization of file-sharers amped up to eleven – and then some.

The case, which is being heard in a district court in Nevada, features LHF Productions, the outfit behind action movie London Has Fallen. It targets five people who allegedly shared the work using BitTorrent and failed to respond to the company’s requests to settle.

“[N]one of the Defendants referenced herein have made any effort to answer or otherwise respond to the Plaintiff’s allegations. In light of the Defendants’ apparent failure to take any action with respect to the present lawsuit, the Plaintiff is left with no choice but to seek a default judgment,” the motion reads.

In the absence of any defense, LHF Productions asks the court to grant default judgments of $15,000 per defendant, which amounts to $75,000 overall, a decent sum for what amounts to five downloads. LHF Productions notes that it could’ve demanded $150,000 from each individual but feels that a more modest sum would be sufficient to “deter future infringement.”

However, when reading the description of the defendants provided by LHF, one could be forgiven for thinking that they’re actually heinous criminals hell-bent on worldwide destruction.

“The Defendants are participants in a global piracy ring composed of one hundred fifty million members – a ring that threatens to tear down fundamental structures of intellectual property,” the lawsuit reads.

While there are indeed 150 million users of BitTorrent, this characterization that they’re all involved in a single “piracy ring” is both misleading and inaccurate.

BitTorrent swarms are separate entities, so an accurate description of the defendants would be limited to their actions in the swarm for the movie London Has Fallen. Instead, they’re painted as being involved in a global conspiracy with more members than the populations of the United Kingdom, Canada, and Spain combined.

It seems that the introduction of more drama into these infringement lawsuits is becoming necessary as more courts become wise to the activities of trolls, some of which, such as the now-defunct Prenda Law, have themselves been branded criminal organizations.

Perhaps with this in mind, LHF Productions tries to convince the court that far from being small-time file-sharers, people downloading their movie online are actually part of something extremely big, a crime wave so huge that nothing like it has ever been witnessed.

“While the actions of each individual participant may seem innocuous, their collective action amounts to one of the largest criminal enterprises ever seen on earth,” LHF says of the defendants.

“[I]f this pervasive culture of piracy is allowed to continue undeterred, it threatens to undo centuries of intellectual property law and unravel a core pillar of our economy. After all, the right to intellectual property was something so fundamental, so essential, to our nation’s founding, that our founding father’s found it necessary to include in the first article of the Constitution.”

If the apocalyptic scenario painted by LHF in its lawsuit (pdf) is to be believed, recouping a mere $15,000 from each defendant begins to sound like a bargain. Certainly, the movie outfit will be hoping the judge sees it that way too.


NonPetya: no evidence it was a "smokescreen"

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/nonpetya-no-evidence-it-was-smokescreen.html

Many well-regarded experts claim that the not-Petya ransomware wasn’t “ransomware” at all, but a “wiper” whose goal was to destroy files, without any intent at letting victims recover their files. I want to point out that there is no real evidence of this.

Certainly, things look suspicious. For one thing, the malware clearly targeted the Ukraine. For another, it made several mistakes that prevent victims from ever decrypting their drives: the attackers’ email account was shut down, and the malware corrupts the boot sector.

But these things aren’t evidence, they are problems. They are things needing explanation, not things that support our preferred conspiracy theory.

The simplest explanation, per Occam’s Razor, is that these were simple mistakes. Such mistakes are common in ransomware. We think of virus writers as professional software developers who thoroughly test their code. Decades of evidence show the opposite: such software is of poor quality, with shockingly bad bugs.

It’s true that, effectively, nPetya is a wiper. Matthieu Suiche does a great job describing one flaw that prevents it from working. @hasherezade does a great job explaining another flaw. But the best explanation isn’t that this was intentional. Even if these bugs didn’t exist, it would still be a wiper if the perpetrators simply ignored the decryption requests. They need not intentionally make the decryption fail.

Thus, the simpler explanation is that it’s simply a bug. Ransomware authors test the bits they care about, and test less well the bits they don’t. It’s quite plausible to believe that just before shipping the code, they’d add a few extra features, and forget to regression test the entire suite. I mean, I do that all the time with my code.

Some have pointed to the sophistication of the code as proof that such simple errors are unlikely. This isn’t true. While it’s more sophisticated than WannaCry, it’s about average for the current state of the art in ransomware. What people point to, such as the Petya base, or using PsExec to spread throughout a Windows domain, is already at least a year old.

Indeed, the use of PsExec itself is a bit clumsy, when the code for doing the same thing is already public. It’s just a few calls to basic Windows networking APIs. A sophisticated virus would do this itself, rather than clumsily use PsExec.

Infamy doesn’t mean skill. People keep making the mistake that the more widespread something is in the news, the more skill, the more of a “conspiracy” there must be behind it. This is not true. Virus/worm writers often do newsworthy things by accident. Indeed, the history of worms, starting with the Morris Worm, has been things running out of control more than the author’s expectations.

What makes nPetya newsworthy isn’t the EternalBlue exploit or the wiper feature. Instead, the creators got lucky with MeDoc. The software is used by every major organization in the Ukraine, and at the same time, its website was horribly insecure — laughably insecure. Furthermore, its autoupdate feature didn’t check cryptographic signatures. No hacker can plan for this level of widespread incompetence — it’s just extreme luck.
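The missing safeguard is straightforward to sketch. Here is a minimal, stdlib-only illustration, with a pinned hash standing in for the asymmetric signature (e.g. Ed25519) that a real updater would verify:

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Refuse an update unless its digest matches a value obtained over a
    trusted channel. Real updaters verify an asymmetric signature rather
    than a bare hash, but the principle is the same: never install what
    you cannot authenticate."""
    digest = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(digest, expected_sha256)  # constant-time compare

update = b"update-v2.bin contents"
pinned = hashlib.sha256(update).hexdigest()  # value published by the vendor

assert verify_update(update, pinned)                  # genuine build installs
assert not verify_update(b"trojaned build", pinned)   # tampered build rejected
```

Had MeDoc’s updater performed any check like this, a compromised update server alone would not have been enough to push the malicious build.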

Thus, the worm’s bumbling hit the Ukraine pretty hard, but that wasn’t necessarily the creators’ intent. It’s like how the Slammer worm hit South Korea pretty hard, or how the Witty worm hit the DoD pretty hard. These things look “targeted”, especially to the victims, but they happened by pure chance (provably so, in the case of Witty).

Certainly, MeDoc was targeted. But then, targeting a single organization is the norm for ransomware. They have to do it that way, giving each target a different Bitcoin address for payment. That it then spread to the entire Ukraine, and further, is the sort of thing that typically surprises worm writers.

Finally, there’s little reason to believe that there needs to be a “smokescreen”. Russian hackers are targeting the Ukraine all the time. Whether Russian hackers are to blame for “ransomware” vs. “wiper” makes little difference.

Conclusion

We know that Russian hackers are constantly targeting the Ukraine. Therefore, the theory that this was nPetya’s goal all along, to destroy Ukraine’s computers, is a good one.

Yet, there’s no actual “evidence” of this. nPetya’s issues are just as easily explained by normal software bugs. The smokescreen isn’t needed. The boot record bug isn’t needed. The single email address that was shut down isn’t significant, since half of all ransomware uses the same technique.

The experts who disagree with me are really smart/experienced people who you should generally trust. It’s just that I can’t see their evidence.

Update: I wrote another blogpost about “survivorship bias”, refuting the claim by many experts about the sophistication of the spreading feature.


Update: a comment asks “why is there no Internet spreading code?”. The answer is “I don’t know”, but unanswerable questions aren’t evidence of a conspiracy. “Why aren’t there any stars in the background?” isn’t proof the moon landings were faked just because you can’t answer the question. One guess is that you never want ransomware to spread that far, until you’ve figured out how to get payment from so many people.

I want to talk for a moment about tolerance

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/i-want-to-talk-for-moment-about.html

This post is in response to this Twitter thread. I was going to do a series of tweets in response, but as the number grew, I thought it’d better be done in a blog.

She thinks we are fighting for the rights of Nazis. We aren’t — indeed, the fact that she thinks we are is exactly the problem. They aren’t Nazis.

The issue is not a slippery slope where first Nazis lose free speech, then other groups start losing their speech as well. The issue is a slippery slope where more and more people get labeled a Nazi. And we are already far down that slope.

The “alt-right” is a diverse group, like any group. Vilifying the entire alt-right by calling them Nazis is like lumping all Muslims in with ISIS or Al Qaeda. We really don’t have Nazis in America. Even White Nationalists don’t fit the bill. Nazism was about totalitarianism, a real desire to exterminate Jews, lebensraum, and Aryan superiority. Sure, some such people exist, but they are a fringe, even among the alt-right.

It’s at this point we need to discuss words like “tolerance”. I don’t think it means what you think it means.

The idea of tolerance is that reasonable people can disagree. You still believe you are right, and the other person is wrong, but you accept that they are nonetheless a reasonable person with good intentions, and that they don’t need to be punished for holding the wrong opinion.

Gay rights is a good example. I agree with you that there is only one right answer to this. Having spent nights holding my crying gay college roommate because his father hated gays, I’m filled with enormous hatred and contempt for people like his father. I’ve done my fair share of shouting at people for anti-gay slurs.

Yet on the other hand, progressive icons like Barack Obama and Hillary Clinton have had evolving positions on gay rights issues, such as having opposed gay marriage at one time.

Tolerance means accepting that a person is reasonable, intelligent, and well-meaning — even if they oppose gay marriage. It means accepting that Hillary and Obama were reasonable people, even when they were vocally opposing gay marriage.

I’m libertarian. Like most libertarians, I support wide open borders, letting any immigrant across the border for any reason. To me, Hillary’s and Obama’s immigration policies are almost as racist as Trump’s. I have to either believe all you people supporting Hillary/Obama are irredeemably racist — or that well-meaning, good people can disagree about immigration.

I could go through a long list of issues that separate the progressive left and alt-right, and my point would always be the same. While people disagree on issues, and I have my own opinions about which side is right, there are reasonable people on both sides. If there are issues that divide our country down the middle, then by definition, both sides are equally reasonable. The problem with the progressive left is that they do not tolerate this. They see the world as being between one half who hold the correct opinions, and the other half who are unreasonable.

What defines the “alt-right” is not Nazism or White Nationalism, but the reaction of many on the right to intolerance of many on the left. Every time somebody is punished and vilified for uttering what is in fact a reasonable difference of opinion, they join the “alt-right”.

The issue at stake here, the issue the ACLU is defending, is that after the violent attack on a Portland train by an extremist, the city is denying all “alt-right” protesters the right to march. It’s blaming everyone on the “alt-right” for the actions of one of their members. It’s similar to cities blocking Muslims from building a mosque because of extremists like ISIS and Al Qaeda, or disturbed individuals who carry out violent attacks in the name of Islam.

This is not just a violation of First Amendment rights, it’s an obvious one. As the Volokh Conspiracy documents, the courts have ruled many times on this issue. There is no doubt that the “alt-right” has the right to march, and that the city’s effort to deny them this right is a blatant violation of the Constitution.

What we are defending here is not the right of actual Nazis to march (which the courts famously ruled was still legitimate speech in Skokie, Illinois), but the right of non-Nazis to march, most of whom have legitimate, reasonable (albeit often wrong) grievances to express. This speech is clearly being suppressed by gun-wielding thugs in Portland, Oregon.

Those like Jillian York see this as dealing with unreasonable speech; we see it as a problem of tolerably wrong speech. People like her aren’t defending the right to free speech because, in their minds, they’ve vilified the people they disagree with. But that’s exactly when, and only when, free speech needs our protection: when those speaking out have been vilified, and their repression seems just. Look at how Russia suppresses supporters of gay rights, with exactly this sort of vilification, whereby the majority of the populace sees the violence and policing as a legitimate response to speech that should not be free.

We aren’t fighting a slippery slope here, by defending Nazis. We’ve already slid down that slope, where reasonable people’s rights are being violated. We are fighting to get back up top.


Pranksters gonna prank

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/pranksters-gonna-prank.html

So Alfa Bank (the bank whose DNS traffic linked it to trump-email.com) is back in the news with this press release about how, in the last month, hackers have spoofed traffic trying to make it look like there’s a tie with Trump. In other words, Alfa claims these packets are trying to frame them for a tie with Trump now, and thus (by extension) it must’ve been a frame-up last October.

There is no conspiracy here: it’s just merry pranksters doing pranks (as this CNN article quotes me).

Indeed, among the people pranking has been me (not the pranks mentioned by Alfa, but different ones). I ran a scan sending packets from one IP address to almost everyone on the Internet, and set the reverse lookup to “mail1.trumpemail.com”.

Sadly, my ISP doesn’t allow me to put hyphens in the name, so it’s not “trump-email.com” as it should be in order to prank well.

Geeks gonna geek and pranksters gonna prank. I can imagine all sorts of other fun pranks somebody might do in order to stir the pot. Since the original news reports of the AlfaBank/trump-email.com connection last year, we have to assume any further data is tainted by goofballs like me goofing off.

By the way, in my particular case, there’s a good lesson to be had here about the arbitrariness of IP addresses and names. There is no server located at my IP address of 209.216.230.75. No such machine exists. Instead, I run my scans from a nearby machine on the same network, and “spoof” that address with masscan:

$ masscan 0.0.0.0/0 -p80 --banners --spoof-ip 209.216.230.75

This sends a web request to every machine on the Internet from that IP address, despite no machine anywhere being configured with that IP address.

I point this out because people are confused by the meaning of an “IP address”, or a “server”, “domain”, and “domain name”. I can imagine the FBI looking into this and getting a FISA warrant for the server located at my IP address, and my ISP coming back and telling them that no such server exists, nor has a server existed at that IP address for many years.

In the case of last year’s story, there’s little reason to believe IP spoofing was happening, but the conspiracy theory still breaks down for the same reason: the association between these concepts is not what you think it is. Listrak, the owner of the server at the center of the conspiracy, still reverse-resolves the IP address 66.216.133.29 as “mail1.trump-email.com”, either because they are lazy, or because they enjoy the lulz.

It’s absurd to think anything sent by the server is related to the Trump Organization today, and it’s equally plausible that nothing the server sent was related to Trump last year either, especially since (as CNN reports) Trump had severed its ties with Cendyn (the marketing company that uses Listrak servers for email).


Also, as mentioned in a previous blog post, I set my home network’s domain to be “moscow.alfaintra.net”, which means that some of my DNS lookups at home are actually being sent to Alfa Bank. I should probably turn this off before the FBI comes knocking at my door.

…or, how I learned not to be a jerk in 20 short years

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/02/or-how-i-learned-not-to-be-jerk-in-20.html

People who are accomplished in one field of expertise tend to believe that they can bring unique insights to just about any other debate. I am as guilty as anyone: at one time or another, I aired my thoughts on anything from CNC manufacturing, to electronics, to emergency preparedness, to politics. Today, I’m about to commit the same sin – but instead of pretending to speak from a position of authority, I wanted to share a more personal tale.



The author, circa 1995. The era of hand-crank computers and punch cards.

Back in my school days, I was that one really tall and skinny kid in the class. I wasn’t trying to stay that way; I preferred computer games to sports, and my grandma’s Polish cooking was heavy on potatoes, butter, chicken, dumplings, cream, and cheese. But that did not matter: I could eat what I wanted, as often as I wanted, and I still stayed in shape. This made me look down on chubby kids; if my reckless ways had little or no effect on my body, it followed that they had to be exceptionally lazy and must have lacked even the most basic form of self-control.

As I entered adulthood, my habits remained the same. I felt healthy and stayed reasonably active, walking to and from work every other day and hiking with friends whenever I could. But my looks started to change:



The author at a really exciting BlackHat party in 2002.

I figured it was just a part of growing up. But somewhere around my twentieth birthday, I stepped on a bathroom scale and typed the result into an online calculator. I was surprised to find out that my BMI was about 24 – pretty darn close to overweight.

“Pssh, you know how inaccurate these things are!”, I exclaimed while searching online to debunk that whole BMI thing. I mean, sure, I had some belly fat – maybe a pizza or two too far – but nothing that wouldn’t go away in time. Besides, I was doing fine, so what would be the point of submitting to society’s idea of the “right” weight?

It certainly helped that I was having a blast at work. I made a name for myself in the industry, published a fair amount of cool research, authored a book, settled down, bought a house, had a kid. It wasn’t until the age of 26 that I strayed into a doctor’s office for a routine checkup. When the nurse asked me about my weight, I blurted out “oh, 175 pounds, give or take”. She gave me a funny look and asked me to step on the scale.

Turns out it was quite a bit more than 175 pounds. With a BMI of 27.1, I was now firmly in “overweight” territory. Yeah yeah, the BMI metric was a complete hoax – but why did my passport photos look less flattering than before?



A random mugshot from 2007. Some people are just born big-boned, I think.

Well, damn. I knew what had to happen: from now on, I was going to start eating healthier foods. I traded Cheetos for nuts, KFC for sushi rolls, greasy burgers for tortilla wraps, milk smoothies for Jamba Juice, fries for bruschettas, regular sodas for diet. I’d even throw in a side of lettuce every now and then. It was bound to make a difference. I just wasn’t gonna be one of the losers who check their weight every day and agonize over every calorie on their plate. (Weren’t calories a scam, anyway? I think I read that on that cool BMI conspiracy site.)

By the time I turned 32, my body mass index hit 29. At that point, it wasn’t just a matter of looking chubby. I could do the math: at that rate, I’d be in a real pickle in a decade or two – complete with a ~50% chance of developing diabetes or cardiovascular disease. This wouldn’t just make me miserable, but also mess up the lives of my spouse and kids.



Presenting at Google TGIF in 2013. It must’ve been the unflattering light.

I wanted to get this over with right away, so I decided to push myself hard. I started biking to work, quite a strenuous ride. It felt good, but did not help: I would simply eat more to compensate and ended up gaining a few extra pounds. I tried starving myself. That worked, sure – only to be followed by an even faster rebound. Ultimately, I had to face the reality: I had a problem and I needed a long-term solution. There was no one weird trick to outsmart the calorie-counting crowd, no overnight cure.

I started looking for real answers. My world came crumbling down; I realized that a “healthy” burrito from Chipotle packed four times as many calories as a greasy burger from McDonald’s. That a loaded fruit smoothie from Jamba Juice was roughly equal to two hot dogs with a side of mashed potatoes to boot. That a glass of apple juice fared worse than a can of Sprite, and that bruschetta wasn’t far from deep-fried butter on a stick. It didn’t matter if it was sugar or fat, bacon or kale. Familiar favorites were not better or worse than the rest. Losing weight boiled down to portion control – and sticking to it for the rest of my life.

It was a slow and humbling journey that spanned almost a year. I ended up losing around 70 lbs along the way. What shocked me is that it wasn’t a painful experience; what held me back for years was just my own smugness, plus the folksy wisdom gleaned from the covers of glossy magazines.



Author with a tractor, 2017.

I’m not sure there is a moral to this story. I guess one lesson is: don’t be a judgmental jerk. Sometimes, the simple things – the ones you think you have all figured out – prove to be a lot more complicated than they seem.

A Rebuttal For Python 3

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/

Zed Shaw, of Learn Python the Hard Way fame, has now written The Case Against Python 3.

I’m not involved with core Python development. The only skin I have in this game is that I like Python 3. It’s a good language. And one of the big factors I’ve seen slowing its adoption is that respected people in the Python community keep grouching about it. I’ve had multiple newcomers tell me they have the impression that Python 3 is some kind of unusable disaster, though they don’t know exactly why; it’s just something they hear from people who sound like they know what they’re talking about. Then they actually use the language, and it’s fine.

I’m sad to see the Python community needlessly sabotage itself, but Zed’s contribution is beyond the pale. It’s not just making a big deal about changed details that won’t affect most beginners; it’s complete and utter nonsense, on a platform aimed at people who can’t yet recognize it as nonsense. I am so mad.

The Case Against Python 3

I give two sets of reasons as I see them now. One for total beginners, and another for people who are more knowledgeable about programming.

Just to note: the two sets of reasons are largely the same ideas presented differently, so I’ll just weave them together below.

The first section attempts to explain the case against starting with Python 3 in non-technical terms so a beginner can make up their own mind without being influenced by propaganda or social pressure.

Having already read through this once, this sentence really stands out to me. The author of a book many beginners read to learn Python in the first place is providing a number of reasons (some outright fabricated) not to use Python 3, often in terms beginners are ill-equipped to evaluate, but believes this is a defense against propaganda or social pressure.

The Most Important Reason

Before getting into the main technical reasons I would like to discuss the one most important social reason for why you should not use Python 3 as a beginner:

THERE IS A HIGH PROBABILITY THAT PYTHON 3 IS SUCH A FAILURE IT WILL KILL PYTHON.

Python 3’s adoption is really only at about 30% whenever there is an attempt to measure it.

Wait, really? Wow, that’s fantastic.

I mean, it would probably be higher if the most popular beginner resources were actually teaching Python 3, but you know.

Nobody is all that interested in finding out what the real complete adoption is, despite there being fairly simple ways to gather metrics on the adoption.

This accusatory sentence conspicuously neglects to mention what these fairly simple ways are, a pattern that repeats throughout. The trouble is that it’s hard to even define what “adoption” means — I write all my code in Python 3 now, but veekun is still Python 2 because it’s in maintenance mode, so what does that say about adoption? You could look at PyPI download stats, but those are thrown way off by caches and system package managers. You could look at downloads from the Python website, but a great deal of Python is written and used on Unix-likes, where Python itself is either bundled or installed from the package manager.

It’s as simple as that. If you learn Python 2, then you can still work with all the legacy Python 2 code in existence until Python dies or you (hopefully) move on. But if you learn Python 3 then your future is very uncertain. You could really be learning a dead language and end up having to learn Python 2 anyway.

You could use Python 2, until it dies… or you could use Python 3, which might die. What a choice.

By some definitions, Python 2 is already dead — it will not see another major release, only security fixes. Python 3 is still actively developed, and its seventh major release is next month. It even contains a new feature that Zed later mentions he prefers to Python 2’s offerings.

It may shock you to learn that I know both Python 2 and Python 3. Amazingly, two versions of the same language are much more similar than they are different. If you learned Python 3 and then a wizard cast a spell that made it vanish from the face of the earth, you’d just have to spend half an hour reading up on what had changed from Python 2.
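To make that concrete, here is a quick sketch of the headline changes a Python 2 user actually has to absorb, written for Python 3; they are mostly small and mechanical:

```python
# A few of the headline Python 2 -> 3 changes, as they look under Python 3.
print("hello")               # print is a function, not a statement
print(7 / 2)                 # true division: 3.5 (Python 2 prints 3)
print(7 // 2)                # floor division: 3 under both versions
text = "café"                # str is Unicode text by default
data = text.encode("utf-8")  # bytes are a separate, explicit type
print(type(data))            # <class 'bytes'>
```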

Also, it’s been over a decade, maybe even multiple decades, and Python 3 still isn’t above about 30% in adoption. Even among the sciences where Python 3 is touted as a “success” it’s still only around 25-30% adoption. After that long it’s time to admit defeat and come up with a new plan.

Python 3.0 came out in 2008. The first couple releases ironed out some compatibility and API problems, so it didn’t start to gain much traction until Python 3.2 came out in 2011. Hell, Python 2.0 came out in 2000, so even Python 2 isn’t multiple decades old. It would be great if this trusted beginner reference could take two seconds to check details like this before using them to scaremonger.

The big early problem was library compatibility: it’s hard to justify switching to a new version of the language if none of the libraries work. Libraries could only port once their own dependencies had ported, of course, and it took a couple years to figure out the best way to maintain compatibility with both Python 2 and Python 3. I’d say we only really hit critical mass a few years ago — for instance, Django didn’t support Python 3 until 2013 — in which case that 30% is nothing to sneeze at.

There are more reasons beyond just the uncertain future of Python 3 even decades later.

In one paragraph, we’ve gone from “maybe even multiple decades” to just “decades”, which is a funny way to spell “eight years”.

Not In Your Best Interests

The Python project’s efforts to convince you to start with Python 3 are not in your best interest, but, rather, are only in the best interests of the Python project.

It’s bad, you see, for the Python project to want people to use the work it produced.

Anyway, please buy Zed Shaw’s book.

Anyway, please pledge to my Patreon.

Ultimately though, if Python 3 were good they wouldn’t need to do any convincing to get you to use it. It would just naturally work for you and you wouldn’t have any problems. Instead, there are serious issues with Python 3 for beginners, and rather than fix those issues the Python project uses propaganda, social pressure, and marketing to convince you to use it. In the world of technology using marketing and propaganda is immediately a sign that the technology is defective in some obvious way.

This use of social pressure and propaganda to convince you to use Python 3 despite its problems, in an attempt to benefit the Python project, is morally unconscionable to me.

Ten paragraphs in, Zed is telling me that I should be suspicious of anything that relies on marketing and propaganda. Meanwhile, there has yet to be a single concrete reason why Python 3 is bad for beginners — just several flat-out incorrect assertions and a lot of handwaving about how inexplicably nefarious the Python core developers are. You know, the same people who made Python 2. But they weren’t evil then, I guess.

You Should Be Able to Run 2 and 3

In the programming language theory there is this basic requirement that, given a “complete” programming language, I can run any other programming language. In the world of Java I’m able to run Ruby, Java, C++, C, and Lua all at the same time. In the world of Microsoft I can run F#, C#, C++, and Python all at the same time. This isn’t just a theoretical thing. There is solid math behind it. Math that is truly the foundation of computer science.

The fact that you can’t run Python 2 and Python 3 at the same time is purely a social and technical decision that the Python project made with no basis in mathematical reality. This means you are working with a purposefully broken platform when you use Python 3, and I personally can’t condone teaching people to use something that is fundamentally broken.

The programmer-oriented section makes clear that the solid math being referred to is Turing-completeness — the section is even titled “Python 3 Is Not Turing Complete”.

First, notice a rhetorical trick here. You can run Ruby, Java, C++, etc. at the same time, so why not Python 2 and Python 3?

But can you run Java and C# at the same time? (I’m sure someone has done this, but it’s certainly much less popular than something like Jython or IronPython.)

Can you run Ruby 1.8 and Ruby 2.3 at the same time? Ah, no, so I guess Ruby 2.3 is fundamentally and purposefully broken.

Can you run Lua 5.1 and 5.3 at the same time? Lua is a spectacular example, because Lua 5.2 made a breaking change to how the details of scope work, and it’s led to a situation where a lot of programs that embed Lua haven’t bothered upgrading from Lua 5.1. Was Lua 5.2 some kind of dark plot to deliberately break the language? No, it’s just slightly more inconvenient than expected for people to upgrade.

Anyway, as for Turing machines:

In computer science a fundamental law is that if I have one Turing Machine I can build any other Turing Machine. If I have COBOL then I can bootstrap a compiler for FORTRAN (as disgusting as that might be). If I have FORTH, then I can build an interpreter for Ruby. This also applies to bytecodes for CPUs. If I have a Turing Complete bytecode then I can create a compiler for any language. The rule then can be extended even further to say that if I cannot create another Turing Machine in your language, then your language cannot be Turing Complete. If I can’t use your language to write a compiler or interpreter for any other language then your language is not Turing Complete.

Yes, this is true.

Currently you cannot run Python 2 inside the Python 3 virtual machine. Since I cannot, that means Python 3 is not Turing Complete and should not be used by anyone.

And this is completely asinine. Worse, it’s flat-out dishonest, and relies on another rhetorical trick. You only “cannot” run Python 2 inside the Python 3 VM because no one has written a Python 2 interpreter in Python 3. The “cannot” is not a mathematical impossibility; it’s a simple matter of the code not having been written. Or perhaps it has, but no one cares anyway, because it would be comically and unusably slow.

I assume this was meant to be sarcastic on some level, since it’s followed by a big blue box that seems unsure about whether to double down or reverse course. But I can’t tell why it was even brought up, because it has absolutely nothing to do with Zed’s true complaint, which is that Python 2 and Python 3 do not coexist within a single environment. Implementing language X using language Y does not mean that X and Y can now be used together seamlessly.

The canonical Python release is written in C (just like with Ruby or Lua), but you can’t just dump a bunch of C code into a Python (or Ruby or Lua) file and expect it to work. You can talk to C from Python and vice versa, but defining how they communicate is a bit of a pain in the ass and requires some level of setup.
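As a small illustration of that setup cost, here is a sketch using the standard `ctypes` module to call into the C library; it assumes a Unix-like system, where loading `None` exposes the symbols already linked into the process (including libc):

```python
import ctypes

# Load the symbols already available to this process (Unix-like systems).
# This lookup-and-declare dance is exactly the boilerplate you never need
# when staying inside a single language.
libc = ctypes.CDLL(None)

# Declare the C signature so ctypes marshals arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # calls C's abs(), not Python's
```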

I’ll get into this some more shortly.

No Working Translator

Python 3 comes with a tool called 2to3 which is supposed to take Python 2 code and translate it to Python 3 code.

I should point out right off the bat that this is not actually what you want to use most of the time, because you probably want to translate your Python 2 code to Python 2/3 code. 2to3 produces code that most likely will not work on Python 2. Other tools exist to help you port more conservatively.
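One conservative approach (a sketch of the style, not the only option) is to write in the common subset of the two languages and pull Python 3 semantics into Python 2 via `__future__` imports, so a single file runs unchanged under both:

```python
# Single-source 2/3 style: opt into Python 3 semantics from Python 2.
# Under Python 3 these imports are harmless no-ops.
from __future__ import print_function, division

print("runs unchanged on Python 2.6+ and Python 3")
print(7 / 2)  # 3.5 under both interpreters, thanks to the division import
```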

Translating one programming language into another is a solidly researched topic with solid math behind it. There are translators that convert any number of languages into JavaScript, C, C++, Java, and many times you have no idea the translation is being done. In addition to this, one of the first steps when implementing a new language is to convert the new language into an existing language (like C) so you don’t have to write a full compiler. Translation is a fully solved problem.

This is completely fucking ludicrous. Translating one programming language to another is a common task, though “fully solved” sounds mighty questionable. But do you know what the results look like?

I found a project called “Transcrypt”, which puts Python in the browser by “translating” it to JavaScript. I’ve never used or heard of this before; I just googled for something to convert Python to JavaScript. Here’s their first sample, a demo using jQuery:

def start ():
    def changeColors ():
        for div in S__divs:
            S (div) .css ({
                'color': 'rgb({},{},{})'.format (* [int (256 * Math.random ()) for i in range (3)]),
            })

    S__divs = S ('div')
    changeColors ()
    window.setInterval (changeColors, 500)

And here’s the JavaScript code it compiles to:

(function () {
    var start = function () {
        var changeColors = function () {
            var __iterable0__ = $divs;
            for (var __index0__ = 0; __index0__ < __iterable0__.length; __index0__++) {
                var div = __iterable0__ [__index0__];
                $ (div).css (dict ({'color': 'rgb({},{},{})'.format.apply (null, function () {
                    var __accu0__ = [];
                    for (var i = 0; i < 3; i++) {
                        __accu0__.append (int (256 * Math.random ()));
                    }
                    return __accu0__;
                } ())}));
            }
        };
        var $divs = $ ('div');
        changeColors ();
        window.setInterval (changeColors, 500);
    };
    __pragma__ ('<all>')
        __all__.start = start;
    __pragma__ ('</all>')
}) ();

Well, not quite. That’s actually just a small piece at the end of the full 1861-line file.

You may notice that the emitted JavaScript effectively has to emulate the Python for loop, because JavaScript doesn’t have anything that works exactly the same way. And this is a basic, common language feature translated between two languages in the same general family! Imagine how your code would look if you relied on gritty details of how classes are implemented.

Is this what you want 2to3 to do to your code?

Even if something has been proven to be mathematically possible, that doesn’t mean it’s easy, and it doesn’t mean the results will be pretty (or fast).

The 2to3 translator fails on about 15% of the code it attempts, and does a poor job of translating the code it can handle. The motivations for this are unclear, but keep in mind that a group of people who claim to be programming language experts can’t write a reliable translator from one version of their own language to another. This is also a cause of their porting problems, which adds up to more evidence Python 3’s future is uncertain.

Writing a translator from one language to another is a fully proven and fundamental piece of computer science. Yet, the 2to3 translator cannot translate code 100%. In my own tests it is only about 85% effective, leaving a large amount of code to translate manually. Given that translation is a solved problem this seems to be a decision bordering on malice rather than incredible incompetence.

The programmer-oriented section doubles down on this idea with a title of “Purposefully Crippled 2to3 Translator” — again, accusing the Python project of sabotaging everyone. That doesn’t even make sense; if their goal is to make everyone use Python 3 at any cost, why would they deliberately break their tool that reduces the amount of Python 2 code and increases the amount of Python 3 code?

2to3 sucks because its job is hard. Python is dynamically typed. If it sees d.iteritems(), it might want to change that to d.items(), as it’s called in Python 3 — but it can’t always be sure that d is actually a dict. If d is some user-defined type, renaming the method is wrong.
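A contrived but runnable sketch of why that rename is unsafe: nothing stops a user-defined class (the `Ledger` here is hypothetical, just for illustration) from using the same method names with unrelated behavior.

```python
# Why 2to3 can't blindly rewrite d.iteritems() as d.items():
# d might not be a dict at all.
class Ledger:
    def iteritems(self):
        # Same name as the dict method, unrelated semantics.
        return iter([("rent", 900)])

    def items(self):
        # Also exists, but returns something entirely different.
        return ["rent: 900"]

d = Ledger()
assert list(d.iteritems()) == [("rent", 900)]
assert d.items() == ["rent: 900"]  # the "translated" call changes behavior
```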

But hey, Turing-completeness, right? It must be mathematically possible. And it is! As long as you’re willing to see this:

for key, value in d.iteritems():
    ...

Get translated to this:

__d = d
for key, value in (__d.items() if isinstance(__d, dict) else __d.iteritems()):
    ...

Would Zed be happier with that, I wonder?

The JVM and CLR Prove It’s Pointless

Yet, for some reason, the Python 3 virtual machine can’t run Python 2? Despite the solidly established mathematics disproving this, the countless examples of running one crazy language inside a Russian doll cascade of other crazy languages, and huge number of languages that can coexist in nearly every other virtual machine? That makes no sense.

This, finally, is the real complaint. It’s not a bad one, and it comes up sometimes, but… it’s not this easy.

The Python 3 VM is fairly similar to the Python 2 VM. The problem isn’t the VM, but the core language constructs and standard library.

Consider: what happens when a Python 2 old-style class instance gets passed into Python 3, which has no such concept? It seems like a value would have to always have the semantics of the language version it came from — that’s how languages usually coexist on the same VM, anyway.

Now, I’m using Python 3, and I load some library written for Python 2. I call a Python 2 function that deals with bytestrings, and I pass it a Python 3 bytestring. Oh no! It breaks because Python 3 bytestrings iterate as integers, whereas the Python 2 library expects them to iterate as characters.
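The iteration difference is easy to demonstrate under Python 3:

```python
# Python 3: iterating a bytestring yields integers...
assert list(b"hi") == [104, 105]

# ...while Python 2 yields one-character strings: ['h', 'i'].
# Code written against one behavior silently breaks under the other.
assert bytes([104, 105]) == b"hi"
```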

Okay, well, no big deal, you say. Maybe Python 2 libraries just need to be updated to work either way, before they can be used with Python 3.

But that’s exactly the situation we’re in right now. Syntax changes are trivially fixed by 2to3 and similar tools. It’s libraries that cause the subtler issues.

The same applies the other way, too. I write Python 3 code, and it gets an int from some Python 2 library. I try to use the .to_bytes method on it, but that doesn’t exist on Python 2 integers. So my Python 3 code, written and intended purely for Python 3, now has to deal with Python 2 integers as well.

Perhaps “primitive” types should convert automatically, on the boundary? Okay, sure. What about the Python 2 buffer type, which is C-backed and replaced by memoryview in Python 3?

Or how about this very fundamental problem: names of methods and other attributes are str in both versions, but that means they’re bytestrings in Python 2 and text in Python 3. If you’re in Python 3 land, and you call obj.foo() on a Python 2 object, what happens? Python 3 wants a method with the text name foo, but Python 2 wants a method with the bytes name foo. Text and bytes are not implicitly convertible in Python 3. So does it error? Somehow work anyway? What about the other way around?

What about the standard library, which has had a number of improvements in Python 3 that don’t or can’t exist in Python 2? Should Python ship two entire separate copies of its standard library? What about modules like logging, which rely on global state? Does Python 2 and Python 3 code need to set up logging separately within the same process?
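The `logging` point is concrete: the module keeps a process-wide registry, so two independently loaded pieces of code asking for the same logger get the very same object, and one caller's configuration is visible to the other.

```python
import logging

# getLogger() consults a process-global registry keyed by name.
a = logging.getLogger("app")
b = logging.getLogger("app")
assert a is b  # same object: module-level shared state

a.setLevel(logging.DEBUG)
assert b.level == logging.DEBUG  # the "other" logger sees the change
```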

There are no good solutions here. The language would double in size and complexity, and you’d still end up with a mess at least as bad as the one we have now when values leak from one version into the other.

We either have two situations here:

  1. Python 3 has been purposefully crippled to prevent Python 2’s execution alongside Python 3 for someone’s professional or ideological gain.
  2. Python 3 cannot run Python 2 due to simple incompetence on the part of the Python project.

I can think of a third.

Difficult To Use Strings

The strings in Python 3 are very difficult to use for beginners. In an attempt to make their strings more “international” they turned them into difficult to use types with poor error messages.

Why is “international” in scare quotes?

Every time you attempt to deal with characters in your programs you’ll have to understand the difference between byte sequences and Unicode strings.

Given that I’m reading part of a book teaching Python, this would be a perfect opportunity to drive this point home by saying “Look! Running exercise N in Python 3 doesn’t work.” Exercise 1, at least, works fine for me with a little extra sprinkle of parentheses:

print("Hello World!")
print("Hello Again")
print("I like typing this.")
print("This is fun.")
print('Yay! Printing.')
print("I'd much rather you 'not'.")
print('I "said" do not touch this.')

Contrast with the actual content of that exercise — at the bottom is a big red warning box telling people from “another country” (relative to where?) that if they get errors about ASCII encodings, they should put an unexplained magical incantation at the top of their scripts to fix “Unicode UTF-8”, whatever that is. I wonder if Zed has read his own book.

Don’t know what that is? Exactly.

If only there were a book that could explain it to beginners in more depth than “you have to fix this if you’re foreign”.

The Python project took a language that is very forgiving to beginners and mostly “just works” and implemented strings that require you to constantly know what type of string they are. Worst of all, when you get an error with strings (which is very often) you get an error message that doesn’t tell you what variable names you need to fix.

The complaint is that this happens in Python 3, whereas it’s accepted in Python 2:

>>> b"hello" + "hello"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't concat bytes to str

The programmer section is called “Statically Typed Strings”. But this is not static typing. That’s strong typing, a property that sets Python’s type system apart from languages like JavaScript. It’s usually considered a good thing, because the alternative is to silently produce nonsense in some cases, and then that nonsense propagates through your program and is hard to track down when it finally causes problems.
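To make the distinction concrete, here's a minimal sketch of strong typing in action. Where a weakly typed language like JavaScript silently coerces (`3 + "hello"` becomes the string `"3hello"`), Python refuses to guess:

```python
# Strong typing: Python refuses to guess what mixing types should mean.
# (In JavaScript, 3 + "hello" silently coerces to the string "3hello".)
try:
    result = 3 + "hello"
except TypeError as exc:
    result = f"TypeError: {exc}"

print(result)
```

The error surfaces immediately at the point of mixing, instead of nonsense propagating through the program.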

If they’re going to require beginners to struggle with the difference between bytes and Unicode the least they could do is tell people what variables are bytes and what variables are strings.

That would be nice, but it’s not like this is a new problem. Try this in Python 2.

>>> 3 + "hello"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'

How would Python even report this error when I used literals instead of variables? How could custom types hook into such a thing? Error messages are hard.

By the way, did you know that several error messages are much improved in Python 3? Python 2 is somewhat notorious for the confusing errors it produces when an argument is missing from a method call, but Python 3 is specific about the problem, which is much friendlier to beginners.
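For instance, here's what Python 3 tells a beginner who forgets an argument. Python 2's equivalent was the much vaguer "f() takes exactly 2 arguments (1 given)":

```python
# Python 3 names the function and the specific missing parameter.
def f(a, b):
    return a + b

try:
    f(1)
except TypeError as exc:
    message = str(exc)

print(message)  # f() missing 1 required positional argument: 'b'
```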

However, when you point out that this is hard to use they try to claim it’s good for you. It is not. It’s simple blustering covering for a poor implementation.

I don’t know what about this is hard. Why do you have a text string and a bytestring in the first place? Why is it okay to refuse adding a number to a string, but not to refuse adding bytes to a string?

Imagine if one of the Python core developers were just getting into Python 2 and messing around.

# -*- coding: utf8 -*-
print "Hi, my name is Łukasz Langa."
print "Hi, my name is Łukasz Langa."[::-1]
Hi, my name is Łukasz Langa.
.agnaL zsaku�� si eman ym ,iH

Good luck figuring out how to fix that.

This isn’t blustering. Bytes are not text; they are binary data that could encode anything. They happen to look like text sometimes, and you can get away with thinking they’re text if you’re not from “another country”, but that mindset will lead you to write code that is wrong. The resulting bugs will be insidious and confusing, and you’ll have a hard time even reasoning about them because it’ll seem like “Unicode text” is somehow a different beast altogether from “ASCII text”.

Exercise 11 mentions at the end that you can use int() to convert a number to an integer. It’s no more complicated to say that you convert bytes to a string using .decode(). It shouldn’t even come up unless you’re explicitly working with binary data, and I don’t see any reading from sockets in LPTHW.
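The whole boundary really is one method call in each direction; a minimal sketch:

```python
# Crossing the bytes/text boundary: encode to go out, decode to come in.
raw = "café".encode("utf-8")   # text -> bytes: b'caf\xc3\xa9'
text = raw.decode("utf-8")     # bytes -> text: 'café'

print(len(raw), len(text))     # 5 bytes, but only 4 characters
print(text)
```

The length mismatch is the entire point: bytes and characters are not the same thing, and `é` takes two bytes in UTF-8.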

It’s also not statically compiled as strongly as it could be, so you can’t find these kinds of type errors until you run the code.

This comes a scant few paragraphs after “Dynamic typing is what makes Python easy to use and one of the reasons I advocate it for beginners.”

You can’t find any kinds of type errors until you run the code. Welcome to dynamic typing.

Strings are also most frequently received from an external source, such as a network socket, file, or similar input. This means that Python 3’s statically typed strings and lack of static type safety will cause Python 3 applications to crash more often and have more security problems when compared with Python 2.

On the contrary — Python 3 applications should crash less often. The problem with silently converting between bytestrings and text in Python 2 is that it might fail, depending on the contents. "cafe" + u"hello" works fine, but "café" + u"hello" raises a UnicodeDecodeError. Python 2 makes it very easy to write code that appears to work when tested with ASCII data, but later breaks with anything else, even though the values are still the same types. In Python 3, you get an error the first time you try to run such code, regardless of what’s in the actual values. That’s the biggest reason for the change: it improves things from being intermittent value errors to consistent type errors.
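A quick demonstration of that consistency, in Python 3 — the failure is a *type* error, and it doesn't matter whether the bytes happen to be plain ASCII:

```python
# In Python 3, bytes + str fails the same way regardless of content,
# unlike Python 2, where the ASCII case silently "worked".
def try_concat(left, right):
    try:
        return left + right
    except TypeError:
        return "TypeError"

ascii_result = try_concat(b"cafe", "hello")
nonascii_result = try_concat("café".encode("utf-8"), "hello")

print(ascii_result, nonascii_result)  # both fail identically
```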

More security problems? This is never substantiated, and seems to have been entirely fabricated.

Too Many Formatting Options

In addition to that you will have 3 different formatting options in Python 3.6. That means you’ll have to learn to read and use multiple ways to format strings that are all very different. Not even I, an experienced professional programmer, can easily figure out these new formatting systems or keep up with their changing features.

I don’t know what on earth “keep up with their changing features” is supposed to mean, and Zed doesn’t bother to go into details.

Python 3 has three ways to format strings: % interpolation, str.format(), and the new f"" strings in Python 3.6. The f"" strings use the same syntax as str.format(); the difference is that where str.format() uses numbers or names of keyword arguments, f"" strings just use expressions. Compare:

number = 133
print("{n:02x}".format(n=number))
print(f"{number:02x}")

This isn’t “very different”. A frequently-used method is being promoted to syntax.
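For completeness, here are all three mechanisms rendering the same value identically — the variation is in how the value gets into the template, not in the formatting language itself:

```python
number = 133

percent = "%02x" % number          # printf-style, inherited from Python 2
method = "{:02x}".format(number)   # str.format, added back in Python 2.6
fstring = f"{number:02x}"          # f-strings, new in Python 3.6

print(percent, method, fstring)    # same output three times
```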

I really like this new style, and I have no idea why this wasn’t the formatting for Python 3 instead of that stupid .format function. String interpolation is natural for most people and easy to explain.

The problem is that beginners will now have to know all three of these formatting styles, and that’s too many.

I could swear Zed, an experienced professional programmer, just said he couldn’t easily figure out these new formatting systems. Note also that str.format() has existed in Python 2 since Python 2.6 was released in 2008, so I don’t know why Zed said “new formatting systems”, plural.

This is a truly bizarre complaint overall, because the mechanism Zed likes best is the newest one. If Python core had agreed that three mechanisms was too many, we wouldn’t be getting f"" at all.

Even More Versions of Strings

Finally, I’m told there is a new proposal for a string type that is both bytes and Unicode at the same time? That’d be fantastic if this new type brings back the dynamic typing that makes Python easy, but I’m betting it will end up being yet another static type to learn. For that reason I also think beginners should avoid Python 3 until this new “chimera string” is implemented and works reliably in a dynamic way. Until then, you will just be dealing with difficult strings that are statically typed in a dynamically typed language.

I have absolutely no idea what this is referring to, and I can’t find anyone who does. I don’t see any recent PEPs mentioning such a thing, nor anything in the last several months on the python-dev mailing list. I don’t see it in the Python 3.6 release notes.

The closest thing I can think of is the backwards-compatibility shenanigans for PEP 528 and PEP 529 — they switch to the Windows wide-string APIs for console and filesystem encoding, but pretend under the hood that the APIs take UTF-8-encoded bytes to avoid breaking libraries like Twisted. That’s a microscopic detail that should never matter to anyone but authors of Twisted, and is nothing like a new hybrid string type, but otherwise I’m at a loss.

This paragraph really is a perfect summary of the whole article. It speaks vaguely yet authoritatively about something that doesn’t seem to exist, it doesn’t bother actually investigating the thing the entire section talks about, it conjectures that this mysterious feature will be hard just because it’s in Python 3, and it misuses terminology to complain about a fundamental property of Python that’s always existed.

Core Libraries Not Updated

Many of the core libraries included with Python 3 have been rewritten to use Python 3, but have not been updated to use its features. How could they given Python 3’s constant changing status and new features?

What “constant changing status”? The language makes new releases; is that bad? The only mention of “changing” so far was with string formatting, which makes no sense to me, because the only major change has been the addition of syntax that Zed prefers.

There are several libraries that, despite knowing the encoding of data, fail to return proper strings. The worst offender seems to be any libraries dealing with the HTTP protocol, which does indicate the encoding of the underlying byte stream in many cases.

In many cases, yes. Not in all. Some web servers don’t send back an encoding. Some files don’t have an encoding, because they’re images or other binary data. HTML allows the encoding to be given inside the document, instead. urllib has always returned bytes, so it’s not all that unreasonable to keep doing that, rather than… well, I’m not quite sure what this is proposing. Return strings sometimes?

The documentation for urllib.request and http.client both advise using the higher-level Requests library instead, in a prominent yellow box right at the top. Requests has distinct mechanisms for retrieving bytes versus text and is vastly easier to use overall, though I don’t think even it understands reading encodings from HTML. Alas, computers.

Good luck to any beginner figuring out how to install Requests on Python 2 — but thankfully, Python 3 now comes bundled with pip, which makes installing libraries much easier. Contrast with the beginning of exercise 46, which apologizes for how difficult this is to explain, lists four things to install, warns that it will be frustrating, and advises watching a video to help figure it out.

What’s even more idiotic about this is Python has a really good Chardet library for detecting the encoding of byte streams. If Python 3 is supposed to be “batteries included” then fast Chardet should be baked into the core of Python 3’s strings making it cake to translate strings to bytes even if you don’t know the underlying encoding. … Call the function whatever you want, but it’s not magic to guess at the encoding of a byte stream, it’s science. The only reason this isn’t done for you is that the Python project decided that you should be punished for not knowing about bytes vs. Unicode, and their arrogance means you have difficult to use strings.

Guessing at the encoding of a byte stream isn’t so much science as, well, guessing. Guessing means that sometimes you’re wrong. Sometimes that’s what you want, and I’m honestly ambivalent about having chardet in the standard library, but it’s hardly arrogant to not want to include a highly-fallible heuristic in your programming language.
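You don't even need a heuristic library to see why. The same byte string decodes "successfully" under several encodings, producing different text each time, so any guess can be confidently wrong:

```python
# Two valid decodings of the same bytes -- no guesser can tell which
# one the author meant just by looking at the data.
mystery = b"\xc3\xa9"

as_utf8 = mystery.decode("utf-8")      # 'é', one character
as_latin1 = mystery.decode("latin-1")  # 'Ã©', two characters

print(repr(as_utf8), repr(as_latin1))
```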

Conclusions and Warnings

I have resisted writing about these problems with Python 3 for 5 versions because I hoped it would become usable for beginners. Each year I would attempt to convert some of my code and write a couple small tests with Python 3 and simply fail. If I couldn’t use Python 3 reliably then there’s no way a total beginner could manage it. So each year I’d attempt it, and fail, and wait until they fix it. I really liked Python and hoped the Python project would drop their stupid stances on usability.

Let us recap the usability problems seen thus far.

  • You can’t add b"hello" to "hello".
  • TypeErrors are phrased exactly the same as they were in Python 2.
  • The type system is exactly as dynamic as it was in Python 2.
  • There is a new formatting mechanism, using the same syntax as one in Python 2, that Zed prefers over the ones in Python 2.
  • urllib.request doesn’t decode for you, just like in Python 2.
  • 档牡敤㽴 isn’t built in. Oh, sorry, I meant chardet.

Currently, the state of strings is viewed as a Good Thing in the Python community. The fact that you can’t run Python 2 inside Python 3 is seen as a weird kind of tough love. The brainwashing goes so far as to outright deny the mathematics behind language translation and compilation in an attempt to motivate the Python community to brute force convert all Python 2 code.

Which is probably why the Python project focuses on convincing unsuspecting beginners to use Python 3. They don’t have a switching cost, so if you get them to fumble their way through the Python 3 usability problems then you have new converts who don’t know any better. To me this is morally wrong and is simply preying on people to prop up a project that needs a full reset to survive. It means beginners will fail at learning to code not because of their own abilities, but because of Python 3’s difficulty.

Now that we’re towards the end, it’s a good time to say this: Zed Shaw, your behavior here is fucking reprehensible.

Half of what’s written here is irrelevant nonsense backed by a vague appeal to “mathematics”. Instead of having even the shred of humility required to step back and wonder if there are complicating factors beyond whether something is theoretically possible, you have invented a variety of conflicting and malicious motivations to ascribe to the Python project.

It’s fine to criticize Python 3. The string changes force you to think about what you’re doing a little more in some cases, and occasionally that’s a pain in the ass. I absolutely get it.

But you’ve gone out of your way to invent a conspiracy out of whole cloth and promote it on your popular platform aimed at beginners, who won’t know how obviously full of it you are. And why? Because you can’t add b"hello" to "hello"? Are you kidding me? No one can even offer to help you, because instead of examples of real problems you’ve had, you gave two trivial toys and then yelled a lot about how the whole Python project is releasing mind-altering chemicals into the air.

The Python 3 migration has been hard enough. It’s taken a lot of work from a lot of people who’ve given enough of a crap to help Python evolve — to make it better to the best of their judgment and abilities. Now we’re finally, finally at the point where virtually all libraries support Python 3, a few new ones only support Python 3, and Python 3 adoption is starting to take hold among application developers.

And you show up to piss all over it, to propagate this myth that Python 3 is hamstrung to the point of unusability, because if the Great And Wise Zed Shaw can’t figure it out in ten seconds then it must just be impossible.

Fuck you.

Sadly, I doubt this will happen, and instead they’ll just rant about how I don’t know what I’m talking about and I should shut up.

This is because you don’t know what you’re talking about, and you should shut up.

A Rebuttal For Python 3

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/

Zed Shaw, of Learn Python the Hard Way fame, has now written The Case Against Python 3.

I’m not involved with core Python development. The only skin I have in this game is that I like Python 3. It’s a good language. And one of the big factors I’ve seen slowing its adoption is that respected people in the Python community keep grouching about it. I’ve had multiple newcomers tell me they have the impression that Python 3 is some kind of unusable disaster, though they don’t know exactly why; it’s just something they hear from people who sound like they know what they’re talking about. Then they actually use the language, and it’s fine.

I’m sad to see the Python community needlessly sabotage itself, but Zed’s contribution is beyond the pale. It’s not just making a big deal about changed details that won’t affect most beginners; it’s complete and utter nonsense, on a platform aimed at people who can’t yet recognize it as nonsense. I am so mad.

The Case Against Python 3

I give two sets of reasons as I see them now. One for total beginners, and another for people who are more knowledgeable about programming.

Just to note: the two sets of reasons are largely the same ideas presented differently, so I’ll just weave them together below.

The first section attempts to explain the case against starting with Python 3 in non-technical terms so a beginner can make up their own mind without being influenced by propaganda or social pressure.

Having already read through this once, this sentence really stands out to me. The author of a book many beginners read to learn Python in the first place is providing a number of reasons (some outright fabricated) not to use Python 3, often in terms beginners are ill-equipped to evaluate, but believes this is a defense against propaganda or social pressure.

The Most Important Reason

Before getting into the main technical reasons I would like to discuss the one most important social reason for why you should not use Python 3 as a beginner:

THERE IS A HIGH PROBABILITY THAT PYTHON 3 IS SUCH A FAILURE IT WILL KILL PYTHON.

Python 3’s adoption is really only at about 30% whenever there is an attempt to measure it.

Wait, really? Wow, that’s fantastic.

I mean, it would probably be higher if the most popular beginner resources were actually teaching Python 3, but you know.

Nobody is all that interested in finding out what the real complete adoption is, despite there being fairly simple ways to gather metrics on the adoption.

This accusatory sentence conspicuously neglects to mention what these fairly simple ways are, a pattern that repeats throughout. The trouble is that it’s hard to even define what “adoption” means — I write all my code in Python 3 now, but veekun is still Python 2 because it’s in maintenance mode, so what does that say about adoption? You could look at PyPI download stats, but those are thrown way off by caches and system package managers. You could look at downloads from the Python website, but a great deal of Python is written and used on Unix-likes, where Python itself is either bundled or installed from the package manager.

It’s as simple as that. If you learn Python 2, then you can still work with all the legacy Python 2 code in existence until Python dies or you (hopefully) move on. But if you learn Python 3 then your future is very uncertain. You could really be learning a dead language and end up having to learn Python 2 anyway.

You could use Python 2, until it dies… or you could use Python 3, which might die. What a choice.

By some definitions, Python 2 is already dead — it will not see another major release, only security fixes. Python 3 is still actively developed, and its seventh major release is next month. It even contains a new feature that Zed later mentions he prefers to Python 2’s offerings.

It may shock you to learn that I know both Python 2 and Python 3. Amazingly, two versions of the same language are much more similar than they are different. If you learned Python 3 and then a wizard cast a spell that made it vanish from the face of the earth, you’d just have to spend half an hour reading up on what had changed from Python 2.

Also, it’s been over a decade, maybe even multiple decades, and Python 3 still isn’t above about 30% in adoption. Even among the sciences where Python 3 is touted as a “success” it’s still only around 25-30% adoption. After that long it’s time to admit defeat and come up with a new plan.

Python 3.0 came out in 2008. The first couple releases ironed out some compatibility and API problems, so it didn’t start to gain much traction until Python 3.2 came out in 2011. Hell, Python 2.0 came out in 2000, so even Python 2 isn’t multiple decades old. It would be great if this trusted beginner reference could take two seconds to check details like this before using them to scaremonger.

The big early problem was library compatibility: it’s hard to justify switching to a new version of the language if none of the libraries work. Libraries could only port once their own dependencies had ported, of course, and it took a couple years to figure out the best way to maintain compatibility with both Python 2 and Python 3. I’d say we only really hit critical mass a few years ago — for instance, Django didn’t support Python 3 until 2013 — in which case that 30% is nothing to sneeze at.

There are more reasons beyond just the uncertain future of Python 3 even decades later.

In one paragraph, we’ve gone from “maybe even multiple decades” to just “decades”, which is a funny way to spell “eight years”.

Not In Your Best Interests

The Python project’s efforts to convince you to start with Python 3 are not in your best interest, but, rather, are only in the best interests of the Python project.

It’s bad, you see, for the Python project to want people to use the work it produced.

Anyway, please buy Zed Shaw’s book.

Anyway, please pledge to my Patreon.

Ultimately though, if Python 3 were good they wouldn’t need to do any convincing to get you to use it. It would just naturally work for you and you wouldn’t have any problems. Instead, there are serious issues with Python 3 for beginners, and rather than fix those issues the Python project uses propaganda, social pressure, and marketing to convince you to use it. In the world of technology using marketing and propaganda is immediately a sign that the technology is defective in some obvious way.

This use of social pressure and propaganda to convince you to use Python 3 despite its problems, in an attempt to benefit the Python project, is morally unconscionable to me.

Ten paragraphs in, Zed is telling me that I should be suspicious of anything that relies on marketing and propaganda. Meanwhile, there has yet to be a single concrete reason why Python 3 is bad for beginners — just several flat-out incorrect assertions and a lot of handwaving about how inexplicably nefarious the Python core developers are. You know, the same people who made Python 2. But they weren’t evil then, I guess.

You Should Be Able to Run 2 and 3

In the programming language theory there is this basic requirement that, given a “complete” programming language, I can run any other programming language. In the world of Java I’m able to run Ruby, Java, C++, C, and Lua all at the same time. In the world of Microsoft I can run F#, C#, C++, and Python all at the same time. This isn’t just a theoretical thing. There is solid math behind it. Math that is truly the foundation of computer science.

The fact that you can’t run Python 2 and Python 3 at the same time is purely a social and technical decision that the Python project made with no basis in mathematical reality. This means you are working with a purposefully broken platform when you use Python 3, and I personally can’t condone teaching people to use something that is fundamentally broken.

The programmer-oriented section makes clear that the solid math being referred to is Turing-completeness — the section is even titled “Python 3 Is Not Turing Complete”.

First, notice a rhetorical trick here. You can run Ruby, Java, C++, etc. at the same time, so why not Python 2 and Python 3?

But can you run Java and C# at the same time? (I’m sure someone has done this, but it’s certainly much less popular than something like Jython or IronPython.)

Can you run Ruby 1.8 and Ruby 2.3 at the same time? Ah, no, so I guess Ruby 2.3 is fundamentally and purposefully broken.

Can you run Lua 5.1 and 5.3 at the same time? Lua is a spectacular example, because Lua 5.2 made a breaking change to how the details of scope work, and it’s led to a situation where a lot of programs that embed Lua haven’t bothered upgrading from Lua 5.1. Was Lua 5.2 some kind of dark plot to deliberately break the language? No, it’s just slightly more inconvenient than expected for people to upgrade.

Anyway, as for Turing machines:

In computer science a fundamental law is that if I have one Turing Machine I can build any other Turing Machine. If I have COBOL then I can bootstrap a compiler for FORTRAN (as disgusting as that might be). If I have FORTH, then I can build an interpreter for Ruby. This also applies to bytecodes for CPUs. If I have a Turing Complete bytecode then I can create a compiler for any language. The rule then can be extended even further to say that if I cannot create another Turing Machine in your language, then your language cannot be Turing Complete. If I can’t use your language to write a compiler or interpreter for any other language then your language is not Turing Complete.

Yes, this is true.

Currently you cannot run Python 2 inside the Python 3 virtual machine. Since I cannot, that means Python 3 is not Turing Complete and should not be used by anyone.

And this is completely asinine. Worse, it’s flat-out dishonest, and relies on another rhetorical trick. You only “cannot” run Python 2 inside the Python 3 VM because no one has written a Python 2 interpreter in Python 3. The “cannot” is not a mathematical impossibility; it’s a simple matter of the code not having been written. Or perhaps it has, but no one cares anyway, because it would be comically and unusably slow.

I assume this was meant to be sarcastic on some level, since it’s followed by a big blue box that seems unsure about whether to double down or reverse course. But I can’t tell why it was even brought up, because it has absolutely nothing to do with Zed’s true complaint, which is that Python 2 and Python 3 do not coexist within a single environment. Implementing language X using language Y does not mean that X and Y can now be used together seamlessly.

The canonical Python release is written in C (just like with Ruby or Lua), but you can’t just dump a bunch of C code into a Python (or Ruby or Lua) file and expect it to work. You can talk to C from Python and vice versa, but defining how they communicate is a bit of a pain in the ass and requires some level of setup.

I’ll get into this some more shortly.

No Working Translator

Python 3 comes with a tool called 2to3 which is supposed to take Python 2 code and translate it to Python 3 code.

I should point out right off the bat that this is not actually what you want to use most of the time, because you probably want to translate your Python 2 code to Python 2/3 code. 2to3 produces code that most likely will not work on Python 2. Other tools exist to help you port more conservatively.

Translating one programming language into another is a solidly researched topic with solid math behind it. There are translators that convert any number of languages into JavaScript, C, C++, Java, and many times you have no idea the translation is being done. In addition to this, one of the first steps when implementing a new language is to convert the new language into an existing language (like C) so you don’t have to write a full compiler. Translation is a fully solved problem.

This is completely fucking ludicrous. Translating one programming language to another is a common task, though “fully solved” sounds mighty questionable. But do you know what the results look like?

I found a project called “Transcrypt”, which puts Python in the browser by “translating” it to JavaScript. I’ve never used or heard of this before; I just googled for something to convert Python to JavaScript. Here’s their first sample, a demo using jQuery:

def start ():
    def changeColors ():
        for div in S__divs:
            S (div) .css ({
                'color': 'rgb({},{},{})'.format (* [int (256 * Math.random ()) for i in range (3)]),
            })

    S__divs = S ('div')
    changeColors ()
    window.setInterval (changeColors, 500)

And here’s the JavaScript code it compiles to:

(function () {
    var start = function () {
        var changeColors = function () {
            var __iterable0__ = $divs;
            for (var __index0__ = 0; __index0__ < __iterable0__.length; __index0__++) {
                var div = __iterable0__ [__index0__];
                $ (div).css (dict ({'color': 'rgb({},{},{})'.format.apply (null, function () {
                    var __accu0__ = [];
                    for (var i = 0; i < 3; i++) {
                        __accu0__.append (int (256 * Math.random ()));
                    }
                    return __accu0__;
                } ())}));
            }
        };
        var $divs = $ ('div');
        changeColors ();
        window.setInterval (changeColors, 500);
    };
    __pragma__ ('<all>')
        __all__.start = start;
    __pragma__ ('</all>')
}) ();

Well, not quite. That’s actually just a small piece at the end of the full 1861-line file.

You may notice that the emitted JavaScript effectively has to emulate the Python for loop, because JavaScript doesn’t have anything that works exactly the same way. And this is a basic, common language feature translated between two languages in the same general family! Imagine how your code would look if you relied on gritty details of how classes are implemented.

Is this what you want 2to3 to do to your code?

Even if something has been proven to be mathematically possible, that doesn’t mean it’s easy, and it doesn’t mean the results will be pretty (or fast).

The 2to3 translator fails on about 15% of the code it attempts, and does a poor job of translating the code it can handle. The motivations for this are unclear, but keep in mind that a group of people who claim to be programming language experts can’t write a reliable translator from one version of their own language to another. This is also a cause of their porting problems, which adds up to more evidence Python 3’s future is uncertain.

Writing a translator from one language to another is a fully proven and fundamental piece of computer science. Yet, the 2to3 translator cannot translate code 100%. In my own tests it is only about 85% effective, leaving a large amount of code to translate manually. Given that translation is a solved problem this seems to be a decision bordering on malice rather than incredible incompetence.

The programmer-oriented section doubles down on this idea with a title of “Purposefully Crippled 2to3 Translator” — again, accusing the Python project of sabotaging everyone. That doesn’t even make sense; if their goal is to make everyone use Python 3 at any cost, why would they deliberately break their tool that reduces the amount of Python 2 code and increases the amount of Python 3 code?

2to3 sucks because its job is hard. Python is dynamically typed. If it sees d.iteritems(), it might want to change that to d.items(), as it’s called in Python 3 — but it can’t always be sure that d is actually a dict. If d is some user-defined type, renaming the method is wrong.
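Here's a sketch of the ambiguity, using a hypothetical user-defined type (my example, not one from 2to3's docs). Blindly renaming every `.iteritems()` call to `.items()` would break it:

```python
# A hypothetical class that defines its own iteritems() and has no items().
# A purely syntactic rename of d.iteritems() -> d.items() would call a
# method that doesn't exist -- and 2to3 can't know d's type in advance.
class PairStore:
    def __init__(self, pairs):
        self._pairs = list(pairs)

    def iteritems(self):
        return iter(self._pairs)

d = PairStore([("a", 1), ("b", 2)])
result = dict(d.iteritems())     # works: the class defines iteritems
has_items = hasattr(d, "items")  # False: the "translated" call would fail

print(result, has_items)
```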

But hey, Turing-completeness, right? It must be mathematically possible. And it is! As long as you’re willing to see this:

for key, value in d.iteritems():
    ...

Get translated to this:

__d = d
for key, value in (__d.items() if isinstance(__d, dict) else __d.iteritems()):
    ...

Would Zed be happier with that, I wonder?

The JVM and CLR Prove It’s Pointless

Yet, for some reason, the Python 3 virtual machine can’t run Python 2? Despite the solidly established mathematics disproving this, the countless examples of running one crazy language inside a Russian doll cascade of other crazy languages, and huge number of languages that can coexist in nearly every other virtual machine? That makes no sense.

This, finally, is the real complaint. It’s not a bad one, and it comes up sometimes, but… it’s not this easy.

The Python 3 VM is fairly similar to the Python 2 VM. The problem isn’t the VM, but the core language constructs and standard library.

Consider: what happens when a Python 2 old-style class instance gets passed into Python 3, which has no such concept? It seems like a value would have to always have the semantics of the language version it came from — that’s how languages usually coexist on the same VM, anyway.

Now, I’m using Python 3, and I load some library written for Python 2. I call a Python 2 function that deals with bytestrings, and I pass it a Python 3 bytestring. Oh no! It breaks because Python 3 bytestrings iterate as integers, whereas the Python 2 library expects them to iterate as characters.
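The iteration difference is easy to see from the Python 3 side alone — Python 2 code written to expect one-character strings would choke on these integers:

```python
# Iterating a bytestring yields integers in Python 3
# (in Python 2 it yielded one-character strings like 'a').
chunks = list(b"abc")

print(chunks)                         # [97, 98, 99]
print([bytes([b]) for b in chunks])   # recovering the Python 2-style view
```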

Okay, well, no big deal, you say. Maybe Python 2 libraries just need to be updated to work either way, before they can be used with Python 3.

But that’s exactly the situation we’re in right now. Syntax changes are trivially fixed by 2to3 and similar tools. It’s libraries that cause the subtler issues.

The same applies the other way, too. I write Python 3 code, and it gets an int from some Python 2 library. I try to use the .to_bytes method on it, but that doesn’t exist on Python 2 integers. So my Python 3 code, written and intended purely for Python 3, now has to deal with Python 2 integers as well.
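For the record, the method in question exists only on Python 3's unified int type:

```python
n = 258

# int.to_bytes and int.from_bytes are Python 3 additions; a Python 2
# int (or long) has neither, which is exactly the leak described above
print(n.to_bytes(2, "big"))                 # b'\x01\x02'
print(int.from_bytes(b"\x01\x02", "big"))   # 258
```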

Perhaps “primitive” types should convert automatically, on the boundary? Okay, sure. What about the Python 2 buffer type, which is C-backed and replaced by memoryview in Python 3?

Or how about this very fundamental problem: names of methods and other attributes are str in both versions, but that means they’re bytestrings in Python 2 and text in Python 3. If you’re in Python 3 land, and you call obj.foo() on a Python 2 object, what happens? Python 3 wants a method with the text name foo, but Python 2 wants a method with the bytes name foo. Text and bytes are not implicitly convertible in Python 3. So does it error? Somehow work anyway? What about the other way around?

What about the standard library, which has had a number of improvements in Python 3 that don’t or can’t exist in Python 2? Should Python ship two entire separate copies of its standard library? What about modules like logging, which rely on global state? Does Python 2 and Python 3 code need to set up logging separately within the same process?

There are no good solutions here. The language would double in size and complexity, and you’d still end up with a mess at least as bad as the one we have now when values leak from one version into the other.

We either have two situations here:

  1. Python 3 has been purposefully crippled to prevent Python 2’s execution alongside Python 3 for someone’s professional or ideological gain.
  2. Python 3 cannot run Python 2 due to simple incompetence on the part of the Python project.

I can think of a third.

Difficult To Use Strings

The strings in Python 3 are very difficult to use for beginners. In an attempt to make their strings more “international” they turned them into difficult to use types with poor error messages.

Why is “international” in scare quotes?

Every time you attempt to deal with characters in your programs you’ll have to understand the difference between byte sequences and Unicode strings.

Given that I’m reading part of a book teaching Python, this would be a perfect opportunity to drive this point home by saying “Look! Running exercise N in Python 3 doesn’t work.” Exercise 1, at least, works fine for me with a little extra sprinkle of parentheses:

print("Hello World!")
print("Hello Again")
print("I like typing this.")
print("This is fun.")
print('Yay! Printing.')
print("I'd much rather you 'not'.")
print('I "said" do not touch this.')

Contrast with the actual content of that exercise — at the bottom is a big red warning box telling people from “another country” (relative to where?) that if they get errors about ASCII encodings, they should put an unexplained magical incantation at the top of their scripts to fix “Unicode UTF-8”, whatever that is. I wonder if Zed has read his own book.

Don’t know what that is? Exactly.

If only there were a book that could explain it to beginners in more depth than “you have to fix this if you’re foreign”.

The Python project took a language that is very forgiving to beginners and mostly “just works” and implemented strings that require you to constantly know what type of string they are. Worst of all, when you get an error with strings (which is very often) you get an error message that doesn’t tell you what variable names you need to fix.

The complaint is that this happens in Python 3, whereas it’s accepted in Python 2:

>>> b"hello" + "hello"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't concat bytes to str

The programmer section is called “Statically Typed Strings”. But this is not static typing. That’s strong typing, a property that sets Python’s type system apart from languages like JavaScript. It’s usually considered a good thing, because the alternative is to silently produce nonsense in some cases, and then that nonsense propagates through your program and is hard to track down when it finally causes problems.
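The distinction matters in practice. JavaScript's weak typing happily evaluates 1 + "1" to the string "11"; Python's strong typing refuses the same operation the moment it runs:

```python
# Strong typing: mixing incompatible types is a runtime error...
try:
    1 + "1"
except TypeError as e:
    print("TypeError:", e)

# ...but only at runtime — catching it before the program runs
# would be *static* typing, which Python has never had
```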

If they’re going to require beginners to struggle with the difference between bytes and Unicode the least they could do is tell people what variables are bytes and what variables are strings.

That would be nice, but it’s not like this is a new problem. Try this in Python 2.

>>> 3 + "hello"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'

How would Python even report this error when I used literals instead of variables? How could custom types hook into such a thing? Error messages are hard.

By the way, did you know that several error messages are much improved in Python 3? Python 2 is somewhat notorious for the confusing errors it produces when an argument is missing from a method call, but Python 3 is specific about the problem, which is much friendlier to beginners.
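A quick illustration of the improvement (the Python 2 wording in the comment is CPython 2.7's):

```python
def greet(greeting, name):
    return f"{greeting}, {name}!"

try:
    greet("Hello")
except TypeError as e:
    # Python 3: greet() missing 1 required positional argument: 'name'
    # Python 2: greet() takes exactly 2 arguments (1 given)
    print(e)
```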

However, when you point out that this is hard to use they try to claim it’s good for you. It is not. It’s simple blustering covering for a poor implementation.

I don’t know what about this is hard. Why do you have a text string and a bytestring in the first place? Why is it okay to refuse adding a number to a string, but not to refuse adding bytes to a string?

Imagine if one of the Python core developers were just getting into Python 2 and messing around.

# -*- coding: utf8 -*-
print "Hi, my name is Łukasz Langa."
print "Hi, my name is Łukasz Langa."[::-1]

Hi, my name is Łukasz Langa.
.agnaL zsaku�� si eman ym ,iH

Good luck figuring out how to fix that.

This isn’t blustering. Bytes are not text; they are binary data that could encode anything. They happen to look like text sometimes, and you can get away with thinking they’re text if you’re not from “another country”, but that mindset will lead you to write code that is wrong. The resulting bugs will be insidious and confusing, and you’ll have a hard time even reasoning about them because it’ll seem like “Unicode text” is somehow a different beast altogether from “ASCII text”.

Exercise 11 mentions at the end that you can use int() to convert a number to an integer. It’s no more complicated to say that you convert bytes to a string using .decode(). It shouldn’t even come up unless you’re explicitly working with binary data, and I don’t see any reading from sockets in LPTHW.
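The two conversions really are the same shape:

```python
# str -> int, as exercise 11 teaches
print(int("42") + 1)           # 43

# bytes -> str: one call, naming the encoding
raw = b"caf\xc3\xa9"
print(raw.decode("utf-8"))     # café

# and the reverse
print("café".encode("utf-8"))  # b'caf\xc3\xa9'
```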

It’s also not statically compiled as strongly as it could be, so you can’t find these kinds of type errors until you run the code.

This comes a scant few paragraphs after “Dynamic typing is what makes Python easy to use and one of the reasons I advocate it for beginners.”

You can’t find any kinds of type errors until you run the code. Welcome to dynamic typing.

Strings are also most frequently received from an external source, such as a network socket, file, or similar input. This means that Python 3’s statically typed strings and lack of static type safety will cause Python 3 applications to crash more often and have more security problems when compared with Python 2.

On the contrary — Python 3 applications should crash less often. The problem with silently converting between bytestrings and text in Python 2 is that it might fail, depending on the contents. "cafe" + u"hello" works fine, but "café" + u"hello" raises a UnicodeDecodeError. Python 2 makes it very easy to write code that appears to work when tested with ASCII data, but later breaks with anything else, even though the values are still the same types. In Python 3, you get an error the first time you try to run such code, regardless of what’s in the actual values. That’s the biggest reason for the change: it improves things from being intermittent value errors to consistent type errors.

More security problems? This is never substantiated, and seems to have been entirely fabricated.

Too Many Formatting Options

In addition to that you will have 3 different formatting options in Python 3.6. That means you’ll have to learn to read and use multiple ways to format strings that are all very different. Not even I, an experienced professional programmer, can easily figure out these new formatting systems or keep up with their changing features.

I don’t know what on earth “keep up with their changing features” is supposed to mean, and Zed doesn’t bother to go into details.

Python 3 has three ways to format strings: % interpolation, str.format(), and the new f"" strings in Python 3.6. The f"" strings use the same syntax as str.format(); the difference is that where str.format() uses numbers or names of keyword arguments, f"" strings just use expressions. Compare:

number = 133
print("{n:02x}".format(n=number))
print(f"{number:02x}")

This isn’t “very different”. A frequently-used method is being promoted to syntax.
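For completeness, here are all three mechanisms side by side, producing identical output:

```python
number = 133

print("%02x" % number)            # 85 — % interpolation, inherited from C
print("{:02x}".format(number))    # 85 — str.format(), available since 2.6
print(f"{number:02x}")            # 85 — f-strings, new in 3.6
```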

I really like this new style, and I have no idea why this wasn’t the formatting for Python 3 instead of that stupid .format function. String interpolation is natural for most people and easy to explain.

The problem is that beginner will now have to know all three of these formatting styles, and that’s too many.

I could swear Zed, an experienced professional programmer, just said he couldn’t easily figure out these new formatting systems. Note also that str.format() has existed in Python 2 since Python 2.6 was released in 2008, so I don’t know why Zed said “new formatting systems”, plural.

This is a truly bizarre complaint overall, because the mechanism Zed likes best is the newest one. If Python core had agreed that three mechanisms was too many, we wouldn’t be getting f"" at all.

Even More Versions of Strings

Finally, I’m told there is a new proposal for a string type that is both bytes and Unicode at the same time? That’d be fantastic if this new type brings back the dynamic typing that makes Python easy, but I’m betting it will end up being yet another static type to learn. For that reason I also think beginners should avoid Python 3 until this new “chimera string” is implemented and works reliably in a dynamic way. Until then, you will just be dealing with difficult strings that are statically typed in a dynamically typed language.

I have absolutely no idea what this is referring to, and I can’t find anyone who does. I don’t see any recent PEPs mentioning such a thing, nor anything in the last several months on the python-dev mailing list. I don’t see it in the Python 3.6 release notes.

The closest thing I can think of is the backwards-compatibility shenanigans for PEP 528 and PEP 529 — they switch to the Windows wide-string APIs for console and filesystem encoding, but pretend under the hood that the APIs take UTF-8-encoded bytes to avoid breaking libraries like Twisted. That’s a microscopic detail that should never matter to anyone but authors of Twisted, and is nothing like a new hybrid string type, but otherwise I’m at a loss.

This paragraph really is a perfect summary of the whole article. It speaks vaguely yet authoritatively about something that doesn’t seem to exist, it doesn’t bother actually investigating the thing the entire section talks about, it conjectures that this mysterious feature will be hard just because it’s in Python 3, and it misuses terminology to complain about a fundamental property of Python that’s always existed.

Core Libraries Not Updated

Many of the core libraries included with Python 3 have been rewritten to use Python 3, but have not been updated to use its features. How could they given Python 3’s constant changing status and new features?

What “constant changing status”? The language makes new releases; is that bad? The only mention of “changing” so far was with string formatting, which makes no sense to me, because the only major change has been the addition of syntax that Zed prefers.

There are several libraries that, despite knowing the encoding of data, fail to return proper strings. The worst offender seems to be any libraries dealing with the HTTP protocol, which does indicate the encoding of the underlying byte stream in many cases.

In many cases, yes. Not in all. Some web servers don’t send back an encoding. Some files don’t have an encoding, because they’re images or other binary data. HTML allows the encoding to be given inside the document, instead. urllib has always returned bytes, so it’s not all that unreasonable to keep doing that, rather than… well, I’m not quite sure what this is proposing. Return strings sometimes?
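When a server does declare a charset, using it takes one extra line, because response headers in Python 3's http machinery are email.message objects with a charset accessor. A sketch with the network call replaced by canned data (the header and body here are made up for illustration):

```python
from email.message import Message

# Stand-in for the .headers attribute of a urllib.request response
headers = Message()
headers["Content-Type"] = "text/html; charset=iso-8859-1"

raw = b"caf\xe9"  # what resp.read() would have returned

charset = headers.get_content_charset("utf-8")  # fall back if unspecified
print(charset)               # iso-8859-1
print(raw.decode(charset))   # café
```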

The documentation for urllib.request and http.client both advise using the higher-level Requests library instead, in a prominent yellow box right at the top. Requests has distinct mechanisms for retrieving bytes versus text and is vastly easier to use overall, though I don’t think even it understands reading encodings from HTML. Alas, computers.

Good luck to any beginner figuring out how to install Requests on Python 2 — but thankfully, Python 3 now comes bundled with pip, which makes installing libraries much easier. Contrast with the beginning of exercise 46, which apologizes for how difficult this is to explain, lists four things to install, warns that it will be frustrating, and advises watching a video to help figure it out.

What’s even more idiotic about this is Python has a really good Chardet library for detecting the encoding of byte streams. If Python 3 is supposed to be “batteries included” then fast Chardet should be baked into the core of Python 3’s strings making it cake to translate strings to bytes even if you don’t know the underlying encoding. … Call the function whatever you want, but it’s not magic to guess at the encoding of a byte stream, it’s science. The only reason this isn’t done for you is that the Python project decided that you should be punished for not knowing about bytes vs. Unicode, and their arrogance means you have difficult to use strings.

Guessing at the encoding of a byte stream isn’t so much science as, well, guessing. Guessing means that sometimes you’re wrong. Sometimes that’s what you want, and I’m honestly ambivalent about having chardet in the standard library, but it’s hardly arrogant to not want to include a highly-fallible heuristic in your programming language.
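A few lines suffice to show why it's guessing: the same bytes decode without error under several encodings, and nothing flags which one the author meant.

```python
mystery = b"caf\xe9"

# Both of these "succeed" — only one can be what the author meant
print(mystery.decode("latin-1"))   # café
print(mystery.decode("cp1251"))    # cafй

# The only honest outcome is the one that refuses to guess
try:
    mystery.decode("utf-8")
except UnicodeDecodeError:
    print("not valid UTF-8")
```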

Conclusions and Warnings

I have resisted writing about these problems with Python 3 for 5 versions because I hoped it would become usable for beginners. Each year I would attempt to convert some of my code and write a couple small tests with Python 3 and simply fail. If I couldn’t use Python 3 reliably then there’s no way a total beginner could manage it. So each year I’d attempt it, and fail, and wait until they fix it. I really liked Python and hoped the Python project would drop their stupid stances on usability.

Let us recap the usability problems seen thus far.

  • You can’t add b"hello" to "hello".
  • TypeErrors are phrased exactly the same as they were in Python 2.
  • The type system is exactly as dynamic as it was in Python 2.
  • There is a new formatting mechanism, using the same syntax as one in Python 2, that Zed prefers over the ones in Python 2.
  • urllib.request doesn’t decode for you, just like in Python 2.
  • 档牡敤㽴 isn’t built in. Oh, sorry, I meant chardet.

Currently, the state of strings is viewed as a Good Thing in the Python community. The fact that you can’t run Python 2 inside Python 3 is seen as a weird kind of tough love. The brainwashing goes so far as to outright deny the mathematics behind language translation and compilation in an attempt to motivate the Python community to brute force convert all Python 2 code.

Which is probably why the Python project focuses on convincing unsuspecting beginners to use Python 3. They don’t have a switching cost, so if you get them to fumble their way through the Python 3 usability problems then you have new converts who don’t know any better. To me this is morally wrong and is simply preying on people to prop up a project that needs a full reset to survive. It means beginners will fail at learning to code not because of their own abilities, but because of Python 3’s difficulty.

Now that we’re towards the end, it’s a good time to say this: Zed Shaw, your behavior here is fucking reprehensible.

Half of what’s written here is irrelevant nonsense backed by a vague appeal to “mathematics”. Instead of having even the shred of humility required to step back and wonder if there are complicating factors beyond whether something is theoretically possible, you have invented a variety of conflicting and malicious motivations to ascribe to the Python project.

It’s fine to criticize Python 3. The string changes force you to think about what you’re doing a little more in some cases, and occasionally that’s a pain in the ass. I absolutely get it.

But you’ve gone out of your way to invent a conspiracy out of whole cloth and promote it on your popular platform aimed at beginners, who won’t know how obviously full of it you are. And why? Because you can’t add b"hello" to "hello"? Are you kidding me? No one can even offer to help you, because instead of examples of real problems you’ve had, you gave two trivial toys and then yelled a lot about how the whole Python project is releasing mind-altering chemicals into the air.

The Python 3 migration has been hard enough. It’s taken a lot of work from a lot of people who’ve given enough of a crap to help Python evolve — to make it better to the best of their judgment and abilities. Now we’re finally, finally at the point where virtually all libraries support Python 3, a few new ones only support Python 3, and Python 3 adoption is starting to take hold among application developers.

And you show up to piss all over it, to propagate this myth that Python 3 is hamstrung to the point of unusability, because if the Great And Wise Zed Shaw can’t figure it out in ten seconds then it must just be impossible.

Fuck you.

Sadly, I doubt this will happen, and instead they’ll just rant about how I don’t know what I’m talking about and I should shut up.

This is because you don’t know what you’re talking about, and you should shut up.

On Trump

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2016/11/on-trump.html

I dislike commenting on politics. I think it’s difficult to contribute any novel thought – and in today’s hyper-polarized world, stating an unpopular or half-baked opinion is a recipe for losing friends or worse. Still, with many of my colleagues expressing horror and disbelief over what happened on Tuesday night, I reluctantly decided to jot down my thoughts.

I think that in trying to explain away the meteoric rise of Mr. Trump, many of the mainstream commentators have focused on two phenomena. Firstly, they singled out the emergence of “filter bubbles” – a mechanism that allows people to reinforce their own biases and shields them from opposing views. Secondly, they implicated the dark undercurrents of racism, misogyny, or xenophobia that still permeate some corners of our society. From that ugly place, the connection to Mr. Trump’s foul-mouthed populism was not hard to make; his despicable bragging about women aside, to his foes, even an accidental hand gesture or an inane 4chan frog meme was proof enough. Once we crossed this line, the election was no longer about economic policy, the environment, or the like; it was an existential battle for equality and inclusiveness against the forces of evil that lurk in our midst. Not a day went by without a comparison between Mr. Trump and Adolf Hitler in the press. As for the moderate voters, the pundits had an explanation, too: the right-wing filter bubble must have clouded their judgment and created a false sense of equivalency between a horrid, conspiracy-peddling madman and our cozy, liberal status quo.

Now, before I offer my take, let me be clear that I do not wish to dismiss the legitimate concerns about the overtones of Mr. Trump’s campaign. Nor do I desire to downplay the scale of discrimination and hatred that the societies around the world are still grappling with, or the potential that the new administration could make it worse. But I found the aforementioned explanation of Mr. Trump’s unexpected victory to be unsatisfying in many ways. Ultimately, we all live in bubbles and we all have biases; in that regard, not much sets CNN apart from Fox News, Vox from National Review, or The Huffington Post from Breitbart. The reason why most of us would trust one and despise the other is that we instinctively recognize our own biases as more benign. After all, in the progressive world, we are fighting for an inclusive society that gives all people a fair chance to succeed. As for the other side? They seem like a bizarre, cartoonishly evil coalition of dimwits, racists, homophobes, and the ultra-rich. We even have serious scientific studies to back that up; their authors breathlessly proclaim that the conservative brain is inferior to the progressive brain. Unlike the conservatives, we believe in science, so we hit the “like” button and retweet the news.

But here’s the thing: I know quite a few conservatives, many of whom have probably voted for Mr. Trump – and they are about as smart, as informed, and as compassionate as my progressive friends. I think that the disconnect between the worldviews stems from something else: if you are a well-off person in a coastal city, you know people who are immigrants or who belong to other minorities, making you acutely attuned to their plight; but you may lack the same, deeply personal connection to – say – the situation of the lower middle class in the Midwest. You might have seen surprising charts or read a touching story in Mother Jones a few years back, but it’s hard to think of them as individuals; they are more of a socioeconomic obstacle, a problem to be solved. The same goes for our understanding of immigration or globalization: these phenomena make our high-tech hubs more prosperous and more open; the externalities of our policies, if any, are just an abstract price that somebody else ought to bear for doing what’s morally right. And so, when Mr. Trump promises to temporarily ban travel from Muslim countries linked to terrorism or anti-American sentiments, we (rightly) gasp in disbelief; but when Mr. Obama paints an insulting caricature of rural voters as simpletons who “cling to guns or religion or antipathy to people who aren’t like them”, we smile and praise him for his wit, not understanding how the other side could be so offended by the truth. Similarly, when Mrs. Clinton chuckles while saying “we are going to put a lot of coal miners out of business” to a cheering crowd, the scene does not strike us as thoughtless, offensive, or in poor taste. Maybe we will read a story about the miners in Mother Jones some day?

Of course, liberals take pride in caring for the common folk, but I suspect that their leaders’ attempts to reach out to the underprivileged workers in the “flyover states” often come across as ham-fisted and insincere. The establishment schools the voters about the inevitability of globalization, as if it were some cosmic imperative; they are told that to reject the premise would not just be wrong – but that it’d be a product of a diseased, nativist mind. They hear that the factories simply had to go to China or Mexico, and the goods just have to come back duty-free – all so that our complex, interconnected world can be a happier place. The workers are promised entitlements, but it stands to reason that they want dignity and hope for their children, not a lifetime on food stamps. The idle, academic debates about automation, post-scarcity societies, and Universal Basic Income probably come across as far-fetched and self-congratulatory, too.

The discourse is poisoned by cognitive biases in many other ways. The liberal media keeps writing about the unaccountable right-wing oligarchs who bankroll the conservative movement and supposedly poison people’s minds – but they offer nothing but praise when progressive causes are being bankrolled by Mr. Soros or Mr. Bloomberg. They claim that the conservatives represent “post-truth” politics – but their fact-checkers shoot down conservative claims over fairly inconsequential mistakes, while giving their favored politicians a pass on half-true platitudes about immigration, gun control, crime, or the sources of inequality. Mr. Obama sneers at the conservative bias of Fox News, but has no concern with the striking tilt to the left in the academia or in the mainstream press. The Economist finds it appropriate to refer to Trump supporters as “trumpkins” in print – but it would be unthinkable for them to refer to the fans of Mrs. Clinton using any sort of a mocking term. The pundits ponder the bold artistic statement made by the nude statues of the Republican nominee – but they would be disgusted if a conservative sculptor portrayed the Democratic counterpart in a similarly unflattering light. The commentators on MSNBC read into every violent incident at Trump rallies – but when a random group of BLM protesters starts chanting about killing police officers, we all agree it would not be fair to cast the entire movement in a negative light.

Most progressives are either oblivious to these biases, or dismiss them as a harmless casualty of fighting the good fight. Perhaps so – and it is not my intent to imply equivalency between the causes of the left and of the right. But in the end, I suspect that the liberal echo chamber contributed to the election of Mr. Trump far more than anything that ever transpired on the right. It marginalized and excluded legitimate but alien socioeconomic concerns from the mainstream political discourse, binning them with truly bigoted and unintelligent speech – and leaving the “flyover underclass” no option other than to revolt. And it wasn’t just a revolt of the awful fringes. On the right, we had Mr. Trump – a clumsy outsider who eschews many of the core tenets of the conservative platform, and who convincingly represents neither the neoconservative establishment of the Bush era nor the Bible-thumping religious right of the Tea Party. On the left, we had Mr. Sanders – an unaccomplished Senator who offered simplistic but moving slogans, who painted the accumulation of wealth as the source of our ills, and who promised to mold the United States into an idyllic version of the social democracies of Europe – supposedly governed by the workers, and not by the exploitative elites.

I think that people rallied behind Mr. Sanders and Mr. Trump not because they particularly loved the candidates or took all their promises seriously – but because they had no other credible herald for their cause. When the mainstream media derided their rebellion and the left simply laughed it off, it only served as a battle cry. When tens of millions of Trump supporters were labeled as xenophobic and sexist deplorables who deserved no place in politics, it only pushed more moderates toward the fringe. Suddenly, rational people could see themselves voting for a politically inexperienced and brash billionaire – a guy who talks about cutting taxes for the rich, who wants to cozy up to Russia, and whose VP pick previously wasn’t so sure about LGBT rights. I think it all happened not because of Mr. Trump’s character traits or thoughtful political positions, and not because half of the country hates women and minorities. He won because he was the only one to promise to “drain the swamp” – and to promise hope, not handouts, to the lower middle class.

There is a risk that this election will prove to be a step back for civil rights, or that Mr. Trump’s bold but completely untested economic policies will leave the world worse off; while not certain, it pains me to even contemplate this possibility. When we see injustice, we should fight tooth and nail. But for now, I am not swayed by the preemptively apocalyptic narrative on the left. Perhaps naively, I have faith in the benevolence of our compatriots and the strength of the institutions of – as cheesy as it sounds – one of the great nations of the world.

In which I have to debunk a second time

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/in-which-i-have-to-debunk-second-time.html

So Slate is doubling down on their discredited story of a secret Trump server. Tip for journalists: if you are going to argue against an expert debunking your story, try to contact that expert first, so they don’t have to do what I’m going to do here, showing obvious flaws. Also, pay attention to the data.

The experts didn’t find anything

The story claims:

“I spoke with many DNS experts. They found the evidence strongly suggestive of a relationship between the Trump Organization and the bank”.

No, he didn’t. He gave experts limited information and asked them whether it’s consistent with a conspiracy theory. He didn’t ask whether the evidence was “suggestive” of the conspiracy theory, or whether this was the best theory that fit the data.

This is why “experts” quoted in the press need to go through “media training”, to avoid having their reputations harmed by bad journalists who try their best to put words in their mouths. Such training teaches you to recognize bad journalists like this, and how not to get sucked into their fabrications.

Jean Camp isn’t an expert

On the other hand, Jean Camp isn’t an expert. I’ve never heard of her before. She gets details wrong. Take, for example, this blogpost, where she discusses lookups for the domain mail.trump-email.com.moscow.alfaintra.net. She says:

This query is unusual in that is merges two hostnames into one. It makes the most sense as a human error in inserting a new hostname in some dialog window, but neglected to hit the backspace to delete the old hostname.

Uh, no. It’s normal DNS behavior with non-FQDNs. If the lookup for a name fails, computers will try again, pasting the local domain on the end. In other words, when Twitter’s DNS was taken offline by the DDoS attack a couple weeks ago, those monitoring DNS saw a zillion lookups for names like “www.twitter.com.example.com“.

I’ve reproduced this on my desktop by configuring the suffix moscow.alfaintra.net.

I then pinged “mail1.trump-email.com” and captured the packets. As you can see, after the initial lookups fail, Windows tried appending the suffix.
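The suffix-search behavior can be modeled in a few lines — an illustrative sketch of what stub resolvers do, not actual resolver code:

```python
def candidate_lookups(name, search_suffixes):
    """Model DNS suffix search: for a non-FQDN, resolvers try the name
    as given, then retry with each configured suffix appended."""
    if name.endswith("."):  # a trailing dot marks a FQDN: no suffix search
        return [name.rstrip(".")]
    return [name] + [f"{name}.{s}" for s in search_suffixes]

print(candidate_lookups("mail1.trump-email.com", ["moscow.alfaintra.net"]))
```

With the suffix moscow.alfaintra.net configured, a failed lookup for mail1.trump-email.com naturally produces the "merged" name that Camp found so unusual.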

I don’t know what Jean Camp is an expert of, but this is sorta a basic DNS concept. It’s surprising she’d get it wrong. Of course, she may be an expert in DNS who simply had a brain fart (this happens to all of us), but looking across her posts and tweets, she doesn’t seem to be somebody who has a lot of experience with DNS. Sorry for impugning her credibility, but that’s the way the story is written. It demands that we trust the quoted “experts”. 
Call up your own IT department at Slate. Ask your IT nerds if this is how DNS operates. Note: I’m saying your average, unremarkable IT nerds can debunk an “expert” you quote in your story.

Understanding “spam” and “blacklists”

The new article has a paragraph noting that the IP address doesn’t appear on spam blocklists:

Was the server sending spam—unsolicited mail—as opposed to legitimate commercial marketing? There are databases that assiduously and comprehensively catalog spam. I entered the internet protocal address for mail1.trump-email.com to check if it ever showed up in Spamhaus and DNSBL.info. There were no traces of the IP address ever delivering spam.

This is a profound misunderstanding of how these things work.

Colloquially, we call those sending mass marketing emails, like Cendyn, “spammers”. But those running blocklists have a narrower definition. If emails contain an option to “opt-out” of future emails, then it’s technically not “spam”.

Cendyn is constantly getting added to blocklists when people complain. They spend considerable effort contacting the many organizations maintaining blocklists, proving they do “opt-outs”, and getting “white-listed” instead of “black-listed”. Indeed, the entire spam-blacklisting industry is a bit of a scam — getting white-listed often involves a bit of cash.

Those maintaining blacklists only keep records going back a few months. The article is in error saying there’s no record ever of Cendyn sending spam. Instead, if an address comes up clean, it means there’s no record for the past few months. And, if Cendyn is in the white-lists, there would be no record of “spam” at all, anyway.

As somebody who frequently scans the entire Internet, I’m constantly getting on/off blacklists. It’s a real pain. At the moment, my scanner address “209.126.230.71” doesn’t appear to be on any blacklists. Next time a scan kicks off, it’ll probably get added — but only by a few, because most have white-listed it.
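For the curious, checking an address against a DNSBL is itself just a DNS lookup: you reverse the IPv4 octets and query them as a subdomain of the blocklist’s zone. A minimal sketch, using my scanner address from above (the actual lookup requires network access, and the specific zones are just the ones the Slate author named):

```python
# Sketch of a DNSBL lookup: reverse the IPv4 octets and query them as a
# subdomain of the blocklist zone. A listed address resolves (typically
# to 127.0.0.x); an unlisted one returns NXDOMAIN.
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False   # NXDOMAIN: not currently listed

print(dnsbl_query_name("209.126.230.71"))
# → 71.230.126.209.zen.spamhaus.org
```

Note what this check actually answers: whether the address is listed *right now*, not whether it has ever sent spam.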

There is no IP address limitation

The story repeats the theory, which I already debunked, that the server has a weird configuration that limits who can talk to it:

The scientists theorized that the Trump and Alfa Bank servers had a secretive relationship after testing the behavior of mail1.trump-email.com using sites like Pingability. When they attempted to ping the site, they received the message “521 lvpmta14.lstrk.net does not accept mail from you.”

No, that’s how Listrak (the company that actually controls the server) configures all their marketing servers. Anybody can confirm this themselves by pinging all the servers in this range:
In case you don’t want to do scans yourself, you can look up on Shodan and see that there are at least 4,000 servers around the Internet that give the same error message.
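If you’d rather check for yourself than trust Shodan, grabbing an SMTP greeting is a few lines of code. A sketch (the commented-out line requires network access; the banner string below is the one quoted in the story):

```python
# Sketch: grab the SMTP greeting from a mail server and check whether it
# is one of these uniform "521 ... does not accept mail" rejections.
# 521 is the standard "host does not accept mail" reply code (RFC 7504).
import socket

def smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(512).decode(errors="replace").strip()

def is_blanket_rejection(banner: str) -> bool:
    return banner.startswith("521") and "does not accept mail" in banner

# Example (requires network access):
# print(is_blanket_rejection(smtp_banner("mail1.trump-email.com")))

print(is_blanket_rejection("521 lvpmta14.lstrk.net does not accept mail from you."))
# → True
```

Run that across the rest of Listrak’s address range and you get the same banner everywhere, which is the point: it’s a fleet-wide configuration, not a secret handshake.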

Again, go back to Chris Davis from your original story and ask him about this. He’ll confirm that there’s nothing nefarious or weird going on here; it’s just how Listrak has decided to configure all its spam-sending engines.

Either this conspiracy goes much deeper, with hundreds of servers involved, or this is a meaningless datapoint.
Where did the DNS logs come from?
Tea Leaves and Jean Camp are showing logs of private communications. Where did these logs come from? This information isn’t public. It means somebody has done something like hack into Alfa Bank. Or it means researchers who monitor DNS (for maintaining DNS, and for doing malware research) have broken their NDAs and possibly the law.
The data is incomplete and inconsistent. Those who work for other companies, like Dyn, claim it doesn’t match their own data. We have good reason to doubt these logs. There’s a good chance that the source doesn’t have as comprehensive a view as “Tea Leaves” claims. There’s also a good chance the data has been manipulated.
Specifically, I have a source who claims records for trump-email.com were changed in June, meaning either my source or Tea Leaves is lying.
Until we know more about the source of the data, it’s impossible to believe the conclusions that only Alfa Bank was doing DNS lookups.

By the way, if you are a company like Alfa Bank, and you don’t want the “research” community seeing leaked intranet DNS requests, then you should probably reconfigure your DNS resolvers. You’ll want to look into RFC 7816 “query minimization”, supported by the Unbound and Knot resolvers.
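For Unbound, this is a one-line setting (a sketch; check your distribution’s config layout and Unbound version, since older releases default it off):

```conf
# /etc/unbound/unbound.conf — enable RFC 7816 query minimization, so the
# resolver sends only the minimum necessary labels of each name upstream
# instead of leaking full internal hostnames toward the root.
server:
    qname-minimisation: yes
```

With this enabled, the root and TLD servers see queries for “com.” and “trump-email.com.” rather than your full internal lookup.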

Do the graphs show interesting things?

The original “Tea Leaves” researchers are clearly acting in bad faith. They are trying to twist the data to match their conclusions. For example, in the original article, they claim that peaks in the DNS activity match campaign events. But looking at the graph, it’s clear these are unrelated. It displays the common cognitive bias of seeing patterns that aren’t there.
Likewise, they claim that the timing throughout the day matches what you’d expect from humans interacting back and forth between Moscow and New York. No. This is what the activity looks like, graphing the number of queries by hour:
As you can see, there’s no pattern. When workers go home at 5pm in New York City, it’s midnight in Moscow. If humans were involved, you’d expect an eight hour lull during that time. Likewise, when workers arrive at 9am in New York City, you’d expect a spike in traffic for about an hour until workers in Moscow go home. You see none of that here. What you instead see is a random distribution throughout the day — the sort of distribution you’d expect if this were DNS lookups from incoming spam.
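If you have the raw timestamps, checking for such a lull is trivial to script. A sketch (the 20% threshold and the midnight-to-8am window are my own arbitrary choices, not a standard test):

```python
# Sketch: bucket DNS-lookup timestamps by hour of day and check for the
# overnight lull that human back-and-forth activity would produce.
# Spam-driven lookups show a roughly flat distribution instead.
from collections import Counter
from datetime import datetime

def queries_per_hour(timestamps: list[datetime]) -> Counter:
    return Counter(ts.hour for ts in timestamps)

def has_overnight_lull(per_hour: Counter, start: int = 0, end: int = 8) -> bool:
    # Humans asleep: expect near-zero traffic in the lull window compared
    # to the busiest hour of the day.
    busiest = max(per_hour.values(), default=0)
    lull = [per_hour.get(h, 0) for h in range(start, end)]
    return busiest > 0 and max(lull) < busiest * 0.2

# A flat, random distribution (like the graph above) has no lull:
flat = Counter({h: 10 for h in range(24)})
print(has_overnight_lull(flat))
# → False
```

A human-driven workday pattern (say, 9-to-5 traffic with near-zero overnight) passes this check; the actual data in the graph does not.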
The point is that we know the original “Tea Leaves” researchers aren’t trustworthy, that they’ve convinced themselves of things that just aren’t there.
Does Trump control the server in question?

OMG, this post asks the question after I’ve debunked the original story, and still gets the answer wrong.
The answer is that Listrak controls the server. Not even Cendyn really controls it; they just contract services from Listrak. In other words, not only does Trump not control it, the next-level company (Cendyn) doesn’t control it either.
Does Trump control the domain in question?
OMG, this new story continues to claim that the Trump Organization controls the domain trump-email.com, despite my having already shown that Cendyn controls the domain.
Look at the WHOIS info yourself. All the contact info goes to Cendyn. It fits the pattern Cendyn chooses for their campaigns.
  • trump-email.com
  • mjh-email.com
  • denihan-email.com
  • hyatt-email.com
Cendyn even spells “Trump Orgainzation” wrong.

There’s a difference between a “server” and a “name”

The article continues to make trivial technical errors, like confusing what a server is with what a domain name is. For example:

One of the intriguing facts in my original piece was that the Trump server was shut down on Sept. 23, two days after the New York Times made inquiries to Alfa Bank

The server has never been shut down. Instead, the name “mail1.trump-email.com” was removed from Cendyn’s DNS servers.
It’s impossible to debunk everything in these stories because they garble the technical details so much that it’s impossible to know what the heck they are claiming.
Why did Cendyn change things after Alfa Bank was notified?

It’s a curious coincidence that Cendyn changed their DNS records a couple days after the NYTimes contacted Alfa Bank.
But “coincidence” is all it is. I have years of experience investigating data breaches. I know that such coincidences abound. There are always weird coincidences that you are certain are meaningful, but which by the end of the investigation just aren’t.
The biggest source of coincidences is that IT is always changing things and always messing things up. It’s the nature of IT. Thus, you’ll always see a change in IT that matches some other event. Those looking for conspiracies ignore the changes that don’t match, and focus on the one that does, so it looms suspiciously.
As I’ve mentioned before, I have a source who says Cendyn changed things around in June. This makes me believe that “Tea Leaves” is cherry-picking changes, highlighting the one in September.
In any event, many people have noticed that the registrar email “Emily McMullin” has the same last name as Evan McMullin, who is running against Trump in Utah. This supports my point: when you do hacking investigations, you find irrelevant connections all over the freakin’ place.
“Experts stand by their analysis”

This new article states:

I’ve checked back with eight of the nine computer scientists and engineers I consulted for my original story, and they all stood by their fundamental analysis

Well, of course, they don’t want to look like idiots. But notice the subtle rephrasing of the question: the experts stand by their analysis. It doesn’t mean the same thing as standing behind the reporter’s analysis. The experts made narrow judgments, which even I stand behind as mostly correct, given the data they were given at the time. None of them were asked whether the entire conspiracy theory holds up.
What you should ask is people like Chris Davis or Paul Vixie whether they stand behind my analysis in the past two posts. Or really, ask any expert. I’ve documented things with sufficient clarity. For example, go back to Chris Davis and ask him again about the “limited IP address” theory, and whether it holds up against my scan of that data center above.
Conclusion

Other major news outlets all passed on the story, because even non-experts know it’s flawed. The data means nothing. The Slate journalist nonetheless went forward with the story, tricking experts, and finding some non-experts.
But as I’ve shown, given a complete technical analysis, the story falls apart. Most of what’s strange is perfectly normal. The data itself (the DNS logs) is untrustworthy. The story treats unknown things (like why the mail server rejects certain IP addresses) as “unknowable” things that confirm the conspiracy, when they are in fact simply things unknown at the current time, which become knowable with a little research.

What I show in my first post, and in this post, is more data. This data shows context. This data explains the unknowns that Slate presents. Moreover, you don’t have to trust me — anybody can replicate my work and see for themselves.


Debunking Trump’s "secret server"

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/debunking-trumps-secret-server.html

According to this Slate article, Trump has a secret server for communicating with Russia. Even Hillary has piled onto this story.

This is nonsense. The evidence available on the Internet is that Trump neither (directly) controls the domain “trump-email.com”, nor has access to the server. Instead, the domain was set up and is controlled by Cendyn, a company that does marketing/promotions for hotels, including many of Trump’s hotels. Cendyn outsources the email portions of its campaigns to a company called Listrak, which actually owns/operates the physical server in a data center in Philadelphia.

In other words, Trump’s response is (minus the political bits) likely true, supported by the evidence. It’s the conclusion I came to even before seeing the response.

When you view this “secret” server in context, surrounded by the other email servers operated by Listrak on behalf of Cendyn, it becomes more obvious what’s going on. In the same Internet address range of Trump’s servers you see a bunch of similar servers, many named [client]-email.com. In other words, trump-email.com is not intended as a normal email server you and I are familiar with, but as a server used for marketing/promotional campaigns.

It’s Cendyn that registered, and that controls, the trump-email.com domain, as seen in the WHOIS information. That the Trump Organization is the registrant, but not the admin, demonstrates that Trump doesn’t have direct control over it.

When the domain information was changed last September 23, it was Cendyn who did the change, not the Trump Organization. This link lists a bunch of other hotel-related domains that Cendyn likewise controls, some Trump related, some related to Trump’s hotel competitors, like Hyatt and Sheraton.

Cendyn’s claim that they are reusing the server for some other purpose is likely true. If you are an enterprising journalist with $399 in your budget, you can find this out. Use the website http://reversewhois.domaintools.com/ to get a complete list of the 641 other domains controlled by Cendyn, then do an MX query for each one to find out which of them are using mail1.trump-email.com as their email server.
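That MX sweep is easy to script. A sketch, assuming the third-party dnspython package for the lookups (the domain list itself still has to come from the reverse-WHOIS service):

```python
# Sketch of the check described above: for each domain from the reverse-
# WHOIS list, look up its MX records and keep the ones pointing at
# mail1.trump-email.com.

def uses_mail_server(mx_hosts: list[str], target: str = "mail1.trump-email.com") -> bool:
    # MX exchanges come back as FQDNs with a trailing dot; normalize case.
    return any(h.rstrip(".").lower() == target for h in mx_hosts)

def find_domains_using(domains: list[str], target: str = "mail1.trump-email.com") -> list[str]:
    import dns.resolver  # third-party: pip install dnspython
    hits = []
    for domain in domains:
        try:
            answers = dns.resolver.resolve(domain, "MX")
            if uses_mail_server([a.exchange.to_text() for a in answers], target):
                hits.append(domain)
        except Exception:
            pass  # NXDOMAIN, timeout, or no MX record
    return hits

# The matching itself is just string comparison on the MX exchange:
print(uses_mail_server(["mail1.trump-email.com."]))
# → True
```

Feed `find_domains_using()` the 641 Cendyn domains and you have your answer for the price of the reverse-WHOIS report.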

This is why we can’t have nice things on the Internet. Investigative journalism is dead. The Internet is full of clues like this if only somebody puts a few resources into figuring things out. For example, organizations that track spam will have information on exactly which promotions this server has been used for in the recent past. Those who operate public DNS resolvers, like Google’s 8.8.8.8, OpenDNS, or Dyn, may have knowledge of which domains were related to mail1.trump-email.com.

Indeed, one journalist did call one of the public resolvers, and found that people other than the two listed in the Slate story had queried this domain, debunking it. I’ve heard from other DNS malware researchers (names remain anonymous) who confirm they’ve seen lookups for “mail1.trump-email.com” from all over the world, especially from tools like FireEye that process lots of spam email. One person claimed that lookups started failing for them back in late June — and thus the claim of successful responses until September is false. In other words, the “change” after the NYTimes queried Alfa Bank may not be because Cendyn (or Trump) changed anything, but because that was the first time they checked, and they noticed lookup errors were happening.

I wrote this blog post at midnight, so I haven’t confirmed this with anybody yet, but there’s a good chance that the IP address 66.216.133.29 has continued to spew spam for Trump hotels during this entire time. This would, of course, generate lookups (both reverse and forward). It seems like everyone who works in IT for a large company should be able to check their incoming email logs and see if they’ve been getting emails from that address over the last few months. If you work in IT, please check your logs for the last few months and Tweet me at @erratarob with the results, either positive or negative.
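For IT folks who want to check: here is a sketch of the log search, using a made-up Postfix-style sample line (your log format and file paths will differ):

```python
# Sketch: scan mail-log text (e.g. the contents of /var/log/maillog) for
# connections from the Listrak address in question.
import re

TARGET_IP = "66.216.133.29"

def lines_mentioning_ip(log_text: str, ip: str = TARGET_IP) -> list[str]:
    # Lookaround guards keep e.g. 166.216.133.29 or 66.216.133.291 from
    # matching on a plain substring search.
    pattern = re.compile(r"(?<![\d.])" + re.escape(ip) + r"(?![\d.])")
    return [line for line in log_text.splitlines() if pattern.search(line)]

sample = (
    "May 12 io postfix/smtpd: connect from unknown[66.216.133.29]\n"
    "May 12 io postfix/smtpd: connect from unknown[166.216.133.29]\n"
)
print(len(lines_mentioning_ip(sample)))
# → 1
```

Note the second sample line is deliberately a near-miss address, to show why the anchored pattern matters.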

And finally, somebody associated with Alfa Bank IT operations confirms that executives like to stay at Trump hotels all the time (like in Vegas and New York), and there was a company function at one of Trump’s golf courses. In other words, there’s good reason for the company to get spam from, and need to communicate with, Trump hotels to coordinate events.

And so on and so forth — there’s a lot of information out there if we just start digging.

Conclusion

That this is just normal marketing business from Cendyn and Listrak is the overwhelming logical explanation for all this. People are tempted to pull nefarious explanations out of their imaginations for things they don’t understand. But for those of us with experience in this sort of thing, what we see here is a normal, messed-up marketing (a.k.a. spam) system that the Trump Organization doesn’t have control over. Knowing who owns and controls these servers, it’s unreasonable to believe that Trump is using them for secret emails. Far from “secret” or “private” servers as Hillary claims, these servers are wide open and obvious.

This post provides a logical explanation, but we can’t count on this being provably debunked until those like Dyn come forward, on the record, and show us lookups that don’t come from Alfa Bank. Or, those who work in big companies can pull records from their incoming email servers, to show that they’ve been receiving spam from that IP address over the last few months. Either of these would conclusively debunk the story.


But experts say…

But the article quotes several experts confirming the story, so how does that jibe with this blog post? The answer is that none of the experts confirmed the story.

Read more carefully. None of the identified experts confirmed the story. Instead, the experts looked at pieces, and confirmed part of the story. Vixie rightly confirmed that the pattern of DNS requests came from humans, and not automated systems. Chris Davis rightly confirmed the server doesn’t look like a normal email server.

Neither of them, however, confirmed that Trump has a secret server for communicating with the Russians. Both of their statements are consistent with what I describe above — that it’s a Cendyn-operated server for marketing campaigns, independent of the Trump Organization.


Those researchers violated their principles

The big story isn’t the conspiracy theory about Trump, but that these malware researchers exploited their privileged access for some purpose other than malware research.

Malware research consists of a lot of informal relationships. Researchers get DNS information from ISPs, from root servers, from services like Google’s 8.8.8.8 public DNS. It’s a huge privacy violation — justified on the principle that it’s for the general good. Sometimes the fact that DNS information is shared is explicit, like with Google’s service. Sometimes people don’t realize how their ISP shares information, or how many of the root DNS servers are monitored.

People should be angrily calling their ISPs and asking them whether they share DNS information with untrustworthy researchers. People should be angrily asking ICANN, which is no longer controlled by the US government (sic), whether it’s their policy to share DNS lookup information with those who would attempt to change US elections.

There aren’t many sources for this specific DNS information. Alfa Bank’s servers do their own resolution, direct from the root on down. It’s unlikely they were monitoring Alfa Bank’s servers directly, or monitoring Cendyn’s authoritative servers. That means some sort of passive DNS on some link in between, which is unlikely. Conversely, they could be monitoring one of the root domain servers — but this monitoring wouldn’t tell them the difference between a successful or failed lookup, which they claim to have. In short, of all the sources of “DNS malware information” I’ve heard about, none of them would deliver the information these researchers claim to have (well, except the NSA with their transatlantic undersea taps, of course).

Update: this tweet points out original post mentions getting data from “ams-ix23” node, which hints at AMS-IX, Amsterdam InterXchange, where many root server nodes are located.

Politifact: Yes we can fact check Kaine’s email

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/politifact-yes-we-can-fact-check-kaines.html

This Politifact post muddles over whether the Wikileaks leaked emails have been doctored, specifically the one showing Tim Kaine being picked a year in advance. The post is wrong — we can verify this email and most of the rest.

In order to block spam, emails nowadays contain a form of digital signature that verifies their authenticity. This is automatic; it happens on most modern email systems, without users being aware of it.

This means we can indeed validate most of the Wikileaks leaked DNC/Clinton/Podesta emails. There are many ways to do this, but the easiest is to install the popular Thunderbird email app along with the DKIM Verifier addon. Then go to the Wikileaks site and download the raw source of the email https://wikileaks.org/podesta-emails/emailid/2986.

As you see in the screenshot below, the DKIM signature verifies as true.

If somebody doctored the email, such as changing the date, then the signature would not verify. I try this in the email below, changing the date from 2015 to 2016. This causes the signature to fail.
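The mechanics are simple: DKIM signs a hash computed over the body and the signed headers, Date included, so flipping even one digit changes the hash and the RSA signature can no longer match. A toy illustration (the header lines below are illustrative placeholders, not the real email; for full scripted verification there are third-party packages like dkimpy that perform the same check the Thunderbird addon does):

```python
# Sketch of why a doctored email fails DKIM: the signature covers a hash
# of the signed headers (Date included) and the body. Change one
# character and the hash, hence the signature check, no longer matches.
import hashlib

original = b"Date: Fri, 17 Jul 2015 10:20:18 -0400\r\nFrom: staffer\r\n\r\nbody"
doctored = b"Date: Fri, 17 Jul 2016 10:20:18 -0400\r\nFrom: staffer\r\n\r\nbody"

print(hashlib.sha256(original).hexdigest() == hashlib.sha256(doctored).hexdigest())
# → False
```

DKIM’s actual canonicalization rules are more involved than a raw hash, but the principle is the same: any edit to signed content invalidates the signature.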

There are some ways to forge DKIM-signed emails, specifically if the sender uses short keys. When short keys are used, hackers can “crack” them, and sign fraudulent emails. This doesn’t apply to GMail, which uses strong 2048 bit keys, as demonstrated in the following screenshot. (No, the average person isn’t supposed to understand this screenshot, but experts can.)

What this means is that the only way this email could’ve been doctored is if there has been an enormous, nation-state level hack of Google to steal their signing key. It’s possible, of course, but extraordinarily improbable. It’s conspiracy-theory level thinking. Google GMail has logs of which emails went through its systems — if there was a nation-state attack able to forge them, Google would know, and they’d be telling us. (For one thing, they’d be forcing password resets on all our accounts).

Since DKIM verifies this email and most of the others, we conclude that Kaine is “pants on fire” lying about this specific email, and “mostly untrue” in his claim that the Wikileaks emails have been doctored.


On the other hand, Wikileaks only shows us some of the emails. We don’t see context. We don’t see other staffers certain it’s going to be somebody else for VP. We don’t see related email discussions that cast this one in a different light. So of course whether this (verified) email means they’d firmly chosen Kaine is “mostly unproven”. The purpose of this document isn’t diagnosing what the emails mean, only the claims by Hillary’s people that these emails have been “doctored”.


As a side note, I offer a 1-BTC (one bitcoin, ~$600 at today’s exchange rate) bounty to anybody who can prove me wrong. If you can doctor the above email, then you win the bounty. Some rules apply (i.e. it needs to be a real doctored email, not a trick). I offer this bounty because already people are trying to cast doubt on whether DKIM works, without offering any evidence. Put up or shut up. Lamers aren’t welcome.

I gamergate Meredith Mciver

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/i-gamergate-meredith-mciver.html

One of the basic skills of hackers is “doxxing”. It’s actually not a skill. All you need to do is a quick search of public records databases through sites like Spokeo, Intelius, and Ancestry.com and you can quickly dox anybody.

During the Republican convention, Trump’s wife plagiarized Obama’s wife in a speech. A person in the Trump organization named “Meredith Mciver” took the blame for it. Trump haters immediately leapt to the conclusion that this person was fake, pointing out her Twitter and Facebook accounts were created after the controversy started.

So I’m going to go all gamergate on her and see what I can find.

According to New York public records, somebody named “Meredith Mciver” has been working for a company called the “The Trump Organization” as “Staff Writer” for many years. Her parents are Phyllis and James Mciver. Her older sister is Karen Mciver. She has an apartment at 588 W End Avenue in Manhattan (though I won’t tell you which apartment — find out for yourself). Through Ancestry.com, you can track down more information, such as her yearbook photo from 1962.

Now, all these public records could be fake, of course, but that would require a conspiracy larger than the one hiding the truth about Obama’s birth certificate.

I point this out because we have enough reasons to hate Trump (his populist demagoguery, his bankrupt character, his racism) and don’t need to search for more reasons. Yet, conspiracy theorists, “mciverers”, want to exploit this non-issue as much as they can.

My fellow Republicans: don’t support Trump

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/06/no-dilbert-trump-really-would-be-bad.html

Scott Adams, the creator of the Dilbert comic strip, has a post claiming a Trump presidency wouldn’t be as bad as people fear. It’s a good post. But it’s wrong.

Trump is certainly not as bad as his haters claim. Trump not only disables the critical-thinking ability of his supporters, but also of his enemies. In most conversations, I end up defending Trump — not because I support him as a candidate, but because I support critical-thinking. He’s only racist sometimes; most of the time I love his political incorrectness.

But with all that said, he would indeed be a horrible president. As a long-term Republican, I’d prefer a Hillary Clinton presidency, and I hate Hillary to the depths of my soul. She’s corrupt, and worst of all, she’s a leftist.

But there’s a thing worse than being a leftist (or right-winger) and that’s being a “populist demagogue”. Populist demagogues tell you that all your problems are caused by them (you know, those people), and present unrealistic solutions to problems. They appeal to base emotion and ignorance.

When nations fail because of politics, it’s almost always due to populist demagogues. Virtually all dictators are a “man of the people”, protecting the people’s interests against the powerful (somehow, the dictators themselves are never part of the “powerful”, since by definition, they are “of the people”). We see that in Venezuela right now, whose economy has crashed with oil prices (50% of their GDP was oil exports). The leader is making everything worse by running the playbook of bad populist policies. For example he’s printing money, which first-year economics textbooks tell you causes inflation, then blaming the resulting inflation on the United States and the CIA manipulating prices. That’s the essence of populism: they pursue horrible policies, but blame the consequences on them.

In a Trump presidency, bad results that educated people know are caused by government policy will instead be blamed on Mexico, China, and so on. The worse things get, the more the crowd will cheer on Trump’s and Congress’s bad policies, the more they punish Mexico and China, and the more they make bad policies worse.

Consider the $15 minimum wage promoted by Bernie Sanders, a hateful populist demagogue who is, if anything, worse than Trump. Hillary wanted $12.

Why not $18? Why not $25? Why not $100/hour minimum wage? Presumably, there are some negative thingies that happen the more you hike the minimum wage. Presumably, there are some educated people out there who have studied this problem and can measure these things. And there are. An example is this non-partisan Congressional Budget Office (CBO) analysis of raising the minimum wage to $10.10. It describes numerous positive and negative effects, none of which fits in a demagogic sound bite.

Raising the minimum wage has broad popular support, even among Republicans, because few are educated enough to appreciate the downsides. But yet, it doesn’t get raised. The only explanation by populists like Bernie, or Trump, is that there must be some conspiracy (such as by Wall Street billionaires) that prevents the minimum wage from being raised. The truth is that our political leaders are basing their decisions on things like the CBO report. They are basing their votes on an educated analysis of the policy, not on corruption and bribes from Wall Street. Note that there is no right or wrong answer to raising the minimum wage. There are reasonable people on both sides. It’s just that this true debate, based on education, is far different from the public debate, which is based on emotion and ignorance.

Trade, which both Bernie and Trump oppose, is the same way. Educated people are for it, because it’s such an obvious benefit to the population as a whole. Yet, special interests exploit the ignorance of the populace, which is why most people oppose trade.

Again, since anti-trade policies are so obviously a crowd-pleaser, populist demagogues explain why such policies aren’t adopted by blaming a vast conspiracy of the powerful, like Chinese lobbyists and Wall Street executives who want to move factories to Mexico.

Again, there’s really no right and wrong answer. I oppose the latest “trade” deals like TPP and TTIP because they expand regulation rather than reduce tariffs, for example. I also appreciate that while the benefits of trade exceed the costs, the costs of the change are often borne unfairly by some groups.

The point isn’t that you should support trade and oppose raising the minimum wage. Instead, the point is that populists present things as moral issues that transcend educated thought, and that when these policies are opposed by reasonable, educated people, the populist creates conspiracy theories explaining their opposition. Their power rests on the quality of their conspiracy theories.

All politicians are a little populist in this regard. The current one is President Obama. Yet when Obama has failed at his populist policies, like closing Gitmo, he blames the Republicans only a little bit. He hasn’t gone scorched-earth populist-demagogue on them.

The great danger to a democracy is such populist demagoguery. We see how Alexis Tsipras was elected on a wave of populism, and proceeded to make the Greek debt crisis much worse. We see how the populist leader of Venezuela is making his oil crisis much worse. When the educated opposed these policies for smart reasons, ignorant crowds overran them. The educated soon learned to keep quiet.

The same will happen with a Trump presidency. When a crisis happens, and a crisis always happens, he will revert to populist demagoguery. He’ll sweep aside any informed, rational debate on the issue. And as we’ve seen with the Republican politicians who have meekly agreed to Trump’s candidacy, very few politicians will have the backbone to stand up to him. Republicans are already mute in their criticism of Trump, and Democrats are so frothing at the mouth in their hatred of Trump that nobody listens to them, either.

Trump is unforgivably racist (though barely so, not the white supremacist his enemies claim). Trump is a crappy businessman, not nearly as successful as he claims. The few successes he’s had are based on flim-flam, the faulty belief in his success. He’s a con man, not a good manager. He’s not the negotiator he claims; international politics works much differently than negotiating the price of building materials. On the world stage, everyone will laugh at him.

But all of these things can be forgiven, because most candidates suck just as much. Instead, the thing that makes Trump dangerous is his populist demagoguery. Historically, it is this more than anything else that destroys democracies and makes people’s lives worse off.