
French Minister of Culture Calls For Pirate Streaming Blacklist

Post Syndicated from Ernesto original https://torrentfreak.com/french-minister-of-culture-calls-for-pirate-streaming-blacklist-180423/

Nearly a decade ago, France was on the anti-piracy enforcement frontline.

The country was the first to introduce a graduated response system, Hadopi, where Internet subscribers risked losing their Internet connections if they were caught sharing torrents repeatedly.

Today this approach is no longer as effective as it once was. The bulk of online piracy has moved from P2P downloading to streaming, and unlike P2P transfers, streams can't be monitored by anti-piracy watchdogs.

This hasn't gone unnoticed by the French Government, and in particular by Minister of Culture Françoise Nyssen, who highlighted the issue to reporters a few days ago.

“The Hadopi response is no longer suitable because piracy is now 80% by streaming,” she said, quoted by local media.

While Hadopi may have outgrown its usefulness, France is not giving up the piracy fight. On the contrary, the country is now pondering new measures to target the current epidemic of pirate streaming sites.

Nyssen hopes that local authorities will implement a national pirate site blocklist to address the problem. Ideally, this should be constantly updated to ensure that pirate streaming sites remain inaccessible.

The Minister told reporters that France must “act on the sites,” by implementing “a blacklist which is constantly updated to keep them offline”.

This list would be maintained by the Hadopi agency, which would then circulate it among various online intermediaries. These could include Internet providers, but also search engines and advertising networks.

The tough language will be music to the ears of the film industry and the timing doesn’t appear to be a total coincidence either.

The comments from the French Minister of Culture come shortly after several film industry groups boycotted a reception at the ministry. According to the groups, France dropped the ball on enforcement against piracy, which is blamed for more than a billion euros in losses.

The renewed promise may calm the waters for a while, but for now, it’s little more than that. It will likely take time before an effective pirate site blacklist is established, if it gets that far.


Facebook Privacy Fiasco Sees Congress Urged on Anti-Piracy Action

Post Syndicated from Andy original https://torrentfreak.com/facebook-privacy-fiasco-sees-congress-urged-on-anti-piracy-action-180420/

It has been a tumultuous few weeks for Facebook, and some would say quite rightly so. The company is a notorious harvester of personal information but last month’s Cambridge Analytica scandal really brought things to a head.

With Facebook co-founder and Chief Executive Officer Mark Zuckerberg in the midst of a PR nightmare, the entrepreneur appeared before the Senate last Tuesday and faced a grilling from House lawmakers a day later, answering questions concerning the social networking giant's problems with user privacy and how it responds to breaches.

What practical measures Zuckerberg and his team will take to calm the storm are yet to unfold but the opportunity to broaden the attack on both Facebook and others in the user-generated content field is now being seized upon. Yes, privacy is the number one controversy at the moment but Facebook and others of its ilk need to step up and take responsibility for everything posted on their platforms.

That’s the argument presented by the American Federation of Musicians, the Content Creators Coalition, CreativeFuture, and the Independent Film & Television Alliance, who together represent more than 650 entertainment industry companies and 240,000 members. CreativeFuture alone represents more than 500 companies, including all the big Hollywood studios and major players in the music industry.

In letters sent to the Senate Committee on the Judiciary; the Senate Committee on Commerce, Science, and Transportation; and the House Energy and Commerce Committee, the coalitions urge Congress to not only ensure that Facebook gets its house in order, but that Google, Twitter, and similar platforms do so too.

The letters begin with calls to protect user data and tackle the menace of fake news but given the nature of the coalitions and their entertainment industry members, it’s no surprise to see where this is heading.

“In last week’s hearing, Mr. Zuckerberg stressed several times that Facebook must ‘take a broader view of our responsibility,’ acknowledging that it is ‘responsible for the content’ that appears on its service and must ‘take a more active view in policing the ecosystem’ it created,” the letter reads.

“While most content on Facebook is not produced by Facebook, they are the publisher and distributor of immense amounts of content to billions around the world. It is worth noting that a lot of that content is posted without the consent of the people who created it, including those in the creative industries we represent.”

The letter recalls Zuckerberg as characterizing Facebook’s failure to take a broader view of its responsibilities as a “big mistake” while noting he’s also promised change.

However, the entertainment groups contend that the way the company has conducted itself – and the manner in which many Silicon Valley companies conduct themselves – is supported and encouraged by safe harbors and legal immunities that absolve internet platforms of accountability.

“We agree that change needs to happen – but we must ask ourselves whether we can expect to see real change as long as these companies are allowed to continue to operate in a policy framework that prioritizes the growth of the internet over accountability and protects those that fail to act responsibly. We believe this question must be at the center of any action Congress takes in response to the recent failures,” the groups write.

But while the Facebook fiasco has provided the opportunity for criticism, CreativeFuture and its colleagues see the problem from a much broader perspective. Their complaint sweeps in companies like Google, which they also criticize for shirking its responsibilities, largely because the law doesn't compel it to act any differently.

“Google, another major global platform that has long resisted meaningful accountability, also needs to step forward and endorse the broader view of responsibility expressed by Mr. Zuckerberg – as do many others,” they continue.

“The real problem is not Facebook, or Mark Zuckerberg, regardless of how sincerely he seeks to own the ‘mistakes’ that led to the hearing last week. The problem is endemic in a system that applies a different set of rules to the internet and fails to impose ordinary norms of accountability on businesses that are built around monetizing other people’s personal information and content.”

Noting that Congress has encouraged technology companies to prosper by using a “light hand” for the past several decades, the groups say their level of success now calls for a fresh approach and a heavier touch.

“Facebook and Google are grown-ups – and it is time they behaved that way. If they will not act, then it is up to you and your colleagues in the House to take action and not let these platforms’ abuses continue to pile up,” they conclude.

But with all that said, there is an interesting conflict that develops when presenting the solution to piracy in the context of a user privacy fiasco.

In the EU, many of the companies involved in the coalitions above are calling for pre-emptive filters to prevent allegedly infringing content being uploaded to Facebook and YouTube. That means that all user uploads to such platforms will have to be opened and scanned to see what they contain before they’re allowed online.

So, user privacy or pro-active anti-piracy filters? It might not be easy or even legal to achieve both.


Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.
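To illustrate why the number of ballots examined depends on the margin, here is a toy simulation in the spirit of the BRAVO ballot-polling method (Lindeman and Stark); it assumes a simple two-candidate race and is only a sketch of the statistical idea, not a procedure prescribed by this essay.

```python
import random

def average_sample_size(reported_share, risk_limit=0.05, trials=200, seed=1):
    """Toy estimate of how many ballots a BRAVO-style ballot-polling audit
    draws, on average, to confirm a correctly reported two-candidate result."""
    rng = random.Random(seed)
    total_drawn = 0
    for _ in range(trials):
        likelihood_ratio, drawn = 1.0, 0
        while likelihood_ratio < 1.0 / risk_limit:
            drawn += 1
            # Draw a ballot at random, assuming the reported share is the truth.
            if rng.random() < reported_share:
                likelihood_ratio *= reported_share / 0.5          # ballot for reported winner
            else:
                likelihood_ratio *= (1.0 - reported_share) / 0.5  # ballot for reported loser
        total_drawn += drawn
    return total_drawn / trials

for share in (0.70, 0.60, 0.55):
    print("reported share {:.0%}: ~{:.0f} ballots on average".format(
        share, average_sample_size(share)))
# The closer the race, the more ballots the audit has to examine.
```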

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a handful of districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The Constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Married Torrent Tracker Couple Settle With BREIN

Post Syndicated from Ernesto original https://torrentfreak.com/married-torrent-tracker-couple-settles-with-brein-180420/

Dutch anti-piracy group BREIN has targeted operators and uploaders of pirate sites for more than a decade.

The group’s main goal is to shut the sites down. Instead of getting embroiled in dozens of lengthy court battles, it prefers to settle the matter with those responsible.

This week, BREIN announced another victory against a small torrent site, Snuffelland. The private tracker was targeted at a Dutch audience and the anti-piracy group managed to track down its operators.

According to BREIN, the site was run by a married couple from the town of Montfort, a 65-year-old man and a 51-year-old woman. In addition, the group also identified one of the uploaders, a 60-year-old man from Heukelum.

All three are unemployed and their financial position was taken into account in determining the scale of the settlement. The couple agreed to pay 2,500 euros and the uploader settled for 650 euros, with a threat of further penalties if they are caught again.

The private tracker itself was shut down and replaced by a message that was provided by BREIN.

“Making copyright-protected works available infringes the copyrights of the entitled rightsholder. Downloading from unauthorized sources is also prohibited in the Netherlands,” the message reads.

“For providers of legal content, snuffelland.org refers you to thecontentmap.nl and film.nl,” it adds.

These types of shutdowns are nothing new. BREIN has taken down hundreds of smaller sites in the past. However, only in recent years has the group started to publish these settlement details.

That serves as a deterrent, but it also provides more insight into how the group prefers to resolve these cases, which appears to be with a relatively soft touch. In this case, it also dispels the notion that torrent sites are only run by youngsters.


IsoHunt Founder Returns With New Search Tool

Post Syndicated from Ernesto original https://torrentfreak.com/isohunt-founder-returns-with-new-search-tool-180419/

Of all the major torrent sites that dominated the Internet at the beginning of this decade, only a few remain.

One of the sites that fell prey to ever-increasing pressure from the entertainment industry was isoHunt.

Founded by the Canadian entrepreneur Gary Fung, the site was one of the early pioneers in the world of torrents, paving the way for many others. However, this spotlight also caught the attention of the major movie studios.

After a lengthy legal battle, isoHunt’s founder eventually shut down the site in late 2013. This happened after Fung signed a settlement agreement with Hollywood for no less than $110 million, on paper at least.

Launching a new torrent search engine was never really an option, but Fung decided not to let his expertise go to waste. He focused his time and efforts on a new search project instead, which was unveiled to the public this week.

The new app called “WonderSwipe” has just been added to Apple’s iOS store. It’s a mobile search app that ties into Google’s backend, but with a different user interface. While it has nothing to do with file-sharing, we decided to reach out to isoHunt’s founder to find out more.

Fung tells us that he got the idea for the app because he was frustrated with Google’s default search options on the mobile platform.

“I find myself barely do any search on the smartphone, most of the time waiting until I get to my desktop. I ask why?” Fung tells us.

One of the main issues he identified is the fact that swiping is not an option. Instead, people end up browsing through dozens of mobile browser tabs. So, Fung took Google’s infrastructure and search power, making it swipeable.

“From a UI design perspective, I find swiping through photos on the first iPhone one of the most extraordinary advances in computing. It’s so easy that babies would be doing it before they even learn how to flip open a book!

“Bringing that ease of use to the central way of conducting mobile search and research is the initial eureka I had in starting work on WonderSwipe,” Fung adds.

That was roughly three years ago, and a few hours ago WonderSwipe finally made its way into the App store. Android users will have to wait for now, but the application will eventually be available on that platform as well.

In addition to swiping through search results, the app also promises faster article loading and browsing, a reader mode with condensed search results, and a hands-free mode with automated browsing where summaries are read out loud.


Of course, WonderSwipe is nothing like isoHunt ever was, apart from the fact that Google, which powers it, is a search engine that also links to torrents, albeit indirectly.

This similarity was also brought up during the lawsuit with the MPAA, when Fung’s legal team likened isoHunt to Google in court. However, the Canadian entrepreneur doesn’t expect that Hollywood will have an issue with WonderSwipe in particular.

“isoHunt was similar to Google in how it worked as a search engine, but not in scope. Torrents are a small subset of all the webpages Google indexes,” Fung says.

“WonderSwipe’s aim is to find answers in all webpages, powered by Google search results. It presents results in extracted text and summaries with no connection to BitTorrent clients. As such, WonderSwipe can be bigger than isoHunt has ever been.”

Ironically, in recent years Hollywood has often criticized Google for linking to pirated content in its search results. These results will also be available through WonderSwipe.

However, Fung says that any copyright issues with WonderSwipe will have to be dealt with on the search engine level, by Google.

“If there are links to pirated content, tell search engines so they can take them down!” he says.

WonderSwipe is totally free and Fung tells us that he plans to monetize it with in-app purchases for pro features, and non-intrusive advertising that won’t slow down swiping or search results. More details on the future plans for the app are available here.


New – Registry of Open Data on AWS (RODA)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-registry-of-open-data-on-aws-roda/

Almost a decade ago, my colleague Deepak Singh introduced the AWS Public Datasets in his post Paging Researchers, Analysts, and Developers. I’m happy to report that Deepak is still an important part of the AWS team and that the Public Datasets program is still going strong!

Today we are announcing a new take on open and public data, the Registry of Open Data on AWS, or RODA. This registry includes existing Public Datasets and allows anyone to add their own datasets so that they can be accessed and analyzed on AWS.

Inside the Registry
The home page lists all of the datasets in the registry:

Entering a search term shrinks the list so that only the matching datasets are displayed:

Each dataset has an associated detail page, including usage examples, license info, and the information needed to locate and access the dataset on AWS:

In this case, I can access the data with a simple CLI command:

I could also access it programmatically, or download data to my EC2 instance.
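As an illustrative sketch of the programmatic route, here's how a listing could look with boto3; the bucket name and key prefix below are placeholders standing in for whatever the dataset's detail page specifies.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Public datasets can usually be read anonymously, so skip request signing.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

BUCKET = "example-open-dataset"   # placeholder: use the bucket from the detail page
PREFIX = "2018/"                  # placeholder: use the key prefix from the detail page

# List a handful of objects under the prefix.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download a single object to the local disk (key name is a placeholder).
# s3.download_file(BUCKET, "2018/part-00000.csv", "part-00000.csv")
```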

Adding to the Repository
If you have a dataset that is publicly available and would like to add it to RODA, you can simply send us a pull request. Head over to the open-data-registry repo, read the CONTRIBUTING document, and create a YAML file that describes your dataset, using one of the existing files in the datasets directory as a model:

We’ll review pull requests regularly; you can “star” or watch the repo in order to track additions and changes.

Impress Me
I am looking forward to an inrush of new datasets, along with some blog posts and apps that show how to use the data in powerful and interesting ways. Let me know what you come up with.

Jeff;

 

The DMCA and its Chilling Effects on Research

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/the_dmca_and_it.html

The Center for Democracy and Technology has a good summary of the current state of the DMCA’s chilling effects on security research.

To underline the nature of chilling effects on hacking and security research, CDT has worked to describe how tinkerers, hackers, and security researchers of all types both contribute to a baseline level of security in our digital environment and, in turn, are shaped themselves by this environment, most notably when things they do upset others and result in threats, potential lawsuits, and prosecution. We’ve published two reports (sponsored by the Hewlett Foundation and MacArthur Foundation) about needed reforms to the law and the myriad of ways that security research directly improves people’s lives. To get a more complete picture, we wanted to talk to security researchers themselves and gauge the forces that shape their work; essentially, we wanted to “take the pulse” of the security research community.

Today, we are releasing a third report in service of this effort: “Taking the Pulse of Hacking: A Risk Basis for Security Research.” We report findings after having interviewed a set of 20 security researchers and hackers — half academic and half non-academic — about what considerations they take into account when starting new projects or engaging in new work, as well as to what extent they or their colleagues have faced threats in the past that chilled their work. The results in our report show that a wide variety of constraints shape the work they do, from technical constraints to ethical boundaries to legal concerns, including the DMCA and especially the CFAA.

Note: I am a signatory on the letter supporting unrestricted security research.

Let’s stop talking about password strength

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/lets-stop-talking-about-password.html

Picture from EFF — CC-BY license

Near the top of most security recommendations is to use “strong passwords”. We need to stop doing this.

Yes, weak passwords can be a problem. If a website gets hacked, weak passwords are easier to crack. It’s not that this is wrong advice.

On the other hand, it’s not particularly good advice, either. It’s far down the list of important advice that people need to remember. “Weak passwords” are nowhere near the risk of “password reuse”. When your Facebook or email account gets hacked, it’s because you used the same password across many websites, not because you used a weak password.

Important websites, where the strength of your password matters, already take care of the problem. They use strong, salted hashes on the backend to protect the password. On the frontend, they force passwords to be a certain length and a certain complexity. Maybe the better advice is to not trust any website that doesn’t enforce stronger passwords (minimum of 8 characters consisting of both letters and non-letters).

To some extent, this “strong password” advice has become obsolete. A decade ago, websites had poor protection (MD5 hashes) and no enforcement of complexity, so it was up to the user to choose strong passwords. Now that important websites have changed their behavior, such as using bcrypt, there is less onus on the user.
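For readers who want to see what "strong, salted hashes" looks like in practice, here is a minimal sketch using the Python bcrypt package; it is generic example code, not tied to any particular website's implementation.

```python
import bcrypt

def hash_password(password):
    # gensalt() embeds a per-password salt and a tunable work factor, so
    # identical passwords produce different hashes and brute force stays slow.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password, stored_hash):
    # checkpw re-derives the hash using the salt stored inside stored_hash.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # True
print(verify_password("password123", stored))                   # False
```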

But the real issue here is that “strong password” advice reflects the evil, authoritarian impulses of the infosec community. Instead of measuring insecurity in terms of costs vs. benefits, risks vs. rewards, we insist that it’s an issue of moral weakness. We pretend that flaws happen because people are greedy, lazy, and ignorant. We pretend that security is its own goal, a benefit we should achieve, rather than a cost we must endure.

We like giving moral advice because it’s easy: just be “stronger”. Discussing “password reuse” is more complicated, forcing us to discuss password managers, writing down passwords on paper, the fact that it’s okay to reuse passwords for crappy websites you don’t care about, and so on.

What I’m trying to say is that the moral weakness here is us. Rather than give pertinent advice we give lazy advice. We give advice that victim-shames users for being weak while pretending that we are strong.

So stop telling people to use strong passwords. It’s crass advice on your part and largely unhelpful for your audience, distracting them from the more important things.

How Pirates Use New Technologies for Old Sharing Habits

Post Syndicated from Ernesto original https://torrentfreak.com/how-pirates-use-new-technologies-for-old-sharing-habits-180415/

While piracy today is more widespread than ever, the urge to share content online has been around for several decades.

The first generation used relatively primitive tools, such as bulletin board systems (BBS), newsgroups, or IRC. Nothing too fancy, but they worked well for those who got over the initial learning curve.

When Napster came along things started to change. More content became available and with just a few clicks anyone could get an MP3 transferred from one corner of the world to another. The same was true for Kazaa and Limewire, which further popularized online piracy.

After this initial boom of piracy applications, BitTorrent came along, shaking up the sharing landscape even further. As torrent sites are web-based, pirated media became even more public and easy to find.

At the same time, BitTorrent brought back the smaller and more organized sharing culture of the early days through private trackers.

These communities often focused on a specific type of content and put strict rules and guidelines in place. They promoted sharing and avoided the spam that plagued their public counterparts.

That was fifteen years ago.

Today the piracy landscape is more diverse than ever. Private torrent trackers are still around and so are IRC and newsgroups. However, most piracy today takes place in public. Streaming sites and devices are booming, with central hosting platforms offering the majority of the underlying content.

That said, there is still an urge for some pirates to band together and some use newer technologies to do so.

This week The Outline ran an interesting piece on the use of Telegram channels to share pirated media. These groups use the encrypted communication platform to share copies of movies, TV shows, and a wide range of other material.

Telegram allows users to upload files up to 1.5GB in size, but larger releases can be split into parts, much like on the good old newsgroups.
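Splitting a large file into upload-sized parts is an old newsgroup-era trick; the snippet below is a generic Python sketch of the idea (the 1.5GB figure simply mirrors the Telegram limit mentioned above, and the file names are placeholders).

```python
CHUNK_SIZE = 1_500_000_000  # roughly the 1.5GB upload limit mentioned above

def split_file(path, chunk_size=CHUNK_SIZE):
    """Write path out as numbered .partNNN files and return their names."""
    part_names = []
    with open(path, "rb") as src:
        index = 0
        while True:
            # Note: this reads a whole chunk into memory; a production tool
            # would stream in smaller blocks.
            data = src.read(chunk_size)
            if not data:
                break
            part_name = "{}.part{:03d}".format(path, index)
            with open(part_name, "wb") as dst:
                dst.write(data)
            part_names.append(part_name)
            index += 1
    return part_names

def join_files(part_names, out_path):
    """Reassemble the numbered parts, in order, into a single file."""
    with open(out_path, "wb") as dst:
        for part_name in sorted(part_names):
            with open(part_name, "rb") as src:
                dst.write(src.read())

# Example (file names are placeholders):
# parts = split_file("Some.Show.S03E19.mkv")
# join_files(parts, "Some.Show.S03E19.rejoined.mkv")
```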

These types of sharing groups are not new. On social media platforms such as Facebook and VK, there are hundreds if not thousands of dedicated communities, both public and private, that do the same. And Reddit has similar groups, relying on external links.

According to an administrator of a piracy-focused Telegram channel, the appeal of the platform is that the groups are not shut down so easily. While that may be the case with hyper-private groups, Telegram will still pull the plug if it receives enough complaints about a channel.

The same is true for Discord, another application that can be used to share content in ‘private’ communities. Discord is particularly popular among gamers, but pirates have also found their way to the platform.

While smaller communities are able to thrive, once the word gets out to copyright holders, the party can soon be over. This is also what the /r/piracy subreddit community found out a few days ago when its Discord server was pulled offline.

This triggered a discussion about possible alternatives. Telegram was mentioned by some, although not everyone liked the idea of connecting their phone number to a pirate group. Others mentioned Slack, Weechat, Hexchat and Riot.im.

None of these tools are revolutionary. At least, not for the intended use by this group. Some may be harder to take down than others, but they are all means to share files, directly or through external links.

What really caught our eye, however, were several mentions of an ancient application layer protocol that, apparently, hasn’t lost its use to pirates.

“I’ll make an IRC server and host that,” one user said, with others suggesting the same.

And so we have come full circle…


[$] A look at terminal emulators, part 2

Post Syndicated from jake original https://lwn.net/Articles/751763/rss

A comparison of the feature sets for a handful of terminal emulators was the subject of a recent article; here I follow that up by examining the performance of those terminals.

This might seem like a lesser concern, but as it turns out, terminals exhibit surprisingly high latency for such fundamental programs. I also examine what is traditionally considered “speed” (but is really scroll bandwidth) and memory usage, with the understanding that the impact of memory use is less than it was when I looked at this a decade ago (in French).

Subscribers can read on for part 2 from guest author Antoine Beaupré.

‘Pirate’ Android App Store Operator Avoids Prison

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-android-app-store-operator-avoids-prison-180413/

Assisted by police in France and the Netherlands, the FBI took down the “pirate” Android stores Appbucket, Applanet and SnappzMarket in the summer of 2012.

During the years that followed several people connected to the Android app sites were arrested and indicted, and slowly but surely these cases are reaching their conclusions.

This week the Northern District Court of Georgia announced the sentencing of one of the youngest defendants. Aaron Buckley was fifteen when he started working on Applanet, and still a teenager when armed agents raided his house.

Years have passed and a lot has changed since then, Buckley’s attorney informed the court before sentencing. The former pirate, who pleaded guilty to Conspiracy to Commit Copyright Infringement and Criminal Copyright Infringement, is a completely different person today.

Similar to many people who have a run-in with the law, life wasn’t always easy on him. Computers offered a welcome escape but also dragged Buckley into trouble, something he deeply regrets now.

Following the indictment, things started to change. The Applanet operator picked up his life, away from the computer, and also got involved in community work. Among other things, he plays a leading role in a popular support community for LGBT teenagers.

Given the tough circumstances of his personal life, which we won’t elaborate on, his attorney requested a downward departure from the regular sentencing guidelines, to allow for lesser punishment.

After considering all the options, District Court Judge Timothy C. Batten agreed to a lower sentence. Unlike some other pirate app store operators, who must spend years in prison, Buckley will not be incarcerated.

Instead, the Applanet operator, who is now in his mid-twenties, will be put on probation for three years, including a year of home confinement.

The sentence (pdf)

In addition, he has to perform 20 hours of community service and work towards passing a General Educational Development (GED) exam.

It’s tough to live with the prospect of possibly spending years in jail, especially when that uncertainty drags on for years. Given the circumstances, this sentence must be a huge relief.

TorrentFreak contacted Buckley, who informed us that he is happy with the outcome and ready to work on a bright future.

“I really respect the government and the judge in their sentencing and am extremely grateful that they took into account all concerns of my health and life situation in regards to possible sentences,” he tells us.

“I am just glad to have another chance to use my time and skills to hopefully contribute to society in a more positive way as much as I am capable thanks to the outcome of the case.”

Time to move on.


MPA Reveals Scale of Worldwide Pirate Site Blocking

Post Syndicated from Andy original https://torrentfreak.com/mpa-reveals-scale-of-worldwide-pirate-site-blocking-180410/

Few people following the controversial topic of Internet piracy will be unaware of the site-blocking phenomenon. It’s now one of the main weapons in the entertainment industries’ arsenal and it’s affecting dozens of countries.

While general figures can be culled from the hundreds of news reports covering the issue, the manner in which blocking is handled in several regions means that updates aren’t always provided. New sites are regularly added to blocklists without fanfare, meaning that the public is kept largely in the dark.

Now, however, a submission to the Canadian Radio-television and Telecommunications Commission (CRTC) by Motion Picture Association Canada provides a more detailed overview. It was presented in support of the proposed blocking regime in Canada, so while the key figures are no doubt accurate, some of the supporting rhetoric should be viewed in context.

“Over the last decade, at least 42 countries have either adopted and implemented, or are legally obligated to adopt and implement, measures to ensure that ISPs take steps to disable access to copyright infringing websites, including throughout the European Union, the United Kingdom, Australia, and South Korea,” the submission reads.

The 42 blocking-capable countries referenced by the Hollywood group include the members of the European Union plus the following: Argentina, Australia, Iceland, India, Israel, Liechtenstein, Malaysia, Mexico, Norway, Russia, Singapore, South Korea, and Thailand.

While all countries have their own unique sets of legislation, countries within the EU are covered by the requirements of Article 8.3 of the INFOSOC Directive, which provides that “Member States shall ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe a copyright or related right.”

That doesn’t mean that all countries are actively blocking, however. While Bulgaria, Croatia, Cyprus, Czech Republic, Estonia, Hungary, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Poland, Romania, Slovakia, and Slovenia have the legal basis to block infringing sites, none have yet done so.

In a significant number of other EU countries, however, blocking activity is prolific.

“To date, in at least 17 European countries, over 1,800 infringing sites and over 5,300 domains utilized by such sites have been blocked, including in the following four countries where the positive impact of site-blocking over time has been demonstrated,” MPA Canada notes.

Major blocking nations in the EU

At this point, it’s worth pointing out that authority to block sites is currently being obtained in two key ways, either through the courts or via an administrative process.

In the examples above, the UK and Denmark are dealt with via the former, with Italy and Portugal handled via the latter. At least as far as the volume of sites is concerned, court processes – which can be expensive – tend to yield lower site blocking levels than those carried out through an administrative process. Indeed, the MPAA has praised Portugal’s super-streamlined efforts as something to aspire to.

Outside Europe, the same two processes are also in use. For example, Australia, Argentina, and Singapore utilize the judicial route while South Korea, Mexico, Malaysia and Indonesia have opted for administrative remedies.

“Across 10 of these countries, over 1,100 infringing sites and over 1,500 domains utilized by such sites have been blocked,” MPA Canada reveals.

To date, South Korea has blocked 460 sites and 547 domains, while Australia has blocked 91 sites and 355 domains. In the case of the latter, “research has confirmed the increasingly positive impact that site-blocking has, as a greater number of sites are blocked over time,” the Hollywood group notes.

Although by no means comprehensive, MPA Canada lists the following “Notorious Sites” as subject to blocking in multiple countries via both judicial and administrative means. Most will be familiar, with the truly notorious The Pirate Bay heading the pile. Several no longer exist in their original form but in many cases, clones are blocked as if they still represent the original target.


The methods used to block the sites vary from country to country, depending on what courts deem fit and on ISPs’ technical capabilities. Three main tools are in use: DNS blocking, IP address blocking, and URL blocking, the last of which can also involve Deep Packet Inspection.
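As a side note, the practical difference between DNS and IP address blocking can be observed from the client side. The rough Python probe below (the domain is a placeholder) distinguishes a name that no longer resolves from one that resolves but refuses connections; it is an illustration, not something drawn from the MPA submission.

```python
import socket

def probe(domain, port=443, timeout=5):
    """Rough client-side check of how a site is being blocked, if at all."""
    try:
        ip = socket.gethostbyname(domain)
    except socket.gaierror:
        return "no DNS answer (consistent with DNS blocking or a dead domain)"
    try:
        # If the name resolves but the connection never completes, IP or URL
        # blocking further along the path is a plausible explanation.
        with socket.create_connection((ip, port), timeout=timeout):
            return "resolves to {} and accepts connections on port {}".format(ip, port)
    except OSError:
        return "resolves to {} but connections to port {} fail".format(ip, port)

print(probe("example.org"))  # placeholder domain
```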

The MPA submission (pdf) is strongly in favor of adding Canada to the list of site-blocking countries detailed above. The Hollywood group believes that the measures are both effective and proportionate, citing reduced usage of blocked sites, reduced traffic to pirate sites in general, and increased visits to legitimate platforms.

“There is every reason to believe that the website blocking measures [presented to the CRTC] will lead to the same beneficial results in Canada,” MPA Canada states.

While plenty of content creators and distributors are in favor of the proposals, all signs suggest they will have a battle on their hands, with even some ISPs coming out in opposition.


Publisher Gets Carte Blanche to Seize New Sci-Hub Domains

Post Syndicated from Ernesto original https://torrentfreak.com/publisher-gets-carte-blanche-to-seize-new-sci-hub-domains-180410/

While Sci-Hub is loved by thousands of researchers and academics around the world, copyright holders are doing everything in their power to wipe it off the web.

Following a $15 million defeat against Elsevier last June, the American Chemical Society (ACS) won a default judgment of $4.8 million in copyright damages a few months later.

The publisher was further granted a broad injunction, requiring various third-party services to stop providing access to the site. This includes domain registries, hosting companies and search engines.

Soon after the order was signed, several of Sci-Hub’s domain names became unreachable as domain registries and Cloudflare complied with the court order. Still, Sci-Hub remained available all this time, with help from several newly registered domain names.

Frustrated by Sci-Hub’s resilience, ACS recently went back to court asking for an amended injunction. The publisher requested the authority to seize any and all Sci-Hub domain names, including those that may be registered in the future.

“Plaintiff has been forced to engage in a game of ‘whac-a-mole’ whereby new ‘sci-hub’ domain names emerge,” ACS informed the court.

“Further complicating matters, some registries, registrars, and Internet service providers have refused to disable newer Sci-Hub domain names that were not specifically identified in the Complaint or the injunction.”

Soon after the request was submitted, US District Court Judge Leonie Brinkema agreed to the amended language.

The amended injunction now requires search engines, hosting companies, domain registrars, and other service or software providers, to cease facilitating access to Sci-Hub. This includes, but is not limited to, the following domain names.

‘sci-hub.ac, scihub.biz, sci-hub.bz, sci-hub.cc, sci-hub.cf, sci-hub.cn, sci-hub.ga, sci-hub.gq, scihub.hk, sci-hub.is, sci-hub.la, sci-hub.name, sci-hub.nu, sci-hub.nz, sci-hub.onion, scihub22266oqcxt.onion, sci-hub.tw, and sci-hub.ws.’

From the injunction

The new injunction makes ACS’ enforcement efforts much more effective. It effectively means that third-party services can no longer refuse to comply because a Sci-Hub domain is not listed in the complaint or injunction.

This already appears to have had some effect, as several domain names including sci-hub.la and sci-hub.tv became inaccessible soon after the paperwork was signed. Still, it is unlikely that it will help to shut down the site completely.

Several service providers are not receptive to US Court orders. One example is Iceland’s domain registry ISNIC and indeed, at the time of writing, Sci-Hub.is is still widely available.

Seizing .onion domain names, which are used on the Tor network, may also prove to be a challenge. After all, there is no central registration organization involved.

For now, Sci-Hub founder and operator Alexandra Elbakyan appears determined to keep the site online, whatever it takes. While it may be a hassle for users to find the latest working domain names, the new court order is not the end of the “whac-a-mole” just yet.

A copy of the amended injunction is available here (pdf).


Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight. Batteries, leads, a camera and Raspberry Pi, plus an unusually themed payload. The caption reads ‘The Queen, The Duke of York, and my TARDIS’, and sums up Dave’s maker career in a heartbeat.

David Akerman on Twitter

The Queen, The Duke of York, and my TARDIS 🙂 #UKHAS #RaspberryPi

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.

Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.

Dave’s most recent flight broke a new record. On 13 August 2017, his HAB payload was able to send back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. The most recent record-breaking flight took place on 13 August 2017, when Dave’s Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch, at 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.

Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.

The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seems a natural progression for him. Throw in his coding skills from learning to program on a Teletype and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification he got from receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, thanks to the Pi Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records with a flight.

One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he has a few more Pis for tinkering purposes.


Spooky Torrent Warns EZTV Users About “Huge Security Risk”

Post Syndicated from Ernesto original https://torrentfreak.com/spooky-torrent-warns-eztv-users-about-huge-security-risk-180408/

For more than a decade EZTV has been a widely recognized brand among many BitTorrent users and known as one of the main TV-distribution groups.

While the original EZTV shut down following a hostile takeover, the people who took over are still serving torrents to millions of people every month.

Generally speaking, EZTV takes releases from outside encoders which they then distribute with their own nametag. It’s been like this for years and has never caused any real problems.

Last week, however, a disturbing release was added to the site, sending a message to EZTV users. What appeared to be a regular release of Lucifer S03E19 turned into something darker.

Ten minutes into the episode, a red warning appears, telling viewers that EZTV.ag is a huge security risk.

Huge Security Risk

Throughout the rest of the episode, a few dozen IP-addresses appear plastered across the screen. Needless to say, this makes the program rather unwatchable.

According to the earlier message, these IP-addresses are “used on EZTV.ag.” This seems to suggest that the website has a leak somewhere unless it refers to IP-addresses of downloaders, which are public anyway.

IP-addresses

It is hard to grasp what’s really going on here and there is no direct evidence that the site has been breached in any way. Not directly at least.

At the end of the episode, a final message appears, adding to the intrigue. The message comes from the encoder DeXoX and offers up a complete IP-address database, email addresses of registered EZTV users, and more.

DeXoX

Again, we have not been able to verify the validity of these claims but it’s certainly not good PR for EZTV. The spooky torrent has been downloaded by thousands of people already and is still listed on the site several days after first appearing.

We are not familiar with DeXoX, but it appears that the person behind the handle is not a fan of EZTV.ag, to say the least.

It remains unclear how the torrent was added to the site. It could be that the EZTV site has indeed been breached in some way, or that DeXoX has access to the source from which EZTV takes its material. In any event, neither the release page nor the site itself carries any warning; the message appears only in the video.


Backblaze Announces B2 Compute Partnerships

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/introducing-cloud-compute-services/

Backblaze Announces B2 Compute Partnerships

In 2015, we announced Backblaze B2 Cloud Storage — the most affordable, high performance storage cloud on the planet. The decision to release B2 as a service was in direct response to customers asking us if they could use the same cloud storage infrastructure we use for our Computer Backup service. With B2, we entered a market in direct competition with Amazon S3, Google Cloud Services, and Microsoft Azure Storage. Today, we have over 500 petabytes of data from customers in over 150 countries. At $0.005 / GB / month for storage (1/4th of S3) and $0.01 / GB for downloads (1/5th of S3), it turns out there’s a healthy market for cloud storage that’s easy and affordable.
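As a back-of-the-envelope illustration, the snippet below turns those numbers into a monthly bill for a hypothetical workload; the implied S3 figures come purely from the “1/4th” and “1/5th” multipliers quoted above, not from an official price list.

```python
# Back-of-the-envelope cost comparison using only the figures quoted above.
B2_STORAGE_PER_GB_MONTH = 0.005   # $/GB/month
B2_DOWNLOAD_PER_GB = 0.01         # $/GB
S3_STORAGE_PER_GB_MONTH = B2_STORAGE_PER_GB_MONTH * 4   # "1/4th of S3"
S3_DOWNLOAD_PER_GB = B2_DOWNLOAD_PER_GB * 5             # "1/5th of S3"

def monthly_cost(stored_gb, downloaded_gb, storage_rate, download_rate):
    return stored_gb * storage_rate + downloaded_gb * download_rate

stored_gb = 10_000      # e.g. 10 TB kept in the cloud (hypothetical workload)
downloaded_gb = 2_000   # e.g. 2 TB retrieved per month (hypothetical workload)

b2 = monthly_cost(stored_gb, downloaded_gb, B2_STORAGE_PER_GB_MONTH, B2_DOWNLOAD_PER_GB)
s3 = monthly_cost(stored_gb, downloaded_gb, S3_STORAGE_PER_GB_MONTH, S3_DOWNLOAD_PER_GB)
print("B2: ${:,.2f}/month   implied S3: ${:,.2f}/month".format(b2, s3))
# B2: $70.00/month   implied S3: $300.00/month
```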

As B2 has grown, customers wanted to use our cloud storage for a variety of use cases that required not only storage but compute. We’re happy to say that through partnerships with Packet & ServerCentral, today we’re announcing that compute is now available for B2 customers.

Cloud Compute and Storage

Backblaze has directly connected B2 with the compute servers of Packet and ServerCentral, thereby allowing near-instant (< 10 ms) data transfers between services. Also, transferring data between B2 and both our compute partners is free.

  • Storing data in B2 and want to run an AI analysis on it? — There are no fees to move the data to our compute partners.
  • Generating data in an application? — Run the application with one of our partners and store it in B2.
  • Transfers are free and you’ll save more than 50% off of the equivalent set of services from AWS.

These partnerships enable B2 customers to use compute, give our compute partners’ customers access to cloud storage, and introduce new customers to industry-leading storage and compute — all with high-performance, low-latency, and low-cost.

Is This a Big Deal? We Think So

Compute is one of the most requested services from our customers. Why? Because it unlocks a number of use cases for them. Let’s look at three popular examples:

Transcoding Media Files

B2 has earned wide adoption in the Media & Entertainment (“M&E”) industry. Our affordable storage and download pricing make B2 great for a wide variety of M&E use cases. But many M&E workflows require compute. Content syndicators, like American Public Television, need the ability to transcode files to meet localization and distribution management requirements.

There are a multitude of reasons that transcoding is needed — thumbnail and proxy generation enable M&E professionals to work efficiently. Without compute, the act of transcoding files remains cumbersome. Either the files need to be brought down from the cloud, transcoded, and then pushed back up, or they must be kept locally until the project is complete. Both scenarios are inefficient.

Starting today, any content producer can spin up compute with one of our partners, pay by the hour for their transcode processing, and return the new media files to B2 for storage and distribution. The company saves money, moves faster, and ensures their files are safe and secure.

Disaster Recovery

Backblaze’s heritage is based on providing outstanding backup services. When you have incredibly affordable cloud storage, it ends up being a great destination for your backup data.

Most enterprises have virtual machines (“VMs”) running in their infrastructure and those VMs need to be backed up. In a disaster scenario, a business wants to know they can get back up and running quickly.

With all data stored in B2, a business can get up and running quickly. Simply restore your backed up VM to one of our compute providers, and your business will be able to get back online.

Since B2 does not place restrictions, delays, or penalties on getting data out, customers can get back up and running quickly and affordably.

Saving $74 Million (aka “The Dropbox Effect”)

Ten years ago, Backblaze decided that S3 was too costly a platform to build its cloud storage business. Instead, we created the Backblaze Storage Pod and our own cloud storage infrastructure. That decision enabled us to offer our customers storage at a previously unavailable price point and maintain those prices for over a decade. It also laid the foundation for Netflix Open Connect and Facebook Open Compute.

Dropbox recently migrated the majority of their cloud services off of AWS and onto Dropbox’s own infrastructure. By leaving AWS, Dropbox was able to build out their own data centers and still save over $74 Million. They achieved those savings by avoiding the fees AWS charges for storing and downloading data, which, incidentally, are as much as five times higher than Backblaze B2’s.

For Dropbox, being able to realize savings was possible because they have access to enough capital and expertise that they can build out their own infrastructure. For companies that have such resources and scale, that’s a great answer.

“Before this offering, the economics of the cloud would have made our business simply unviable.” — Gabriel Menegatti, SlicingDice

The questions Backblaze and our compute partners pondered were: “How can we democratize the Dropbox effect for our storage and compute customers? How can we help customers do more and pay less?” The answer we came up with was to connect Backblaze’s B2 storage with strategic compute partners and remove any transfer fees between them. You may not save $74 million as Dropbox did, but you can choose the optimal providers for your use case and realize significant savings in the process.

This Sounds Good — Tell Me More About Your Partners

We’re very fortunate to be launching our compute program with two fantastic partners in Packet and ServerCentral. These partners allow us to offer a range of computing services.

Packet

We recommend Packet for customers that need on-demand, high performance, bare metal servers available by the hour. They also have robust offerings for private / customized deployments. Their offerings end up costing 50-75% of the equivalent offerings from EC2.

To get started with Packet and B2, visit our partner page on Packet.net.

ServerCentral

ServerCentral is the right partner for customers whose business and IT challenges require more than “just” hardware. They specialize in fully managed, custom cloud solutions, and they also have expertise in managed network solutions to address global connectivity and content delivery.

To get started with ServerCentral and B2, visit our partner page on ServerCentral.com.

What’s Next?

We’re excited to find out. The combination of B2 and compute unlocks use cases that were previously impossible or at least unaffordable.

“The combination of performance and price offered by this partnership enables me to create an entirely new business line. Before this offering, the economics of the cloud would have made our business simply unviable,” noted Gabriel Menegatti, co-founder at SlicingDice, a serverless data warehousing service. “Knowing that transfers between compute and B2 are free means I don’t have to worry about my business being successful. And, with download pricing from B2 at just $0.01 / GB, I know I’m avoiding a 400% tax from AWS on data I retrieve.”

What can you do with B2 & compute? Please share your ideas with us in the comments. And, for those attending NAB 2018 in Las Vegas next week, please come by and say hello!

The post Backblaze Announces B2 Compute Partnerships appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A geometric Rust adventure

Post Syndicated from Eevee original https://eev.ee/blog/2018/03/30/a-geometric-rust-adventure/

Hi. Yes. Sorry. I’ve been trying to write this post for ages, but I’ve also been working on a huge writing project, and apparently I have a very limited amount of writing mana at my disposal. I think this is supposed to be a Patreon reward from January. My bad. I hope it’s super great to make up for the wait!

I recently ported some math code from C++ to Rust in an attempt to do a cool thing with Doom. Here is my story.

The problem

I presented it recently as a conundrum (spoilers: I solved it!), but most of those details are unimportant.

The short version is: I have some shapes. I want to find their intersection.

Really, I want more than that: I want to drop them all on a canvas, intersect everything with everything, and pluck out all the resulting polygons. The input is a set of cookie cutters, and I want to press them all down on the same sheet of dough and figure out what all the resulting contiguous pieces are. And I want to know which cookie cutter(s) each piece came from.

But intersection is a good start.

Example of the goal. Given two squares that overlap at their corners, I want to find the small overlap piece, plus the two L-shaped pieces left over from each square.

I’m carefully referring to the input as shapes rather than polygons, because each one could be a completely arbitrary collection of lines. Obviously there’s not much you can do with shapes that aren’t even closed, but at the very least, I need to handle concavity and multiple disconnected polygons that together are considered a single input.

This is a non-trivial problem with a lot of edge cases, and offhand I don’t know how to solve it robustly. I’m not too eager to go figure it out from scratch, so I went hunting for something I could build from.

(Infuriatingly enough, I can just dump all the shapes out in an SVG file and any SVG viewer can immediately solve the problem, but that doesn’t quite help me. Though I have had a few people suggest I just rasterize the whole damn problem, and after all this, I’m starting to think they may have a point.)

Alas, I couldn’t find a Rust library for doing this. I had a hard time finding any library for doing this that wasn’t a massive fully-featured geometry engine. (I could’ve used that, but I wanted to avoid non-Rust dependencies if possible, since distributing software is already enough of a nightmare.)

A Twitter follower directed me towards a paper that described how to do very nearly what I wanted and nothing else: “A simple algorithm for Boolean operations on polygons” by F. Martínez (2013). Being an academic paper, it’s trapped in paywall hell; sorry about that. (And as I understand it, none of the money you’d pay to get the paper would even go to the authors? Is that right? What a horrible and predatory system for discovering and disseminating knowledge.)

The paper isn’t especially long, but it does describe an awful lot of subtle details and is mostly written in terms of its own reference implementation. Rather than write my own implementation based solely on the paper, I decided to try porting the reference implementation from C++ to Rust.

And so I fell down the rabbit hole.

The basic algorithm

Thankfully, the author has published the sample code on his own website, if you want to follow along. (It’s the bottom link; the same author has, confusingly, published two papers on the same topic with similar titles, four years apart.)

If not, let me describe the algorithm and how the code is generally laid out. The algorithm itself is based on a sweep line, where a vertical line passes across the plane and ✨ does stuff ✨ as it encounters various objects. This implementation has no physical line; instead, it keeps track of which segments from the original polygon would be intersecting the sweep line, which is all we really care about.

A vertical line is passing rightwards over a couple intersecting shapes. The line currently intersects two of the shapes' sides, and these two sides are the "sweep list"

The code is all bundled inside a class with only a single public method, run, because… that’s… more object-oriented, I guess. There are several helper methods, and state is stored in some attributes. A rough outline of run is:

  1. Run through all the line segments in both input polygons. For each one, generate two SweepEvents (one for each endpoint) and add them to a std::deque for storage.

    Add pointers to the two SweepEvents to a std::priority_queue, the event queue. This queue uses a custom comparator to order the events from left to right, so the top element is always the leftmost endpoint.

  2. Loop over the event queue (where an “event” means the sweep line passed over the left or right end of a segment). Encountering a left endpoint means the sweep line is newly touching that segment, so add it to a std::set called the sweep list. An important point is that std::set is ordered, and the sweep list uses a comparator that keeps segments in order vertically.

    Encountering a right endpoint means the sweep line is leaving a segment, so that segment is removed from the sweep list.

  3. When a segment is added to the sweep list, it may have up to two neighbors: the segment above it and the segment below it. Call possibleIntersection to check whether it intersects either of those neighbors. (This is nearly sufficient to find all intersections, which is neat.)

  4. If possibleIntersection detects an intersection, it will split each segment into two pieces then and there. The old segment is shortened in-place to become the left part, and a new segment is created for the right part. The new endpoints at the point of intersection are added to the event queue.

  5. Some bookkeeping is done along the way to track which original polygons each segment is inside, and eventually the segments are reconstructed into new polygons.

Hopefully that’s enough to follow along. It took me an inordinately long time to tease this out. The comments aren’t especially helpful.

    std::deque<SweepEvent> eventHolder;    // It holds the events generated during the computation of the boolean operation
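
For orientation, here's a toy Rust sketch of just the event-queue and sweep-list skeleton, using made-up one-dimensional segments instead of real geometry. It only shows the shape of the loop, not the actual algorithm or its tie-breaking rules.

use std::cmp::Reverse;
use std::collections::{BTreeSet, BinaryHeap};

// A toy 1-D version of the event-queue / sweep-list structure: each "segment"
// is just (id, left_x, right_x), and the only bookkeeping is which segments
// the sweep line is currently touching.
fn main() {
    let segments = [(0usize, 1, 5), (1, 3, 9), (2, 6, 8)];

    // Event queue: BinaryHeap is a max-heap, so Reverse makes it pop the
    // leftmost event first.
    let mut events = BinaryHeap::new();
    for &(id, left, right) in &segments {
        events.push(Reverse((left, false, id)));  // left endpoint
        events.push(Reverse((right, true, id)));  // right endpoint
    }

    // Sweep list: an ordered set of the segments currently under the line.
    let mut sweep_list: BTreeSet<usize> = BTreeSet::new();

    while let Some(Reverse((x, is_right_end, id))) = events.pop() {
        if is_right_end {
            sweep_list.remove(&id); // the line is leaving this segment
        } else {
            sweep_list.insert(id);  // the line just reached this segment;
                                    // this is where neighbors would be checked
        }
        println!("x = {}: sweep list = {:?}", x, sweep_list);
    }
}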

Syntax and basic semantics

The first step was to get something that rustc could at least parse, which meant translating C++ syntax to Rust syntax.

This was surprisingly straightforward! C++ classes become Rust structs. (There was no inheritance here, thankfully.) All the method declarations go away. Method implementations only need to be indented and wrapped in impl.

I did encounter some unnecessarily obtuse uses of the ternary operator:

(prevprev != sl.begin()) ? --prevprev : prevprev = sl.end();

Rust doesn’t have a ternary — you can use a regular if block as an expression — so I expanded these out.

C++ switch blocks become Rust match blocks, but otherwise function basically the same. Rust’s enums are scoped (hallelujah), so I had to explicitly spell out where enum values came from.

The only really annoying part was changing function signatures; C++ types don’t look much at all like Rust types, save for the use of angle brackets. Rust also doesn’t pass by implicit reference, so I needed to sprinkle a few &s around.

I would’ve had a much harder time here if this code had relied on any remotely esoteric C++ functionality, but thankfully it stuck to pretty vanilla features.

Language conventions

This is a geometry problem, so the sample code unsurprisingly has its own home-grown point type. Rather than port that type to Rust, I opted to use the popular euclid crate. Not only is it code I didn’t have to write, but it already does several things that the C++ code was doing by hand inline, like dot products and cross products. And all I had to do was add one line to Cargo.toml to use it! I have no idea how anyone writes C or C++ without a package manager.

The C++ code used getters, i.e. point.x (). I’m not a huge fan of getters, though I do still appreciate the need for them in lowish-level systems languages where you want to future-proof your API and the language wants to keep a clear distinction between attribute access and method calls. But this is a point, which is nothing more than two of the same numeric type glued together; what possible future logic might you add to an accessor? The euclid authors appear to side with me and leave the coordinates as public fields, so I took great joy in removing all the superfluous parentheses.
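
For illustration, a minimal sketch of what that looks like with euclid (the version number and module paths below are a guess; they've shifted a bit between releases):

// Cargo.toml: euclid = "0.22"   (version is a guess; adjust as needed)
use euclid::default::{Point2D, Vector2D};

fn main() {
    let a: Point2D<f64> = Point2D::new(1.0, 2.0);
    let b: Point2D<f64> = Point2D::new(4.0, 6.0);

    // Coordinates are plain public fields, no getters.
    println!("a = ({}, {})", a.x, a.y);

    // Subtracting points yields a vector, and dot/cross come for free.
    let v: Vector2D<f64> = b - a;
    println!("dot = {}", v.dot(Vector2D::new(1.0, 0.0)));
    println!("cross = {}", v.cross(Vector2D::new(1.0, 0.0)));
}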

Polygons are represented with a Polygon class, which has some number of Contours. A contour is a single contiguous loop. Something you’d usually think of as a polygon would only have one, but a shape with a hole would have two: one for the outside, one for the inside. The weird part of this arrangement was that Polygon implemented nearly the entire STL container interface, then waffled between using it and not using it throughout the rest of the code. Rust lets anything in the same module access non-public fields, so I just skipped all that and used polygon.contours directly. Hell, I think I made contours public.

Finally, the SweepEvent type has a pol field that’s declared as an enum PolygonType (either SUBJECT or CLIPPING, to indicate which of the two inputs it is), but then some other code uses the same field as a numeric index into a polygon’s contours. Boy I sure do love static typing where everything’s a goddamn integer. I wanted to extend the algorithm to work on arbitrarily many input polygons anyway, so I scrapped the enum and this became a usize.


Then I got to all the uses of STL. I have only a passing familiarity with the C++ standard library, and this code actually made modest use of it, which caused some fun days-long misunderstandings.

As mentioned, the SweepEvents are stored in a std::deque, which is never read from. It took me a little thinking to realize that the deque was being used as an arena: it’s the canonical home for the structs so pointers to them can be tossed around freely. (It can’t be a std::vector, because that could reallocate and invalidate all the pointers; std::deque is probably a doubly-linked list, and guarantees no reallocation.)

Rust’s standard library does have a doubly-linked list type, but I knew I’d run into ownership hell here later anyway, so I think I replaced it with a Rust Vec to start with. It won’t compile either way, so whatever. We’ll get back to this in a moment.

The list of segments currently intersecting the sweep line is stored in a std::set. That type is explicitly ordered, which I’m very glad I knew already. Rust has two set types, HashSet and BTreeSet; unsurprisingly, the former is unordered and the latter is ordered. Dropping in BTreeSet and fixing some method names got me 90% of the way there.
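
A tiny demonstration of the property the sweep list depends on: BTreeSet iterates in the order defined by the element type's Ord, so as long as the segment type's Ord encodes the vertical comparison, the set stays vertically sorted.

use std::collections::BTreeSet;

fn main() {
    // Elements come back out in sorted order, regardless of insertion order.
    let sweep_list: BTreeSet<i32> = [30, 10, 20].iter().copied().collect();
    let in_order: Vec<i32> = sweep_list.iter().copied().collect();
    assert_eq!(in_order, vec![10, 20, 30]);
    println!("{:?}", in_order);
}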

Which brought me to the other 90%. See, the C++ code also relies on finding nodes adjacent to the node that was just inserted, via STL iterators.

next = prev = se->posSL = it = sl.insert(se).first;
(prev != sl.begin()) ? --prev : prev = sl.end();
++next;

I freely admit I’m bad at C++, but this seems like something that could’ve used… I don’t know, 1 comment. Or variable names more than two letters long. What it actually does is:

  1. Add the current sweep event (se) to the sweep list (sl), which returns a pair whose first element is an iterator pointing at the just-inserted event.

  2. Copy that iterator to several other variables, including prev and next.

  3. If the event was inserted at the beginning of the sweep list, set prev to the sweep list’s end iterator, which in C++ is a legal-but-invalid iterator meaning “the space after the end” or something. This is checked for in later code, to see if there is a previous event to look at. Otherwise, decrement prev, so it’s now pointing at the event immediately before the inserted one.

  4. Increment next normally. If the inserted event is last, then this will bump next to the end iterator anyway.

In other words, I need to get the previous and next elements from a BTreeSet. Rust does have bidirectional iterators, which BTreeSet supports… but BTreeSet::insert only returns a bool telling me whether or not anything was inserted, not the position. I came up with this:

let mut maybe_below = active_segments.range(..segment).last().map(|v| *v);
let mut maybe_above = active_segments.range(segment..).next().map(|v| *v);
active_segments.insert(segment);

The range method returns an iterator over a subset of the tree. The .. syntax makes a range (where the right endpoint is exclusive), so ..segment finds the part of the tree before the new segment, and segment.. finds the part of the tree after it. (The latter would start with the segment itself, except I haven’t inserted it yet, so it’s not actually there.)

Then the standard next() and last() methods on bidirectional iterators find me the element I actually want. But the iterator might be empty, so they both return an Option. Also, iterators tend to return references to their contents, but in this case the contents are already references, and I don’t want a double reference, so the map call dereferences one layer — but only if the Option contains a value. Phew!

This is slightly less efficient than the C++ code, since it has to look up where segment goes three times rather than just one. I might be able to get it down to two with some more clever finagling of the iterator, but microscopic performance considerations were a low priority here.
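
If it helps, here's the same lookup pattern on a set of plain integers, which makes it easier to see what the two range calls actually return:

use std::collections::BTreeSet;

fn main() {
    let mut active: BTreeSet<i32> = [10, 30, 50].iter().copied().collect();
    let segment = 40;

    // Everything strictly below the new element; .last() is the closest one.
    let below = active.range(..segment).last().copied();
    // Everything at or above; since 40 isn't inserted yet, .next() is the
    // closest element above it.
    let above = active.range(segment..).next().copied();
    active.insert(segment);

    assert_eq!((below, above), (Some(30), Some(50)));
    println!("below = {:?}, above = {:?}", below, above);
}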

Finally, the event queue uses a std::priority_queue to keep events in a desired order and efficiently pop the next one off the top.

Except priority queues act like heaps, where the greatest (i.e., last) item is made accessible.

Sorting out sorting

C++ comparison functions return true to indicate that the first argument is less than the second argument. Sweep events occur from left to right. You generally implement sorts so that the first thing comes, erm, first.

But sweep events go in a priority queue, and priority queues surface the last item, not the first. This C++ code handled this minor wrinkle by implementing its comparison backwards.

struct SweepEventComp : public std::binary_function<SweepEvent, SweepEvent, bool> { // for sorting sweep events
// Compare two sweep events
// Return true means that e1 is placed at the event queue after e2, i.e,, e1 is processed by the algorithm after e2
bool operator() (const SweepEvent* e1, const SweepEvent* e2)
{
    if (e1->point.x () > e2->point.x ()) // Different x-coordinate
        return true;
    if (e2->point.x () > e1->point.x ()) // Different x-coordinate
        return false;
    if (e1->point.y () != e2->point.y ()) // Different points, but same x-coordinate. The event with lower y-coordinate is processed first
        return e1->point.y () > e2->point.y ();
    if (e1->left != e2->left) // Same point, but one is a left endpoint and the other a right endpoint. The right endpoint is processed first
        return e1->left;
    // Same point, both events are left endpoints or both are right endpoints.
    if (signedArea (e1->point, e1->otherEvent->point, e2->otherEvent->point) != 0) // not collinear
        return e1->above (e2->otherEvent->point); // the event associate to the bottom segment is processed first
    return e1->pol > e2->pol;
}
};

Maybe it’s just me, but I had a hell of a time just figuring out what problem this was even trying to solve. I still have to reread it several times whenever I look at it, to make sure I’m getting the right things backwards.

Making this even more ridiculous is that there’s a second implementation of this same sort, with the same name, in another file — and that one’s implemented forwards. And doesn’t use a tiebreaker. I don’t entirely understand how this even compiles, but it does!

I painstakingly translated this forwards to Rust. Unlike the STL, Rust doesn’t take custom comparators for its containers, so I had to implement ordering on the types themselves (which makes sense, anyway). I wrapped everything in the priority queue in a Reverse, which does what it sounds like.

I’m fairly pleased with Rust’s ordering model. Most of the work is done in Ord, a trait with a cmp() method returning an Ordering (one of Less, Equal, and Greater). No magic numbers, no need to implement all six ordering methods! It’s incredible. Ordering even has some handy methods on it, so the usual case of “order by this, then by this” can be written as:

return self.point().x.cmp(&other.point().x)
    .then(self.point().y.cmp(&other.point().y));

Well. Just kidding! It’s not quite that easy. You see, the points here are composed of floats, and floats have the fun property that not all of them are comparable. Specifically, NaN is not less than, greater than, or equal to anything else, including itself. So IEEE 754 float ordering cannot be expressed with Ord. Unless you want to just make up an answer for NaN, but Rust doesn’t tend to do that.

Rust’s float types thus implement the weaker PartialOrd, whose method returns an Option<Ordering> instead. That makes the above example slightly uglier:

return self.point().x.partial_cmp(&other.point().x).unwrap()
    .then(self.point().y.partial_cmp(&other.point().y).unwrap())

Also, since I use unwrap() here, this code will panic and take the whole program down if the points are infinite or NaN. Don’t do that.

This caused some minor inconveniences in other places; for example, the general-purpose cmp::min() doesn’t work on floats, because it requires an Ord-erable type. Thankfully there’s a f64::min(), which handles a NaN by returning the other argument.
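
A quick demonstration of that behavior, plus the NaN weirdness that rules out Ord in the first place:

fn main() {
    let nan = f64::NAN;

    // std::cmp::min won't compile for floats (they're only PartialOrd),
    // but f64::min works and simply ignores a NaN argument.
    assert_eq!(2.0_f64.min(nan), 2.0);
    assert_eq!(nan.min(2.0), 2.0);

    // NaN compares as unordered with everything, including itself.
    assert_eq!(nan.partial_cmp(&nan), None);
    println!("ok");
}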

(Cool story: for the longest time I had this code using f32s. I’m used to translating int to “32 bits”, and apparently that instinct kicked in for floats as well, even floats spelled double.)

The only other sorting adventure was this:

// Due to overlapping edges the resultEvents array can be not wholly sorted
bool sorted = false;
while (!sorted) {
    sorted = true;
    for (unsigned int i = 0; i < resultEvents.size (); ++i) {
        if (i + 1 < resultEvents.size () && sec (resultEvents[i], resultEvents[i+1])) {
            std::swap (resultEvents[i], resultEvents[i+1]);
            sorted = false;
        }
    }
}

(I originally misread this comment as saying “the array cannot be wholly sorted” and had no idea why that would be the case, or why the author would then immediately attempt to bubble sort it.)

I’m still not sure why this uses an ad-hoc sort instead of std::sort. But I’m used to taking for granted that general-purpose sorting implementations are tuned to work well for almost-sorted data, like Python’s. Maybe C++ is untrustworthy here, for some reason. I replaced it with a call to .sort() and all seemed fine.

Phew! We’re getting there. Finally, my code appears to type-check.

But now I see storm clouds gathering on the horizon.

Ownership hell

I have a problem. I somehow run into this problem every single time I use Rust. The solutions are never especially satisfying, and all the hacks I might use if forced to write C++ turn out to be unsound, which is even more annoying because rustc is just sitting there with this smug “I told you so” expression and—

The problem is ownership, which Rust is fundamentally built on. Any given value must have exactly one owner, and Rust must be able to statically convince itself that:

  1. No reference to a value outlives that value.
  2. If a mutable reference to a value exists, no other references to that value exist at the same time.

This is the core of Rust. It guarantees at compile time that you cannot lose pointers to allocated memory, you cannot double-free, you cannot have dangling pointers.

It also completely thwarts a lot of approaches you might be inclined to take if you come from managed languages (where who cares, the GC will take care of it) or C++ (where you just throw pointers everywhere and hope for the best apparently).

For example, pointer loops are impossible. Rust’s understanding of ownership and lifetimes is hierarchical, and it simply cannot express loops. (Rust’s own doubly-linked list type uses raw pointers and unsafe code under the hood, where “unsafe” is an escape hatch for the usual ownership rules. Since I only recently realized that pointers to the inside of a mutable Vec are a bad idea, I figure I should probably not be writing unsafe code myself.)

This throws a few wrenches in the works.

Problem the first: pointer loops

I immediately ran into trouble with the SweepEvent struct itself. A SweepEvent pulls double duty: it represents one endpoint of a segment, but each left endpoint also handles bookkeeping for the segment itself — which means that most of the fields on a right endpoint are unused. Also, and more importantly, each SweepEvent has a pointer to the corresponding SweepEvent at the other end of the same segment. So a pair of SweepEvents point to each other.

Rust frowns upon this. In retrospect, I think I could’ve kept it working, but I also think I’m wrong about that.

My first step was to wrench SweepEvent apart. I moved all of the segment-stuff (which is virtually all of it) into a single SweepSegment type, and then populated the event queue with a SweepEndpoint tuple struct, similar to:

enum SegmentEnd {
    Left,
    Right,
}

struct SweepEndpoint<'a>(&'a SweepSegment, SegmentEnd);

This makes SweepEndpoint essentially a tuple with a name. The 'a is a lifetime and says, more or less, that a SweepEndpoint cannot outlive the SweepSegment it references. Makes sense.

Problem solved! I no longer have mutually referential pointers. But I do still have pointers (well, references), and they have to point to something.

Problem the second: where’s all the data

Which brings me to the problem I always run into with Rust. I have a bucket of things, and I need to refer to some of them multiple times.

I tried half a dozen different approaches here and don’t clearly remember all of them, but I think my core problem went as follows. I translated the C++ class to a Rust struct with some methods hanging off of it. A simplified version might look like this.

struct Algorithm {
    arena: LinkedList<SweepSegment>,
    event_queue: BinaryHeap<SweepEndpoint>,
}

Ah, hang on — SweepEndpoint needs to be annotated with a lifetime, so Rust can enforce that those endpoints don’t live longer than the segments they refer to. No problem?

struct Algorithm<'a> {
    arena: LinkedList<SweepSegment>,
    event_queue: BinaryHeap<SweepEndpoint<'a>>,
}

Okay! Now for some methods.

fn run(&mut self) {
    self.arena.push_back(SweepSegment{ data: 5 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    for event in &self.event_queue {
        println!("{:?}", event)
    }
}

Aaand… this doesn’t work. Rust “cannot infer an appropriate lifetime for autoref due to conflicting requirements”. The trouble is that self.arena.back() takes a reference to self.arena, and then I put that reference in the event queue. But I promised that everything in the event queue has lifetime 'a, and I don’t actually know how long self lives here; I only know that it can’t outlive 'a, because that would invalidate the references it holds.

A little random guessing led me to change &mut self to &'a mut self — which is fine because the entire impl block this lives in is already parameterized by 'a — and that makes this compile! Hooray! I think that’s because I’m saying self itself has exactly the same lifetime as the references it holds onto, which is true, since it’s referring to itself.

Let’s get a little more ambitious and try having two segments.

fn run(&'a mut self) {
    self.arena.push_back(SweepSegment{ data: 5 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    self.arena.push_back(SweepSegment{ data: 17 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    for event in &self.event_queue {
        println!("{:?}", event)
    }
}

Whoops! Rust complains that I’m trying to mutate self.arena while other stuff is referring to it. And, yes, that’s true — I have references to it in the event queue, and Rust is preventing me from potentially deleting everything from the queue when references to it still exist. I’m not actually deleting anything here, of course (though I could be if this were a Vec!), but Rust’s type system can’t encode that (and I dread the thought of a type system that can).

I struggled with this for a while, and rapidly encountered another complete showstopper:

fn run(&'a mut self) {
    self.mutate_something();
    self.mutate_something();
}

fn mutate_something(&'a mut self) {}

Rust objects that I’m trying to borrow self mutably, twice — once for the first call, once for the second.

But why? A borrow is supposed to end automatically once it’s no longer used, right? Maybe if I throw some braces around it for scope… nope, that doesn’t help either.

It’s true that borrows usually end automatically, but here I have explicitly told Rust that mutate_something() should borrow with the lifetime 'a, which is the same as the lifetime in run(). So the first call explicitly borrows self for at least the rest of the method. Removing the lifetime from mutate_something() does fix this error, but if that method tries to add new segments, I’m back to the original problem.

Oh no. The mutation in the C++ code is several calls deep. Porting it directly seems nearly impossible.

The typical solution here — at least, the first thing people suggest to me on Twitter — is to wrap basically everything everywhere in Rc<RefCell<T>>, which gives you something that’s reference-counted (avoiding questions of ownership) and defers borrow checks until runtime (avoiding questions of mutable borrows). But that seems pretty heavy-handed here — not only does RefCell add .borrow() noise anywhere you actually want to interact with the underlying value, but do I really need to refcount these tiny structs that only hold a handful of floats each?

I set out to find a middle ground.

Solution, kind of

I really, really didn’t want to perform serious surgery on this code just to get it to build. I still didn’t know if it worked at all, and now I had to rearrange it without being able to check if I was breaking it further. (This isn’t Rust’s fault; it’s a natural problem with porting between fairly different paradigms.)

So I kind of hacked it into working with minimal changes, producing a grotesque abomination which I’m ashamed to link to. Here’s how!

First, I got rid of the class. It turns out this makes lifetime juggling much easier right off the bat. I’m pretty sure Rust considers everything in a struct to be destroyed simultaneously (though in practice it guarantees it’ll destroy fields in order), which doesn’t leave much wiggle room. Locals within a function, on the other hand, can each have their own distinct lifetimes, which solves the problem of expressing that the borrows won’t outlive the arena.

Speaking of the arena, I solved the mutability problem there by switching to… an arena! The typed-arena crate (a port of a type used within Rust itself, I think) is an allocator — you give it a value, and it gives you back a reference, and the reference is guaranteed to be valid for as long as the arena exists. The method that does this is sneaky and takes &self rather than &mut self, so Rust doesn’t know you’re mutating the arena and won’t complain. (One drawback is that the arena will never free anything you give to it, but that’s not a big problem here.)
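
A minimal sketch of the typed-arena pattern (the crate is typed-arena, imported as typed_arena; the version below is a guess):

// Cargo.toml: typed-arena = "2"   (version is a guess; adjust as needed)
use typed_arena::Arena;

#[derive(Debug)]
struct SweepSegment {
    data: u32,
}

fn main() {
    let arena: Arena<SweepSegment> = Arena::new();

    // alloc takes &self, not &mut self, so handing out references doesn't
    // stop us from allocating more segments later.
    let first = arena.alloc(SweepSegment { data: 5 });
    let second = arena.alloc(SweepSegment { data: 17 });

    // Both references stay valid for as long as the arena does.
    println!("{:?} {:?}", first, second);
}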


My next problem was with mutation. The main loop repeatedly calls possibleIntersection with pairs of segments, which can split either or both segment. Rust definitely doesn’t like that — I’d have to pass in two &muts, both of which are mutable references into the same arena, and I’d have a bunch of immutable references into that arena in the sweep list and elsewhere. This isn’t going to fly.

This is kind of a shame, and is one place where Rust seems a little overzealous. Something like this seems like it ought to be perfectly valid:

let mut v = vec![1u32, 2u32];
let a = &mut v[0];
let b = &mut v[1];
// do stuff with a, b

The trouble is, Rust only knows the type signature, which here is something like index_mut(&'a mut self, index: usize) -> &'a T. Nothing about that says that you’re borrowing distinct elements rather than some core part of the type — and, in fact, the above code is only safe because you’re borrowing distinct elements. In the general case, Rust can’t possibly know that. It seems obvious enough from the different indexes, but nothing about the type system even says that different indexes have to return different values. And what if one were borrowed as &mut v[1] and the other were borrowed with v.iter_mut().next().unwrap()?
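
For completeness: this exact two-index case does have a standard-library escape hatch, split_at_mut, which produces two provably non-overlapping mutable slices. It wouldn't have saved me from the arena-wide borrows in the real code, but it's worth knowing about.

fn main() {
    let mut v = vec![1u32, 2u32];

    // split_at_mut hands back two non-overlapping mutable slices, which is
    // the standard library's way of proving to the borrow checker that the
    // two borrows really are distinct.
    let (left, right) = v.split_at_mut(1);
    let a = &mut left[0];
    let b = &mut right[0];
    *a += 10;
    *b += 100;

    assert_eq!(v, vec![11, 102]);
    println!("{:?}", v);
}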

Anyway, this is exactly where people start to turn to RefCell — if you’re very sure you know better than Rust, then a RefCell will skirt the borrow checker while still enforcing at runtime that you don’t have more than one mutable borrow at a time.

But half the lines in this algorithm examine the endpoints of a segment! I don’t want to wrap the whole thing in a RefCell, or I’ll have to say this everywhere:

if segment1.borrow().point.x < segment2.borrow().point.x { ... }

Gross.

But wait — this code only mutates the points themselves in one place. When a segment is split, the original segment becomes the left half, and a new segment is created to be the right half. There’s no compelling need for this; it saves an allocation for the left half, but it’s not critical to the algorithm.

Thus, I settled on a compromise. My segment type now looks like this:

struct SegmentPacket {
    // a bunch of flags and whatnot used in the algorithm
}
struct SweepSegment {
    left_point: MapPoint,
    right_point: MapPoint,
    faces_outwards: bool,
    index: usize,
    order: usize,
    packet: RefCell<SegmentPacket>,
}

I do still need to call .borrow() or .borrow_mut() to get at the stuff in the “packet”, but that’s far less common, so there’s less noise overall. And I don’t need to wrap it in Rc because it’s part of a type that’s allocated in the arena and passed around only via references.
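
To make the compromise concrete, here's a sketch with made-up field names (in_result, left_x, and so on stand in for the real geometry and bookkeeping):

use std::cell::RefCell;

// Hypothetical stand-ins, just to show the shape of the compromise:
// geometry lives in plain fields, and only the mutable bookkeeping
// goes behind a RefCell.
#[derive(Debug, Default)]
struct SegmentPacket {
    in_result: bool, // made-up flag; the real packet has more fields
}

#[derive(Debug)]
struct SweepSegment {
    left_x: f64,
    right_x: f64,
    packet: RefCell<SegmentPacket>,
}

fn main() {
    let seg = SweepSegment {
        left_x: 0.0,
        right_x: 4.0,
        packet: RefCell::new(SegmentPacket::default()),
    };

    // The hot path reads coordinates without any .borrow() noise...
    if seg.left_x < seg.right_x {
        // ...and only the occasional bookkeeping update pays the RefCell tax.
        seg.packet.borrow_mut().in_result = true;
    }

    println!("{:?}", seg);
}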


This still leaves me with the problem of how to actually perform the splits.

I’m not especially happy with what I came up with, I don’t know if I can defend it, and I suspect I could do much better. I changed possibleIntersection so that rather than performing splits, it returns the points at which each segment needs splitting, in the form (usize, Option<MapPoint>, Option<MapPoint>). (The usize is used as a flag for calling code and oughta be an enum, but, isn’t yet.)

Now the top-level function is responsible for all arena management, and all is well.

Except, er. possibleIntersection is called multiple times, and I don’t want to copy-paste a dozen lines of split code after each call. I tried putting just that code in its own function, which had the world’s most godawful signature, and that didn’t work because… uh… hm. I can’t remember why, exactly! Should’ve written that down.

I tried a local closure next, but closures capture their environment by reference, so now I had references to a bunch of locals for as long as the closure existed, which meant I couldn’t mutate those locals. Argh. (This seems a little silly to me, since the closure’s references cannot possibly be used for anything if the closure isn’t being called, but maybe I’m missing something. Or maybe this is just a limitation of lifetimes.)

Increasingly desperate, I tried using a macro. But… macros are hygienic, which means that any new name you use inside a macro is different from any name outside that macro. The macro thus could not see any of my locals. Usually that’s good, but here I explicitly wanted the macro to mess with my locals.

I was just about to give up and go live as a hermit in a cabin in the woods, when I discovered something quite incredible. You can define local macros! If you define a macro inside a function, then it can see any locals defined earlier in that function. Perfect!

macro_rules! _split_segment (
    ($seg:expr, $pt:expr) => (
        {
            let pt = $pt;
            let seg = $seg;
            // ... waaay too much code ...
        }
    );
);

loop {
    // ...
    // This is possibleIntersection, renamed because Rust rightfully complains about camelCase
    let cross = handle_intersections(Some(segment), maybe_above);
    if let Some(pt) = cross.1 {
        segment = _split_segment!(segment, pt);
    }
    if let Some(pt) = cross.2 {
        maybe_above = Some(_split_segment!(maybe_above.unwrap(), pt));
    }
    // ...
}

(This doesn’t actually quite match the original algorithm, which has one case where a segment can be split twice. I realized that I could just do the left-most split, and a later iteration would perform the other split. I sure hope that’s right, anyway.)

It’s a bit ugly, and I ran into a whole lot of implicit behavior from the C++ code that I had to fix — for example, the segment is sometimes mutated just before it’s split, purely as a shortcut for mutating the left part of the split. But it finally compiles! And runs! And kinda worked, a bit!

Aftermath

I still had a lot of work to do.

For one, this code was designed for intersecting two shapes, not mass-intersecting a big pile of shapes. The basic algorithm doesn’t care about how many polygons you start with — all it sees is segments — but the code for constructing the return value needed some heavy modification.

The biggest change by far? The original code traced each segment once, expecting the result to be only a single shape. I had to change that to trace each side of each segment once, since the vast bulk of the output consists of shapes which share a side. This violated a few assumptions, which I had to hack around.

I also ran into a couple very bad edge cases, spent ages debugging them, then found out that the original algorithm had a subtle workaround that I’d commented out because it was awkward to port but didn’t seem to do anything. Whoops!

The worst was a precision error, where a vertical line could be split on a point not quite actually on the line, which wreaked all kinds of havoc. I worked around that with some tasteful rounding, which is highly dubious but makes the output more appealing to my squishy human brain. (I might switch to the original workaround, but I really dislike that even simple cases can spit out points at 1500.0000000000003. The whole thing is parameterized over the coordinate type, so maybe I could throw a rational type in there and cross my fingers?)

All that done, I finally, finally, after a couple months of intermittent progress, got what I wanted!

This is Doom 2’s MAP01. The black area to the left of center is where the player starts. Gray areas indicate where the player can walk from there, with lighter shades indicating more distant areas, where “distance” is measured by the minimum number of line crossings. Red areas can’t be reached at all.

(Note: large playable chunks of the map, including the exit room, are red. That’s because those areas are behind doors, and this code doesn’t understand doors yet.)

(Also note: The big crescent in the lower-right is also black because I was lazy and looked for the player’s starting sector by checking the bbox, and that sector’s bbox happens to match.)

The code that generated this had to go out of its way to delete all the unreachable zones around solid walls. I think I could modify the algorithm to do that on the fly pretty easily, which would probably speed it up a bit too. Downside is that the algorithm would then be pretty specifically tied to this problem, and not usable for any other kind of polygon intersection, which I would think could come up elsewhere? The modifications would be pretty minor, though, so maybe I could confine them to a closure or something.

Some final observations

It runs surprisingly slowly. Like, multiple seconds. Unless I add --release, which speeds it up by a factor of… some number with multiple digits. Wahoo. Debug mode has a high price, especially with a lot of calls in play.

The current state of this code is on GitHub. Please don’t look at it. I’m very sorry.

Honestly, most of my anguish came not from Rust, but from the original code relying on lots of fairly subtle behavior without bothering to explain what it was doing or even hint that anything unusual was going on. God, I hate C++.

I don’t know if the Rust community can learn from this. I don’t know if I even learned from this. Let’s all just quietly forget about it.

Now I just need to figure this one out…

Major Pirate Site Operators’ Sentences Increased on Appeal

Post Syndicated from Andy original https://torrentfreak.com/major-pirate-site-operators-sentences-increased-on-appeal-180330/

With The Pirate Bay the most famous pirate site in Swedish history still in full swing, a lesser known streaming platform started to gain traction more than half a decade ago.

From humble beginnings, Swefilmer eventually grew to become Sweden’s most popular movie and TV show streaming site. At one stage it was credited alongside another streaming portal for serving up to 25% of all online video streaming in Sweden.

But in 2015, everything came crashing down. An operator of the site in his early twenties was raided by local police and arrested. An older Turkish man, who was accused of receiving donations from users and setting up Swefilmer’s deals with advertisers, was later arrested in Germany.

Their activities between November 2013 and June 2015 landed them an appearance before the Varberg District Court last January, where they were accused of making more than $1.5m in advertising revenue from copyright infringement.

The prosecutor described the site as being like “organized crime”. The then 26-year-old was described as the main player behind the site, with the then 23-year-old playing a much smaller role. The latter received an estimated $4,000 of the proceeds, the former was said to have pocketed more than $1.5m.

As expected, things didn’t go well. The older man, who was described as leading a luxury lifestyle, was convicted of 1,044 breaches of copyright law and serious money laundering offenses. He was sentenced to three years in prison and ordered to forfeit 14,000,000 SEK (US$1.68m).

Due to his minimal role, the younger man was given probation and ordered to complete 120 hours of community service. Speaking with TorrentFreak at the time, the 23-year-old said he was relieved at the relatively light sentence but noted it may not be over yet.

Indeed, as is often the case with these complex copyright prosecutions, the matter found itself at the Court of Appeal of Western Sweden. On Wednesday its decision was handed down and it’s bad news for both men.

“The Court of Appeal, like the District Court, judges the men for breach of copyright law,” the Court said in a statement.

“They are judged to have made more than 1,400 copyrighted films available through the Swefilmer streaming service, without obtaining permission from copyright holders. One of the men is also convicted of gross money laundering because he received revenues from the criminal activity.”

In respect of the now 27-year-old, the Court decided to hand down a much more severe sentence, extending the term of imprisonment from three to four years.

There was some better news in respect of the amount he has to forfeit to the state, however. The District Court set this amount at 14,000,000 SEK (US$1.68m) but the Court of Appeal reduced it to ‘just’ 4,000,000 SEK (US$482,280).

The younger man’s conditional sentence was upheld but community service was replaced with a fine of 10,000 SEK (US$1,200). Also, along with his accomplice, he must now pay significant damages to a Norwegian plaintiff in the case.

“Both men will jointly pay damages of NOK 2.2 million (US$283,000) together with interest to Nordisk Film A/S for copyright infringement in one of the films posted on the website,” the Court writes in its decision.

But even now, the matter may not be closed. Ansgar Firsching, the older man’s lawyer, told SVT that the case could go all the way to the Supreme Court.

“I have informed my client about the content of the judgment and it is highly likely that he will turn to the Supreme Court,” Firsching said.

It appears that the 27-year-old will argue that at the time of the alleged offenses, merely linking to copyrighted content was not a criminal offense but whether this approach will succeed is seriously up for debate.

While linking was previously considered by some to sit in a legal gray area, the District Court drew heavily on the GS Media ruling handed down by the European Court of Justice in September 2016.

In that case, the EU Court found that those who post links to content they do not know is infringing in a non-commercial environment usually don’t commit infringement. The Swefilmer case doesn’t immediately appear to fit either of those parameters.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.