Tag Archives: HAB

New software to get you started with high-altitude ballooning

Post Syndicated from James Robinson original https://www.raspberrypi.org/blog/pytrack-skygate-hab-software/

Right now, we’re working on an online project pathway to support you with all your high-altitude balloon (HAB) flight activities, whether you run them with students or as a hobby. We’ll release the resources later in the year, but in the meantime we have some exciting new HAB software to share with you!

High altitude ballooning with Pi Zero

Skycademy and early HAB software

Over the past few years, I’ve been lucky enough to conduct several high-altitude balloon (HAB) flights and to help educators who wanted to do HAB projects with learners. In the Foundation’s Skycademy programme, supported by UKHAS members, in particular Dave Akerman, we’ve trained more than 50 teachers to successfully launch near-space missions with their students.

Whenever I advise people who are planning a HAB mission, I tell them that the separate elements actually aren’t that complicated. The difficulty lies in juggling them all at the same time to successfully launch, track, and recover your balloon.

Over the years, some excellent tools and software packages have been developed to help with HAB launches. Dave Akerman’s Pi In The Sky (PITS) software gave beginners the chance to control their first payloads: you enter your own specs into a configuration file, and the software, written in C, handles the rest. Dave’s Long Range (LoRa) gateway software then tracks the payload, receiving balloon data and plotting the flight’s trajectory on a real-time map.

Dave at a Skycademy event

These tools, while useful, present two challenges to the novice HAB enthusiast:

  • Exposing and adapting the workings of the software is challenging for novice learners, given that it is written in C
  • The existing tracking software and tools are fragmented: one application receives LoRa signals, another receives radioteletype (RTTY) data, photos have to be opened manually in yet another application, and so on

Introducing Pytrack and SkyGate

Making ballooning as accessible as possible is something we’ve been keen to do since we first got involved in 2015. So I’m delighted to reveal that over the past year, we’ve worked with Dave to produce two new applications to support HAB activities!

Pytrack

Pytrack is a Python implementation of Dave’s original PITS software, and it offers several advantages:

  • Learners can create their own tracker in a simpler programming language, rather than simply configuring the existing software
  • The core mechanics of the tracker are exposed for the learner to understand, but complex details are abstracted away
  • Learners can integrate the technology with standard Python libraries and existing projects
  • Pytrack is modular, allowing learners to experiment with underlying radio components
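
To make the “core mechanics exposed” point concrete, here is a short, library-agnostic sketch of the kind of telemetry a tracker assembles: a UKHAS-style sentence protected by a CRC16-CCITT checksum. It deliberately does not use the Pytrack API (follow the guide linked later in this post for that), and the callsign and GPS values are made up:

def crc16_ccitt(data):
    # CRC16-CCITT (polynomial 0x1021, initial value 0xFFFF), commonly used on HAB flights
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def build_sentence(callsign, sentence_id, timestamp, lat, lon, alt):
    # The checksum covers everything between the leading "$$" and the "*"
    fields = f"{callsign},{sentence_id},{timestamp},{lat:.5f},{lon:.5f},{alt}"
    return f"$${fields}*{crc16_ccitt(fields.encode('ascii')):04X}\n"

# One example telemetry sentence, ready to hand to the radio for transmission
print(build_sentence("PYSKY1", 42, "12:34:56", 52.20340, 0.12345, 31000))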

SkyGate

After our last Skycademy event, I started to look for a way to make tracking a payload in flight easier. For Skycademy, we made a hacky tracking box using a Pi, a 7” screen, and a very rough GUI app that I wrote in a hurry (sorry: lovingly toiled over).

Since then, we have gone on to develop SkyGate, a complete tracking application which runs on a Pi and fits nicely on a 7” screen. It brings together all the tracking functionalities into one intuitive application:

  • Live tunable LoRa reception and decoding
  • Live tunable RTTY reception and decoding (with a compatible USB SDR)
  • Image reception and previewing
  • GPS tracking to report your own location (when using a compatible USB GPS dongle)
  • Upload of data, images, and GPS position to the HabHub tracking site
  • An Overview tab presenting a high-level summary and the bearing to the payload
  • Full customisation via the Settings tab

You can get involved!

We would love HAB enthusiasts to test and experiment with both Pytrack and SkyGate, and to give us feedback. Your input will really help us to write the full guide that we’ll release later this year.

To get started, install both programs from a terminal.

For your payload, run:

sudo apt update
sudo apt install python3-pytrack

And for your receiver, run:

sudo apt update
sudo apt install skygate

Follow this guide to start using Pytrack, and read this overview on SkyGate and what you’ll need for a tracking box. To give us your feedback, please raise issues on the respective GitHub repos: for Pytrack here, and for SkyGate here.

We’ve developed these software packages to make launching and tracking a HAB payload easier and more flexible, and we hope you’ll think we’ve succeeded.

Happy ballooning!

Disclaimer: each country has its own laws regarding HAB launches and radio transmissions in its airspace. Before you attempt to carry out your own HAB flight, you need to ensure you have permission and are complying with all local laws.

The post New software to get you started with high-altitude ballooning appeared first on Raspberry Pi.

Gliding to earth with the Raspberry Pi Zero

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zero-hab-glider/

RaptorTech’s goal was to drop a glider from the edge of space, and with a Raspberry Pi and a high-altitude weather balloon, their vision became a reality.

Dropping a glider from 10km with a high-altitude weather balloon

The goal of this project was to drop a glider from the edge of space using a high-altitude weather balloon. The glider is entirely homemade and uses the open-source Pixhawk flight controller plus a Raspberry Pi Zero to disconnect at the desired altitude and fly to a predetermined landing location.

High-altitude ballooning

Here at Pi Towers, we thoroughly enjoy the link between high-altitude balloon (HAB) enthusiasts and the Raspberry Pi community, from Dave Akerman’s first attempt at sending a Raspberry Pi to near-space, to our own Skycademy programme training educators in high-altitude ballooning. HABs and the Pi go together like macaroni and cheese, peanut butter and jelly, chips and gravy…you get the idea.

The RaptorTech glider

The RaptorTech team equipped their glider with a Pixhawk flight controller and the small $5 Raspberry Pi Zero, which controls when the glider disconnects from the HAB and allows it to autonomously navigate back to a specific landing site.

They made the glider out of foam core and coroplast, with a covering of tape to waterproof the body. Inside it were two cameras, two servos, the Raspberry Pi Zero, and the Pixhawk flight controller with an added GPS tracker (in case the glider got lost on the way home). Hand warmers protected the electronics from freezing at high altitude.

The Raspberry Pi Zero ran a Python script to control the Pixhawk. At take-off, the Zero set the controller into manual mode to keep the glider from trying to fly off toward its final destination. When the glider reached a pre-determined altitude, the Zero disconnected it from the HAB by triggering a solid-state relay to burn through the connecting wire. The Pi then activated the flight controller to direct the glider home. You can find the code for this process here.
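
RaptorTech’s own code is linked above; purely to illustrate the sequence just described, here is a minimal sketch of how a Pi might do it using the dronekit library (talking MAVLink to the Pixhawk) and RPi.GPIO driving the relay. The serial port, GPIO pin, and release altitude are assumptions, not details from the project:

import time
import RPi.GPIO as GPIO
from dronekit import connect, VehicleMode

RELAY_PIN = 17            # GPIO pin wired to the solid-state relay (assumed)
RELEASE_ALTITUDE = 10000  # metres above the launch point at which to cut away (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

# Connect to the Pixhawk over the Pi's serial port
vehicle = connect("/dev/serial0", baud=57600, wait_ready=True)

# Keep the autopilot passive during the ascent
vehicle.mode = VehicleMode("MANUAL")

# Wait for the glider to reach the release altitude
while vehicle.location.global_relative_frame.alt < RELEASE_ALTITUDE:
    time.sleep(1)

# Fire the relay long enough to burn through the connecting wire
GPIO.output(RELAY_PIN, GPIO.HIGH)
time.sleep(5)
GPIO.output(RELAY_PIN, GPIO.LOW)

# Hand over to the flight controller so it can glide to the landing site
vehicle.mode = VehicleMode("AUTO")
GPIO.cleanup()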

All systems go

Due to time limitations and weather restrictions, the RaptorTech team had to drop their glider from 10km instead of 30km as they’d planned. They were pleased to report the safe, successful return of their glider to about 10m from the pre-set landing point.

If you’d like to follow the adventures of RaptorTech, check out their Facebook page. You can also follow them on YouTube and on their website for more RC plane-based mayhem.

A note from Dave Akerman: “It’s worth pointing out that not only do all HAB flights need permission but that such permission would normally ONLY be for payloads being dropped by parachute. Free-flying gliders, planes, drones etc. are not allowed without specific permission. My understanding, from a HABber in the USA (where this flight was), is that the FAA will not provide such permission. In any case, before dropping anything from a HAB without a parachute, get specific permission first.”

The post Gliding to earth with the Raspberry Pi Zero appeared first on Raspberry Pi.

Majority of Canadians Consume Online Content Legally, Survey Finds

Post Syndicated from Andy original https://torrentfreak.com/majority-of-canadians-consume-online-content-legally-survey-finds-180531/

Back in January, a coalition of companies and organizations with ties to the entertainment industries called on local telecoms regulator CRTC to implement a national website blocking regime.

Under the banner of Fairplay Canada, members including Bell, Cineplex, Directors Guild of Canada, Maple Leaf Sports and Entertainment, Movie Theatre Association of Canada, and Rogers Media, spoke of an industry under threat from marauding pirates. But just how serious is this threat?

A new survey commissioned by Innovation, Science and Economic Development Canada (ISED) in collaboration with the Department of Canadian Heritage (PCH) aims to shine light on the problem by revealing the online content consumption habits of citizens in the Great White North.

While there are interesting findings for those on both sides of the site-blocking debate, the situation seems somewhat removed from the Armageddon scenario predicted by the entertainment industries.

Carried out among 3,301 Canadians aged 12 years and over, the Kantar TNS study aims to cover copyright infringement in six key content areas – music, movies, TV shows, video games, computer software, and eBooks. Attitudes and behaviors are also touched upon while measuring the effectiveness of Canada’s copyright measures.

General Digital Content Consumption

In its introduction, the report notes that 28 million Canadians used the Internet in the three-month study period to November 27, 2017. Of those, 22 million (80%) consumed digital content. Around 20 million (73%) streamed or accessed content, 16 million (59%) downloaded content, while 8 million (28%) shared content.

Music, TV shows and movies all battled for first place in the consumption ranks, with 48%, 48%, and 46% respectively.

Copyright Infringement

According to the study, the majority of Canadians do things completely by the book. An impressive 74% of media-consuming respondents said that they’d only accessed material from legal sources in the preceding three months.

The remaining 26% admitted to accessing at least one illegal file in the same period. Of those, just 5% said that all of their consumption was from illegal sources, with movies (36%), software (36%), TV shows (34%) and video games (33%) the most likely content to be consumed illegally.

Interestingly, the study found that few demographic factors – such as gender, region, rural and urban, income, employment status and language – play a role in illegal content consumption.

“We found that only age and income varied significantly between consumers who infringed by downloading or streaming/accessing content online illegally and consumers who did not consume infringing content online,” the report reads.

“More specifically, the profile of consumers who downloaded or streamed/accessed infringing content skewed slightly younger and towards individuals with household incomes of $100K+.”

Licensed services much more popular than pirate haunts

It will come as no surprise that Netflix was the most popular service with consumers, with 64% having used it in the past three months. Sites like YouTube and Facebook were a big hit too, visited by 36% and 28% of content consumers respectively.

Overall, 74% of online content consumers use licensed services for content while 42% use social networks. Under a third (31%) use a combination of peer-to-peer (BitTorrent), cyberlocker platforms, or linking sites. Stream-ripping services are used by 9% of content consumers.

“Consumers who reported downloading or streaming/accessing infringing content only are less likely to use licensed services and more likely to use peer-to-peer/cyberlocker/linking sites than other consumers of online content,” the report notes.

Attitudes towards legal consumption & infringing content

In common with similar surveys over the years, the Kantar research looked at the reasons why people consume content from various sources, both legal and otherwise.

Convenience (48%), speed (36%) and quality (34%) were the most-cited reasons for using legal sources. An interesting 33% of respondents said they use legal sites to avoid using illegal sources.

On the illicit front, 54% of those who obtained unauthorized content in the previous three months said they did so due to it being free, with 40% citing convenience and 34% mentioning speed.

Almost six out of ten (58%) said lower costs would encourage them to switch to official sources, with 47% saying they’d move if legal availability was improved.

Canada’s ‘Notice-and-Notice’ warning system

People in Canada who share content on peer-to-peer systems like BitTorrent without permission run the risk of receiving an infringement notice warning them to stop. These are sent by copyright holders via users’ ISPs and the hope is that the shock of receiving a warning will turn consumers back to the straight and narrow.

The study reveals that 10% of online content consumers over the age of 12 have received one of these notices but what kind of effect have they had?

“Respondents reported that receiving such a notice resulted in the following: increased awareness of copyright infringement (38%), taking steps to ensure password protected home networks (27%), a household discussion about copyright infringement (27%), and discontinuing illegal downloading or streaming (24%),” the report notes.

While these are all positives for the entertainment industries, Kantar reports that almost a quarter (24%) of people who receive a notice simply ignore it.

Stream-ripping

Once upon a time, people obtaining music via P2P networks was cited as the music industry’s greatest threat but, with the advent of sites like YouTube, so-called stream-ripping is the latest bogeyman.

According to the study, 11% of Internet users say they’ve used a stream-ripping service. They are most likely to be male (62%) and predominantly aged 18 to 34 (52%).

“Among Canadians who have used a service to stream-rip music or entertainment, nearly half (48%) have used stream-ripping sites, one-third have used downloader apps (38%), one-in-seven (14%) have used a stream-ripping plug-in, and one-in-ten (10%) have used stream-ripping software,” the report adds.

Set-Top Boxes and VPNs

Few general piracy studies would be complete in 2018 without touching on set-top devices and Virtual Private Networks and this report doesn’t disappoint.

More than one in five (21%) respondents aged 12+ reported using a VPN, with the main purpose of securing communications and Internet browsing (57%).

A relatively modest 36% said they use a VPN to access free content while 32% said the aim was to access geo-blocked content unavailable in Canada. Just over a quarter (27%) said that accessing content from overseas at a reasonable price was the main motivator.

One in ten (10%) respondents reported using a set-top box, with 78% stating they use them to access paid-for content. Interestingly, only a small number say they use the devices to infringe.

“A minority use set-top boxes to access other content that is not legal or they are unsure if it is legal (16%), or to access live sports that are not legal or they are unsure if it is legal (11%),” the report notes.

“Individuals who consumed a mix of legal and illegal content online are more likely to use VPN services (42%) or TV set-top boxes (21%) than consumers who only downloaded or streamed/accessed legal content.”

Kantar says that the findings of the report will be used to help policymakers evaluate how Canada’s Copyright Act is coping with a changing market and technological developments.

“This research will provide the necessary information required to further develop copyright policy in Canada, as well as to provide a foundation to assess the effectiveness of the measures to address copyright infringement, should future analysis be undertaken,” it concludes.

The full report can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. In this way, the native integration between SNS and Amazon CloudWatch provides visibility into the number of messages delivered, as well as the number of messages filtered out.

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.
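
For a sense of what that looks like in practice, here is a minimal boto3 sketch that attaches a filter policy to a subscription and then publishes a matching message. The topic ARN, subscription ARN, and the “store” attribute are placeholders, not values from this post:

import json
import boto3

sns = boto3.client("sns")

# Attach a filter policy so this subscription only receives messages
# whose "store" attribute equals "example_corp"
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:orders:example-subscription-id",
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"store": ["example_corp"]}),
)

# Publish a message carrying that attribute; messages without a matching
# attribute are filtered out for this subscription and show up in the
# CloudWatch metrics described below
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders",
    Message="order placed",
    MessageAttributes={
        "store": {"DataType": "String", "StringValue": "example_corp"}
    },
)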

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.
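
If you prefer to pull these numbers programmatically rather than graph them in the console, a sketch along these lines should work with boto3; the topic name and time window are placeholders:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

for metric in ("NumberOfMessagesPublished",
               "NumberOfNotificationsDelivered",
               "NumberOfNotificationsFilteredOut"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/SNS",
        MetricName=metric,
        Dimensions=[{"Name": "TopicName", "Value": "orders"}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,          # one data point per hour
        Statistics=["Sum"],   # use Sum, not Average, as recommended below
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    print(f"{metric}: {total:.0f} messages in the last 24 hours")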

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”.

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.

Summary

SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering are available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.

The Benefits of Side Projects

Post Syndicated from Bozho original https://techblog.bozho.net/the-benefits-of-side-projects/

Side projects are the things you do at home, after work, for your own “entertainment”, or to satisfy your desire to learn new stuff, in case your workplace doesn’t give you that opportunity (or at least not enough of it). Side projects are also a way to build stuff that you think is valuable but not necessarily “commercialisable”. Many side projects are open-sourced sooner or later and some of them contribute to the pool of tools at other people’s disposal.

I’ve outlined one recommendation about side projects before – do them with technologies that are new to you, so that you learn important things that will keep you better positioned in the software world.

But there are more benefits than that – serendipitous benefits, for example. And I’d like to tell some personal stories about that. I’ll focus on a few examples from my list of side projects to show how, through a sort-of butterfly effect, they helped shape my career.

The computoser project, however cool algorithmic music composition is, didn’t manage to have much of a long-term impact. But it did teach me something apart from niche music theory – how to read a pile of scientific papers (mostly computer science) and understand them without being formally trained in the particular field. We’ll see how that was useful later.

Then there was the “State alerts” project – a website that scraped content from public institutions in my country (legislation, legislation proposals, decisions by regulators, new tenders, etc.), made them searchable, and “subscribable” – so that you get notified when a keyword of interest is mentioned in newly proposed legislation, for example. (I obviously subscribed for “information technologies” and “electronic”).

And that project turned out to have a significant impact on the following years. First, I chose a new technology to write it with – Scala – which turned out to be of great use when I started working at TomTom: on my third day I was transferred to a Scala project, which was way cooler and much more complex than the one I had originally been hired for. It was a bit ironic, as my colleagues had just read my post “I don’t like Scala” a few weeks earlier, but nevertheless, that was one of the most interesting projects I’ve worked on, and it went on for two years. Had I not known Scala, I’d probably have left TomTom much earlier (as the other project was restructured a few times), and I would not have learned many of the scalability, architecture, and AWS lessons that I did learn there.

But the very same project had an even more important follow-up. Because of its “civic hacking” flavour, I was invited to join an informal group of developers (later officiated as an NGO) who create tools that are useful for society (something like MySociety.org). That group gathered regularly, discussed both tools and policies, and at some point we drew up a list of policy priorities that we wanted to lobby policy makers for. One of them was open source for the government; the other was open data. As a result of our interaction with an interim government, we donated the official open data portal of my country, which is functioning to this day.

As a result of that, a few months later we got a proposal from the deputy prime minister’s office to “elect” one of the group as an advisor to the cabinet. We decided that could be me, so I went for it and became advisor to the deputy prime minister. The job was nothing like anything one could imagine, and it was challenging and fascinating. We managed to pass legislation, including one that requires open source for custom projects, eID, and open data. And all of that would not have been possible without my little side project.

As for my latest side project, LogSentinel – it became my current startup company. And not without help from the previous two mentioned above – the computer science paper reading was of great use when I was navigating the crypto papers landscape, and from the government job I not only gained invaluable legal knowledge, but I also “got” a co-founder.

Some other side projects died without much fanfare, and that’s fine. But the ones above shaped my “story” in a way that would not have been possible otherwise.

And I agree that such a serendipitous chain of events could have happened without side projects – I could’ve gotten these opportunities by meeting someone at a bar (unlikely, but who knows). But we, as software engineers, are capable of tilting chance in our favour by using our skills. Side projects are our “extracurricular activities”, and they often lead to unpredictable but rather positive chains of events. They will rarely be the only factor, but they are certainly great at unlocking potential.

The post The Benefits of Side Projects appeared first on Bozho's tech blog.

Roku Displays FBI Anti-Piracy Warning to Legitimate YouTube & Netflix Users

Post Syndicated from Andy original https://torrentfreak.com/roku-displays-fbi-anti-piracy-warning-to-legitimate-youtube-netflix-users-180516/

In 2018, dealing with copyright infringement claims is a daily issue for many content platforms. The law in many regions demands swift attention and in order to appease copyright holders, most platforms are happy to oblige.

While it’s not unusual for ‘pirate’ content and services to suddenly disappear in response to a DMCA or similar notice, the same is rarely true for entire legitimate services.

But that’s what appeared to happen on the Roku platform during the night, when YouTube, Netflix and other channels disappeared only to be replaced with an ominous anti-piracy warning.

As the embedded tweet shows, the message caused confusion among Roku users who were only using their devices to access legal content. Messages replacing Netflix and YouTube seemed to have caused the greatest number of complaints but many other services were affected.

FoxSportsGo, FandangoNow, and India-focused YuppTV and Hotstar were also blacked out. As were the yoga and transformational videos specialists over at Gaia, the horror buffs at ChillerFlix, and UK TV service BritBox.

But while users scratched their heads, with some misguidedly blaming Roku for not being diligent enough against piracy, Roku took to Twitter to reveal that rather than anti-piracy complaints against the channels in question, a technical hitch was to blame.

However, a subsequent statement to CNET suggested that while blacking out Netflix and YouTube might have been accidental, Roku appears to have been taking anti-piracy action against another channel or channels at the time, with the measures inadvertently spilling over to innocent parties.

“We use that warning when we detect content that has violated copyright,” Roku said in a statement.

“Some channels in our Channel Store displayed that message and became inaccessible after Roku implemented a targeted anti-piracy measure on the platform.”

The precise nature of the action taken by Roku is unknown but it’s clear that copyright infringement is currently a hot topic for the platform.

Roku is currently fighting legal action in Mexico which ordered its products off the shelves following complaints that its platform is used by pirates. That led to an FBI warning being shown for what was believed to be the first time against the XTV and other channels last year.

This March, Roku took action against the popular USTVNow channel following what was described as a “third party” copyright infringement complaint. Just a couple of weeks later, Roku followed up by removing the controversial cCloud channel.

With Roku currently fighting to have sales reinstated in Mexico against a backdrop of claims that up to 40% of its users are pirates, it’s unlikely that Roku is suddenly going to go soft on piracy, so more channel outages can be expected in the future.

In the meantime, the scary FBI warnings of last evening are beginning to fade away (for legitimate channels at least) after the company issued advice on how to fix the problem.

“The recent outage which affected some channels has been resolved. Go to Settings > System > System update > Check now for a software update. Some channels may require you to log in again. Thank you for your patience,” the company wrote in an update.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

YouTube Won’t Put Up With Blatant Piracy Tutorials Forever

Post Syndicated from Andy original https://torrentfreak.com/youtube-wont-put-up-with-blatant-piracy-tutorials-forever-180506/

Once upon a time, Internet users’ voices would be heard in limited circles, on platforms such as Usenet or other niche platforms.

Then, with the rise of forum platforms such as phpBB in 2000 and Invision Power Board in 2002, thriving communities could gather in public to discuss endless specialist topics, including file-sharing of course.

When dedicated piracy forums began to gain traction, it was pretty much a free-for-all. People discussed obtaining free content absolutely openly. Nothing was taboo and no one considered that there would be any repercussions. As such, moderation was limited to keeping troublemakers in check.

As the years progressed and lawsuits against both sites and services became more commonplace, most sites that weren’t actually serving illegal content began to consider their positions. Run by hobbyists, most didn’t want the hassle of a multi-million dollar lawsuit, so links to pirate content began to diminish and the more overt piracy tutorials began to disappear underground.

Those that remained in plain sight became much more considered. Tutorials on how to pirate specific Hollywood blockbusters were no longer needed, a plain general tutorial would suffice. And, as communities matured and took time to understand the implications of their actions, those without political motivations realized that drawing attention to potential criminality was neither required nor necessary.

Then YouTube and social media happened and almost overnight, no one was in charge and anyone could say whatever they liked.

In this new reality, there were no irritating moderator-type figures removing links to this and that, and nobody warning people against breaking rules that suddenly didn’t exist anymore. In essence, previously tight-knit and street-wise file-sharing and piracy communities not only became fragmented, but also chaotic.

This meant that anyone could become a leader and in some cases, this was the utopia that many had hoped for. Not only couldn’t the record labels or Hollywood tell people what to do anymore, discussion site operators couldn’t either. For those who didn’t abuse the power and for those who knew no better, this was a much-needed breath of fresh air. But, like all good things, it was unlikely to last forever.

Where most file-sharing of yesterday was carried out by hobbyist enthusiasts, many of today’s pirates are far more casual. They’re just as thirsty for content, but they don’t want to spend hours hunting for it. They want it all on a plate, at the flick of a switch, delivered to their TV with a minimum of hassle.

With online discussions increasingly seen as laborious and old-fashioned, many mainstream pirates have turned to easy-to-consume videos. In support of their Kodi media player habits, YouTube has become the educational platform of choice for millions.

As a result, there is now a long line of self-declared Kodi piracy specialists scooping up millions of views on YouTube. Their videos – which in many cases are thinly veiled advertisements for third party addons, Kodi ‘builds’, illegal IPTV services, and obscure Android APKs – are now the main way for a new generation to obtain direct advice on pirating.

Many of the videos are incredibly blatant, like the past 15 years of litigation never happened. All the lessons learned by the phpBB board operators of yesteryear, of how to achieve their goals of sharing information without getting shut down, have been long forgotten. In their place, a barrage of daily videos designed to generate clicks and affiliate revenue, no matter what the cost, no matter what the risk.

It’s pretty clear that these videos are at least partly responsible for the phenomenal uptick in Kodi and Android-based piracy over the past few years. In that respect, many lovers of free content will be eternally grateful for the service they’ve provided. But like many piracy movements over the years, people shouldn’t get too attached to them, at least in their current form.

Thanks to the devil-may-care approach of many influential YouTubers, it won’t be long before a whole new set of moderators begin flexing their muscles. While your average phpBB moderator could be reasoned with in order to get a second chance, a determined and largely faceless YouTube will eject offenders without so much as a clear explanation.

When this happens (and it’s only a question of time given the growing blatancy of many tutorials) YouTubers will not only lose their voices but their revenue streams too. While YouTube’s partner programs bring in some welcome cash, the profitable affiliate schemes touted on these channels for external products will also be under threat.

Perhaps the most surprising thing in this drama-waiting-to-happen is that many of the most popular YouTubers can hardly be considered young and naive. While some are of more tender years, most – with their undoubted skill, knowledge and work ethic – should know better for their 30 or 40 years on this planet. Yet not only do they make their names public, they feature their faces heavily in their videos too.

Still, it’s likely that it will take some big YouTube accounts to fall before YouTubers respond by shaving the sharp edges off their blatant promotion of illegal activity. And there’s little doubt that those advertising products (which is most of them) will have to do so sooner rather than later.

Just this week, YouTube made it clear that it won’t tolerate people making money from the promotion of illegal activities.

“YouTube creators may include paid endorsements as part of their content only if the product or service they are endorsing complies with our advertising policies,” YouTube told the BBC.

“We will be working with creators going forward so they better understand that in video promotions [they] must not promote dishonest activity.”

That being said, like many other players in the piracy and file-sharing space over the past 18 years, YouTubers will eventually begin to learn that not only can the smart survive, they can flourish too.

Sure, there will be people out there who’ll protest that free speech allows citizens to express themselves in a manner of their choosing. But try PM’ing that to YouTube in response to a strike, and see how that fares.

When they say you’re done, the road back is a long one.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Video Deters People From Pirate Sites…Or Encourages Them to Start One?

Post Syndicated from Andy original https://torrentfreak.com/video-deters-people-from-pirate-sites-or-encourages-them-to-start-one-180505/

There are almost as many anti-piracy strategies as there are techniques for downloading.

Litigation and education are probably the two most likely to be seen by the public, who are often directly targeted by the entertainment industries.

Over the years this has led to many campaigns, one of which famously stated that piracy is a crime while equating it to the physical theft of a car, a handbag, a television, or a regular movie DVD. It’s debatable whether these campaigns have made much difference but they have raised awareness and some of the responses have been hilarious.

While success remains hard to measure, it hasn’t stopped these PSAs from being made. The latest efforts come out of Sweden, where the country’s Patent and Registration Office (PRV) was commissioned by the government to increase public awareness of copyright and help change attitudes surrounding streaming and illegal downloading.

“The purpose is, among other things, to reduce the use of illegal streaming sites and make it easier and safer to find and choose legal options,” PRV says.

“Every year, criminal networks earn millions of dollars from illegal streaming. This money comes from advertising on illegal sites and is used for other criminal activities. The purpose of our film is to inform about this.”

The series of videos shows pirates in their supposed natural habitats of beautiful mansions, packed with luxurious items such as indoor pools, fancy staircases, and stacks of money. For some reason (perhaps to depict anonymity, perhaps to suggest something more sinister), the pirates are all dressed in animal masks, such as this one enjoying his Dodge Viper.

The clear suggestion here is that people who visit pirate sites and stream unlicensed content are helping to pay for this guy’s bright green car. The same holds true for his indoor swimming pool, jet bike, and gold chains in the next clip.

While some might have a problem with pirates getting rich from their clicks, it can’t have escaped the targets of these videos that they too are benefiting from the scheme. Granted, hyena-man gets the pool and the Viper, but they get the latest movies. It seems unlikely that pirate streamers refused to watch the copy of Black Panther that leaked onto the web this week (a month before its retail release) on the basis that someone else was getting rich from it.

That being said, most people will probably balk at elements of the full PSA, which suggests that revenue from illegal streaming goes on to fuel other crimes, such as prescription drug offenses.

After reporting piracy cases for more than twelve years, no one at TF has ever seen evidence of this happening with any torrent or streaming site operators. Still, it makes good drama for the full video, embedded below.

“In the film we follow a fictional occupational criminal who gives us a tour of his beautiful villa. He proudly shows up his multi-criminal activity, which was made possible by means of advertising money from his illegal streaming services,” PRV explains.

The dark tone and creepy masks are bound to put some people off but one has to question the effect this kind of video could have on younger people. Do pirates really make mountains of money so huge that they can only be counted by machine? If they do, then it’s a lot less risky than almost any other crime that yields this claimed level of profit.

With that in mind, will this video deter the public or simply encourage people to get involved for some of that big money? We sent a link to the operator of a large pirate site for his considered opinion.

“WTF,” he responded.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Ransomware Update: Viruses Targeting Business IT Servers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ransomware-update-viruses-targeting-business-it-servers/

Ransomware warning message on computer

As ransomware attacks have grown in number in recent months, the tactics and attack vectors also have evolved. While the primary method of attack used to be to target individual computer users within organizations with phishing emails and infected attachments, we’re increasingly seeing attacks that target weaknesses in businesses’ IT infrastructure.

How Ransomware Attacks Typically Work

In our previous posts on ransomware, we described the common vehicles hackers use to infect organizations with ransomware viruses. Most often, attackers distribute trojan-horse downloaders through malicious downloads and spam emails. The emails contain a variety of file attachments which, if opened, will download and run one of the many ransomware variants. Once a user’s computer is infected with a malicious downloader, it will retrieve additional malware, which frequently includes crypto-ransomware. After the files have been encrypted, a ransom payment is demanded of the victim in order to decrypt the files.

What’s Changed With the Latest Ransomware Attacks?

In 2016, a customized ransomware strain called SamSam began attacking servers, primarily in health care institutions. SamSam, unlike more conventional ransomware, is not delivered through downloads or phishing emails. Instead, the attackers behind SamSam use tools to identify unpatched servers running Red Hat’s JBoss enterprise products. Once the attackers have successfully gained entry into one of these servers by exploiting vulnerabilities in JBoss, they use other freely available tools and scripts to collect credentials and gather information on networked computers. Then they deploy their ransomware to encrypt files on these systems before demanding a ransom. Gaining entry to an organization through its IT center rather than its endpoints makes this approach scalable and especially unsettling.

SamSam’s methodology is to scour the Internet searching for accessible and vulnerable JBoss application servers, especially ones used by hospitals. It’s not unlike a burglar rattling doorknobs in a neighborhood to find unlocked homes. When SamSam finds an unlocked home (unpatched server), the software infiltrates the system. It is then free to spread across the company’s network by stealing passwords. As it traverses the network and systems, it encrypts files, preventing access until the victims pay the hackers a ransom, typically between $10,000 and $15,000. The low ransom amount has encouraged some victimized organizations to pay the ransom rather than incur the downtime required to wipe and reinitialize their IT systems.

The success of SamSam is due to its effectiveness rather than its sophistication. SamSam can enter and traverse a network without human intervention. Some organizations are learning too late that securing internet-facing services in their data center from attack is just as important as securing endpoints.

The typical steps in a SamSam ransomware attack are:

  1. Attackers gain access to a vulnerable server – they exploit vulnerable software or weak/stolen credentials.
  2. The attack spreads via remote access tools – attackers harvest credentials, create SOCKS proxies to tunnel traffic, and abuse RDP to install SamSam on more computers in the network.
  3. The ransomware payload is deployed – attackers run batch scripts to execute the ransomware on compromised machines.
  4. A ransom demand is delivered, requiring payment to decrypt files – demand amounts vary from victim to victim; relatively low ransom amounts appear to be designed to encourage quick payment decisions.

What all the organizations successfully exploited by SamSam have in common is that they were running unpatched servers that made them vulnerable to SamSam. Some organizations had their endpoints and servers backed up, while others did not. Some of those without backups they could use to recover their systems chose to pay the ransom money.

Timeline of SamSam History and Exploits

Since its appearance in 2016, SamSam has been in the news with many successful incursions into healthcare, business, and government institutions.

March 2016
SamSam appears

SamSam campaign targets vulnerable JBoss servers
Attackers home in on healthcare organizations specifically, as they’re more likely to have unpatched JBoss machines.

April 2016
SamSam finds new targets

SamSam begins targeting schools and government.
After initial success targeting healthcare, attackers branch out to other sectors.

April 2017
New tactics include RDP

Attackers shift to targeting organizations with exposed RDP connections, and maintain focus on healthcare.
An attack on Erie County Medical Center costs the hospital $10 million over three months of recovery.
Erie County Medical Center attacked by SamSam ransomware virus

January 2018
Municipalities attacked

• Attack on Municipality of Farmington, NM.
• Attack on Hancock Health.
Hancock Regional Hospital notice following SamSam attack
• Attack on Adams Memorial Hospital
• Attack on Allscripts (Electronic Health Records), which includes 180,000 physicians, 2,500 hospitals, and 7.2 million patients’ health records.

February 2018
Attack volume increases

• Attack on Davidson County, NC.
• Attack on Colorado Department of Transportation.
SamSam virus notification

March 2018
SamSam shuts down Atlanta

• Second attack on Colorado Department of Transportation.
• City of Atlanta suffers a devastating attack by SamSam.
The attack has far-reaching impacts — crippling the court system, keeping residents from paying their water bills, limiting vital communications like sewer infrastructure requests, and pushing the Atlanta Police Department to file paper reports.
Atlanta Ransomware outage alert
• SamSam campaign nets $325,000 in 4 weeks.
Infections spike as attackers launch new campaigns. Healthcare and government organizations are once again the primary targets.

How to Defend Against SamSam and Other Ransomware Attacks

The best way to respond to a ransomware attack is to avoid having one in the first place. Beyond that, making sure your valuable data is backed up and unreachable by ransomware infection will ensure that your downtime and data loss are minimal or nonexistent if you are ever attacked.

In our previous post, How to Recover From Ransomware, we listed the ten ways to protect your organization from ransomware.

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. Cybersecurity professionals view data backup and recovery (74% in a recent survey) by far as the most effective solution to respond to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as disconnected external storage drives or the cloud, which prevents them from being accessed by the ransomware.
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to patch early and patch often to close known vulnerabilities in operating systems, server software, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and social engineering aimed at turning victims into abettors.

Please Tell Us About Your Experiences with Ransomware

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please tell us of your experiences in the comments.

The post Ransomware Update: Viruses Targeting Business IT Servers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Many Piracy Warnings Would Get You to Stop?

Post Syndicated from Andy original https://torrentfreak.com/how-many-piracy-warnings-would-get-you-to-stop-180422/

For the past several years, copyright holders in the US and Europe have been trying to reach out to file-sharers in an effort to change their habits.

Whether via high-profile publicity lawsuits or a simple email, it’s hoped that by letting people know they aren’t anonymous, they’ll stop pirating and buy more content instead.

Traditionally, most ISPs haven’t been that keen on passing infringement notices on. However, the BMG v Cox lawsuit seems to have made a big difference, with a growing number of ISPs now visibly warning their users that they operate a repeat infringer policy.

But perhaps the big question is how seriously users take these warnings because – let’s face it – that’s the entire point of their existence.

There can be little doubt that a few recipients will be scurrying away at the slightest hint of trouble, intimidated by the mere suggestion that they’re being watched.

Indeed, a father in the UK – who received a warning last year as part of the Get it Right From a Genuine Site campaign – confidently and forcefully assured TF that there would be no more illegal file-sharing taking place on his ten-year-old son’s computer again – ever.

In France, where the HADOPI anti-piracy scheme received much publicity, people receiving an initial notice are most unlikely to receive additional ones in future. A December 2017 report indicated that of nine million first warning notices sent to alleged pirates since 2012, ‘just’ 800,000 received a follow-up warning on top.

The suggestion is that people either stop their piracy after getting a notice or two, or choose to “go dark” instead, using streaming sites for example or perhaps torrenting behind a decent VPN.

But for some people, the message simply doesn’t sink in early on.

A post on Reddit this week by a TWC Spectrum customer revealed that despite a wealth of readily available information (including masses in the specialist subreddit where the post was made), even several warnings fail to have an effect.

“Was just hit with my 5th copyright violation. They halted my internet and all,” the self-confessed pirate wrote.

There are at least three important things to note from this opening sentence.

Firstly, the first four warnings did nothing to change the user’s piracy habits. Secondly, Spectrum apparently had enough at five warnings and applied a repeat-infringer suspension, presumably to avoid the same fate as Cox in the BMG case. Thirdly, the account suspension seems to have changed the game.

Notably, rather than some huge blockbuster movie, that fifth warning came due to something rather less prominent.

“Thought I could sneak in a random episode of Rosanne. The new one that aired LOL. That fast. Under 24 hours I got shut off. Which makes me feel like [ISPs] do monitor your traffic and its not just the people sending them notices,” the post read.

Again, some interesting points here.

Any content can be monitored by rightsholders but if it’s popular in the US then a warning delivered via an ISP seems to be more likely than elsewhere. However, the misconception that the monitoring is done by ISPs persists, despite that not being the case.

ISPs do not monitor users’ file-sharing activity, anti-piracy companies do. They can grab an IP address the second someone enters a torrent swarm, or even connects to a tracker. It happens in an instant, at a time of their choosing. Quickly jumping in and out of a torrent is no guarantee and the fallacy of not getting caught due to a failure to seed is just that – a fallacy.

But perhaps the most important thing is that after five warnings and a disconnection, the Reddit user decided to take action. Sadly for the people behind Rosanne, it’s not exactly the reaction they’d have hoped for.

“I do not want to push it but I am curious to what happens 6th time, and if I would even be safe behind a VPN,” he wrote.

“Just want to learn how to use a VPN and Sonarr and have a guilt free stress free torrent watching.”

Of course, there was no shortage of advice.

“If you have gotten 5 notices, you really should of learnt [sic] how to use a VPN before now,” one poster noted, perhaps inevitably.

But curiously, or perhaps obviously given the number of previous warnings, the fifth warning didn’t come as a surprise to the user.

“I knew they were going to hit me for it. I just didn’t think a 195mb file would do it. They were getting me for Disney movies in the past,” he added.

So how do you grab the attention of a persistent infringer like this? Five warnings and a suspension apparently. But clearly, not even that is a guarantee of success. Perhaps this is why most ‘strike’ schemes tend to give up on people who can’t be rehabilitated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Confused About the Hybrid Cloud? You’re Not Alone

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/confused-about-the-hybrid-cloud-youre-not-alone/

Hybrid Cloud. What is it?

Do you have a clear understanding of the hybrid cloud? If you don’t, it’s not surprising.

Hybrid cloud has been applied to a greater and more varied number of IT solutions than almost any other recent data management term. About the only thing that’s clear about the hybrid cloud is that the term hybrid cloud wasn’t invented by customers, but by vendors who wanted to hawk whatever solution du jour they happened to be pushing.

Let’s be honest. We’re in an industry that loves hype. We can’t resist grafting hyper, multi, ultra, super, and other prefixes onto the beginnings of words to entice customers with something new and shiny. The alphabet soup of cloud-related terms can include various options for where the cloud is located (on-premises, off-premises), whether the resources are private or shared to some degree (private, community, public), what type of services are offered (storage, computing), and what type of orchestrating software is used to manage the workflow and the resources. With so many moving parts, it’s no wonder potential users are confused.

Let’s take a step back, try to clear up the misconceptions, and come up with a basic understanding of what the hybrid cloud is. To be clear, this is our viewpoint. Others are free to do what they like, so bear that in mind.

So, What is the Hybrid Cloud?

To get beyond the hype, let’s start with Forrester Research’s idea of the hybrid cloud: “One or more public clouds connected to something in my data center. That thing could be a private cloud; that thing could just be traditional data center infrastructure.”

To put it simply, a hybrid cloud is a mash-up of on-premises and off-premises IT resources.

To expand on that a bit, we can say that the hybrid cloud refers to a cloud environment made up of a mixture of on-premises private cloud resources combined with third-party public cloud resources that use some kind of orchestration between them. The advantage of the hybrid cloud model is that it allows workloads and data to move between private and public clouds in a flexible way as demands, needs, and costs change, giving businesses greater flexibility and more options for data deployment and use.

In other words, if you have some IT resources in-house that you are replicating or augmenting with an external vendor, congrats, you have a hybrid cloud!

Private Cloud vs. Public Cloud

The cloud is really just a collection of purpose-built servers. In a private cloud, the servers are dedicated to a single tenant or a group of related tenants. In a public cloud, the servers are shared between multiple unrelated tenants (customers). A public cloud is off-site, while a private cloud can be either on-site (on-premises) or off-site (off-premises).

As an example, let’s look at a hybrid cloud meant for data storage, a hybrid data cloud. A company might set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage to save cost and reduce the amount of storage needed on-site. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
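To make that concrete, here is a rough sketch (in Rust, purely for illustration) of what such an age-based rule might look like. The ./accounting path and the one-year threshold are invented for this example, and the actual migration to cloud storage is left as a comment:

use std::fs;
use std::time::{Duration, SystemTime};

fn main() -> std::io::Result<()> {
    // Hypothetical policy: anything under ./accounting that hasn't been
    // modified in a year is a candidate to move to off-premises cloud storage.
    let one_year = Duration::from_secs(365 * 24 * 60 * 60);
    let now = SystemTime::now();

    for entry in fs::read_dir("./accounting")? {
        let entry = entry?;
        let metadata = entry.metadata()?;
        if !metadata.is_file() {
            continue;
        }
        let modified = metadata.modified()?;
        if now.duration_since(modified).unwrap_or_default() > one_year {
            // A real implementation would upload the file to cloud storage and
            // then remove (or stub out) the local copy; here we only report it.
            println!("would migrate: {}", entry.path().display());
        }
    }
    Ok(())
}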

The hybrid cloud concept also contains cloud computing. For example, at the end of the quarter, order processing application instances can be spun up off-premises in a hybrid computing cloud as needed to add to on-premises capacity.

Hybrid Cloud Benefits

If we accept that the hybrid cloud combines the best elements of private and public clouds, then its benefits are clear. Two primary benefits result from the blending of private and public clouds.

Diagram of the Components of the Hybrid Cloud

Benefit 1: Flexibility and Scalability

Undoubtedly, the primary advantage of the hybrid cloud is its flexibility. It takes time and money to manage in-house IT infrastructure, and adding capacity requires advance planning.

The cloud is ready and able to provide IT resources whenever needed on short notice. The term cloud bursting refers to the on-demand and temporary use of the public cloud when demand exceeds resources available in the private cloud. For example, some businesses experience seasonal spikes that can put an extra burden on private clouds. These spikes can be taken up by a public cloud. Demand also can vary with geographic location, events, or other variables. The public cloud provides the elasticity to deal with these and other anticipated and unanticipated IT loads. The alternative would be fixed cost investments in on-premises IT resources that might not be efficiently utilized.

For a data storage user, the on-premises private cloud storage provides, among other benefits, the highest speed access. For data that is accessed less frequently, or that doesn’t need the absolute lowest latency, it makes sense for the organization to move it to a location that is secure but less expensive. The data is still readily available, and the public cloud provides a better platform for sharing the data with specific clients, users, or the general public.

Benefit 2: Cost Savings

The public cloud component of the hybrid cloud provides cost-effective IT resources without incurring capital expenses and labor costs. IT professionals can determine the best configuration, service provider, and location for each service, thereby cutting costs by matching the resource with the task best suited to it. Services can be easily scaled, redeployed, or reduced when necessary, saving costs through increased efficiency and avoiding unnecessary expenses.

Comparing Private vs Hybrid Cloud Storage Costs

To get an idea of the difference in storage costs between a purely on-premises solution and one that uses a hybrid of private and public storage, we’ll present two scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table uses the same format; all we’ve done is change how the data is distributed between the private (on-premises) cloud and the public (off-premises) cloud. We are using the costs for our own B2 Cloud Storage in this example. The math can be adapted for any set of numbers you wish to use.

Scenario 1: 100% of data in on-premises storage

Data stored on-premises (100%)             100 TB       1,000 TB     2,000 TB
Monthly cost, low ($12/TB/month)           $1,200       $12,000      $24,000
Monthly cost, high ($20/TB/month)          $2,000       $20,000      $40,000

Scenario 2: 20% of data on-premises, 80% in public cloud storage (B2)

Data stored on-premises (20%)              20 TB        200 TB       400 TB
Data stored in cloud (80%)                 80 TB        800 TB       1,600 TB
On-premises cost, low ($12/TB/month)       $240         $2,400       $4,800
On-premises cost, high ($20/TB/month)      $400         $4,000       $8,000
Public cloud cost, low ($5/TB/month, B2)   $400         $4,000       $8,000
Public cloud cost, high ($20/TB/month)     $1,600       $16,000      $32,000
Total cost, low                            $640         $6,400       $12,800
Total cost, high                           $2,000       $20,000      $40,000

As can be seen in the numbers above, using a hybrid cloud solution and storing 80% of the data in the cloud with a provider such as Backblaze B2 can result in significant savings over storing only on-premises. For other cost scenarios, see the B2 Cost Calculator.
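If you’d like to re-run the comparison with your own numbers, here is a small sketch of the arithmetic behind the tables above (the $12/TB and $5/TB figures are the example rates used in this post, not fixed prices):

// Monthly storage cost for a given split between on-premises and cloud storage.
fn monthly_cost(total_tb: f64, on_prem_share: f64, on_prem_rate: f64, cloud_rate: f64) -> f64 {
    let on_prem_tb = total_tb * on_prem_share;
    let cloud_tb = total_tb - on_prem_tb;
    on_prem_tb * on_prem_rate + cloud_tb * cloud_rate
}

fn main() {
    for &tb in &[100.0, 1_000.0, 2_000.0] {
        // Scenario 1: everything on-premises at the low $12/TB/month rate.
        let all_on_prem = monthly_cost(tb, 1.0, 12.0, 0.0);
        // Scenario 2: 20% on-premises at $12/TB/month, 80% in B2 at $5/TB/month.
        let hybrid = monthly_cost(tb, 0.2, 12.0, 5.0);
        println!("{:>7} TB: all on-prem ${:>9.0}, hybrid ${:>9.0}", tb, all_on_prem, hybrid);
    }
}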

When Hybrid Might Not Always Be the Right Fit

There are circumstances where the hybrid cloud might not be the best solution. Smaller organizations operating on a tight IT budget might best be served by a purely public cloud solution. The cost of setting up and running private servers is substantial.

An application that requires the highest possible speed might not be suitable for hybrid, depending on the specific cloud implementation. While latency is a factor in data storage for some users, it is less of a factor for uploading and downloading data than it is for organizations using the hybrid cloud for computing. Because Backblaze recognized the importance of speed and low latency for customers wishing to use computing on data stored in B2, we directly connected our data centers with those of our computing partners, ensuring that latency would not be an issue even for a hybrid cloud computing solution.

It is essential to have a good understanding of workloads and their key characteristics in order to make the hybrid cloud work well for you. Each application needs to be examined for the right mix of private cloud, public cloud, and traditional IT resources that fits the particular workload in order to benefit most from a hybrid cloud architecture.

The Hybrid Cloud Can Be a Win-Win Solution

From the high altitude perspective, any solution that enables an organization to respond in a flexible manner to IT demands is a win. Avoiding big upfront capital expenses for in-house IT infrastructure will appeal to the CFO. Being able to quickly spin up IT resources as they’re needed will appeal to the CTO and VP of Operations.

Should You Go Hybrid?

We’ve arrived at the bottom line and the question is, should you or your organization embrace hybrid cloud infrastructures?

According to 451 Research, by 2019, 69% of companies will operate in hybrid cloud environments, and 60% of workloads will be running in some form of hosted cloud service (up from 45% in 2017). That indicates that the benefits of the hybrid cloud appeal to a broad range of companies.

In Two Years, More Than Half of Workloads Will Run in Cloud

Clearly, depending on an organization’s needs, there are advantages to a hybrid solution. While it might have been possible to dismiss the hybrid cloud in the early days of the cloud as nothing more than a buzzword, that’s no longer true. The hybrid cloud has evolved beyond the marketing hype to offer real solutions for an increasingly complex and challenging IT environment.

If an organization approaches the hybrid cloud with sufficient planning and a structured approach, a hybrid cloud can deliver on-demand flexibility, empower legacy systems and applications with new capabilities, and become a catalyst for digital transformation. The result can be an elastic and responsive infrastructure that has the ability to quickly respond to changing demands of the business.

As data management professionals increasingly recognize the advantages of the hybrid cloud, we can expect more and more of them to embrace it as an essential part of their IT strategy.

Tell Us What You’re Doing with the Hybrid Cloud

Are you currently embracing the hybrid cloud, or are you still uncertain or hanging back because you’re satisfied with how things are currently? Maybe you’ve gone totally hybrid. We’d love to hear your comments below on how you’re dealing with the hybrid cloud.


[1] Private cloud can be on-premises or a dedicated off-premises facility.

[2] Hybrid cloud orchestration solutions are often proprietary, vertical, and task dependent.

The post Confused About the Hybrid Cloud? You’re Not Alone appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Audit Trail Overview

Post Syndicated from Bozho original https://techblog.bozho.net/audit-trail-overview/

As part of my current project (secure audit trail) I decided to make a survey about the use of audit trail “in the wild”.

I haven’t written in detail about this project of mine (unlike with some other projects). Mostly because it’s commercial and I don’t want to use my blog as a direct promotion channel (though I am doing that at the moment, ironically). But the aim of this post is to shed some light on how audit trail is used.

The survey can be found here. The questions are basically: does your current project have audit trail functionality, and if yes, is it protected from tampering. If not – do you think you should have such functionality.

The results are interesting (although there were only around 50 respondents).

So more than half of the systems (on which respondents are working) don’t have an audit trail. While an audit trail is recommended by information security and related standards, it may not find a place in the “busy schedule” of a software project, even though it’s fairly easy to provide a trivial implementation (e.g. I’ve written about how to quickly set one up with Hibernate and Spring)

A trivial implementation might do in many cases, but if the audit log is critical (e.g. access to sensitive data, financial operations, etc.), then relying on a trivial implementation might not be enough. In other words – if the sysadmin can access the database and delete or modify the audit trail, then it doesn’t serve much purpose. Hence the next question – how is the audit trail protected from tampering:

And apparently, of the less than 50% of projects with an audit trail, around half have no technical guarantees that the audit trail can’t be tampered with. My guess is that the real number is higher, because people have different understandings of what technical measures are sufficient. E.g. someone may think that digitally signing your log files (or log records) is sufficient, but in fact it isn’t, as whole files (or records) can be deleted (or fully replaced) without a way to detect that. Timestamping can help (and a good audit trail solution should have that), but it doesn’t guarantee the order of events or prevent a malicious actor from deleting or inserting fake ones. And if timestamping is done on a log file level, then any not-yet-timestamped log file is vulnerable to manipulation.

I’ve written about event logs before and their two flavours – event sourcing and audit trail. An event log can effectively be considered audit trail, but you’d need additional security to avoid the problems mentioned above.

So, let’s see what various levels of security and usefulness of audit logs would look like. There are many papers on the topic (e.g. this and this), and they often go into the intricate details of how logging should be implemented. I’ll try to give an overview of the approaches:

  • Regular logs – rely on regular INFO log statements in the production logs to look for hints of what has happened. This may be okay, but it’s harder to find evidence (as there is non-auditable data in those log files as well), and it’s not very secure – usually logs are collected centrally (e.g. with Graylog), and whoever has access to the log collector’s database (or search engine, in Graylog’s case) can manipulate the data and not be caught
  • Designated audit trail – whether it’s stored in the database or in log files. It has the proper business-event-level granularity, but again doesn’t prevent or detect tampering. With lower-risk systems that may be perfectly okay.
  • Timestamped logs – whether it’s log files or (harder to implement) database records. Timestamping is good, but if it’s not an external service, a malicious actor can get access to the local timestamping service and issue fake timestamps to re-timestamp tampered files. Even if the timestamping is not compromised, whole entries can be deleted. The fact that they are missing can sometimes be deduced based on other factors (e.g. hour of rotation), but regularly verifying that is extra effort and may not always be feasible.
  • Hash chaining – each entry (or sequence of log files) could be chained, just like blockchain transactions – the next one containing the hash of the previous one (a minimal sketch of this idea follows the list). This is a good solution (whether it’s local, external or 3rd party), but it carries the risk of someone modifying or deleting a record, taking your entire chain, and re-hashing it. All the checks will pass, but the data will not be correct
  • Hash chaining with anchoring – the head of the chain (the hash of the last entry/block) could be “anchored” to an external service that is outside the capabilities of a malicious actor. Ideally, a public blockchain, alternatively – paper, a public service (twitter), email, etc. That way a malicious actor can’t just rehash the whole chain, because any check against the external service would fail.
  • WORM storage (write once, read many). You could send your audit logs almost directly to WORM storage, where it’s impossible to replace data. However, that is not ideal, as WORM storage can be slow and expensive. For example, AWS Glacier has rather long retrieval times, which makes searching through recent data impractical. It’s actually cheaper than S3, and you can have expiration policies, but having to run your own WORM storage is expensive. It is a good idea to eventually send the logs to WORM storage, but the “fresh” audit trail should probably not be “archived”, so that it remains searchable and some actionable insight can be gained from it.
  • All-in-one – applying all of the above “just in case” may be unnecessary for every project out there, but that’s what I decided to do at LogSentinel. Business-event granularity with timestamping, hash chaining, anchoring, and eventually putting to WORM storage – I think that provides both security guarantees and flexibility.
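To make the hash-chaining idea above concrete, here is a minimal, self-contained sketch (my own illustration, not code from LogSentinel or the referenced papers; std’s DefaultHasher stands in for the cryptographic hash a real audit trail would use):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One audit record: the business event plus the hash of the previous record.
#[derive(Hash)]
struct AuditRecord {
    prev_hash: u64,
    event: String,
}

// DefaultHasher is only a placeholder; a real audit trail would use a
// cryptographic hash (e.g. SHA-256) so that collisions can't be engineered.
fn hash_record(record: &AuditRecord) -> u64 {
    let mut hasher = DefaultHasher::new();
    record.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let events = ["user 42 logged in", "user 42 viewed record 7", "user 42 exported a report"];

    // Build the chain: every record stores the hash of its predecessor.
    let mut chain: Vec<(AuditRecord, u64)> = Vec::new();
    let mut prev_hash = 0u64; // genesis value
    for event in &events {
        let record = AuditRecord { prev_hash, event: event.to_string() };
        prev_hash = hash_record(&record);
        chain.push((record, prev_hash));
    }

    // Verify the chain: modifying or deleting any record breaks every link after it.
    let mut expected_prev = 0u64;
    for (record, stored_hash) in &chain {
        assert_eq!(record.prev_hash, expected_prev, "chain is broken");
        assert_eq!(hash_record(record), *stored_hash, "record was tampered with");
        expected_prev = *stored_hash;
    }

    // The head hash is the value you would "anchor" externally (blockchain, tweet, paper...).
    println!("verified {} records; head hash = {:x}", chain.len(), expected_prev);
}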

I hope the overview is useful and the results from the survey shed some light on how this aspect of information security is underestimated.

The post Audit Trail Overview appeared first on Bozho's tech blog.

Piracy Falls 6% in Spain, But It’s Still a Multi-Billion Euro Problem

Post Syndicated from Andy original https://torrentfreak.com/piracy-falls-6-in-spain-but-its-still-a-multi-billion-euro-problem-180409/

The Coalition of Creators and Content Industries, which represents Spain’s leading entertainment industry companies, is keeping a close eye on the local piracy landscape.

The outfit has just published its latest Piracy Observatory and Digital Content Consumption Habits report, carried out by the independent consultant GFK, and there is good news to report on headline piracy figures.

During 2017, the report estimates that people accessed unlicensed digital content just over four billion times, which equates to almost 21.9 billion euros in lost revenues. While this is a significant number, it’s a decrease of 6% compared to 2016 and an accumulated decrease of 9% compared to 2015, the coalition reports.

Overall, movies are most popular with pirates, with 34% helping themselves to content without paying.

“The volume of films accessed illegally during 2017 was 726 million, with a market value of 5.7 billion euros, compared to 6.9 billion in 2016. 35% of accesses happened while the film was still on screens in cinema theaters, while this percentage was 33% in 2016,” the report notes.

TV shows are in a close second position with 30% of users gobbling up 945 million episodes illegally during 2017. A surprisingly high 24% of users went for eBooks, with music relegated to fourth place with ‘just’ 22%, followed by videogames (11%) and football (10%).

The reasons given by pirates for their habits are both varied and familiar. 51% said that original content is too expensive while 43% said that taking the illegal route “is fast and easy”. Half of the pirates said that simply paying for an internet connection was justification for getting content for free.

A quarter of all pirates believe that they aren’t doing anyone any harm, with the same number saying they get content without paying because there are no consequences for doing so. But it isn’t just pirates themselves in the firing line.

Perhaps unsurprisingly given the current climate, the report heavily criticizes search engines for facilitating access to infringing content.

“With 75%, search engines are the main method of accessing illegal content and Google is used for nine out of ten accesses to pirate content,” the report reads.

“Regarding social networks, Facebook is the most used method of access (83%), followed by Twitter (42%) and Instagram (34%). Therefore it is most valuable that Facebook has reached agreements with different industries to become a legal source and to regulate access to content.”

Once on pirate sites, some consumers reported difficulties in determining whether they’re legal or not. Around 15% said that they had “big difficulties” telling whether a site is authorized with 44% saying they had problems “sometimes”.

That being said, given the amount of advertising on pirate sites, it’s no surprise that most knew a pirate site when they visited one and, according to the report, advertising placement is only on the up.

Just over a quarter of advertising appearing on pirate sites features well-known brands, although this is a reduction from more than 37% in 2016. This needs to be further improved, the coalition says, via collaboration between all parties involved in the industry.

A curious claim from the report is that 81% of pirate site users said they were required to register in order to use a platform. This resulted in “transferring personal data” to pirate site operators who gather it in databases that are used for profitable “e-marketing campaigns”.

“Pirate sites also get much more valuable data than one could imagine which allow them to get important economic benefits, as for example, Internet surfing habits, other websites visited by consumers, preferences, likes, and purchase habits,” the report states.

So what can be done to reduce consumer reliance on pirate sites? The report finds that consumers are largely in line with how the entertainment industries believe piracy should or could be tackled.

“The most efficient measures against piracy would be, according to the internet users’ own view, blocking access to the website offering content (78%) and penalizing internet providers (73%),” the report reads.

“Following these two, the best measure to reduce infringements would be, according to consumers, to promote social awareness campaigns against piracy (61%). This suggests that increased collaboration between the content sector and the ISPs (Internet Service Providers) could count on consumers’ support and positive assessment.”

Finally, consumers in Spain are familiar with the legal options, should they wish to take that route in future. Netflix awareness in the country is at 91%, Spotify at 81%, with Movistar+ and HBO at 80% and 68% respectively.

“This invalidates the reasons given by pirate users who said they did so because of the lack of an accessible legal offer at affordable prices,” the report adds.

However, those who take the plunge into the legal world don’t always kick the pirate habit, with the paper stating that users of pirate sites tend to carry on pirating, although they do pirate less in some sectors, notably music. The study also departs from findings in other regions that pirates can also be avid consumers of legitimate content.

Several reports, from the UK, Sweden, Australia, and even from Hollywood, have clearly indicated that pirates are the entertainment industries’ best customers.

In Spain, however, the situation appears to be much more pessimistic, with only 8% of people who access illegal digital content paying for legal content too. That seems low given that Netflix alone had more than a million Spanish subscribers at the end of 2017 and six million Spanish households currently subscribe to other pay TV services.

The report is available here (Spanish, pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight. Batteries, leads, a camera and Raspberry Pi, plus an unusually themed payload. The caption reads ‘The Queen, The Duke of York, and my TARDIS’, and sums up Dave’s maker career in a heartbeat.

David Akerman on Twitter

The Queen, The Duke of York, and my TARDIS 🙂 #UKHAS #RaspberryPi

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave’s most recent flight broke a new record. On 13 August 2017, his HAB payload was able to send back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. The most recent record-breaking flight took place on 13 August 2017, when Dave’s Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch, at 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.

Dave Akerman The MagPi Raspberry Pi Community Profile

The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seems a natural progression for him. Throw in his coding skills from learning to program on a Teletype and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification he got from receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, thanks to the Pi, Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records with a flight.

Dave Akerman The MagPi Raspberry Pi Community Profile Morph

One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he has a few more Pis for tinkering purposes.

The post Community profile: Dave Akerman appeared first on Raspberry Pi.

Japan Seeks to Outmaneuver Constitution With Piracy Blocking Proposals

Post Syndicated from Andy original https://torrentfreak.com/japan-seeks-to-outmaneuver-constitution-with-piracy-blocking-proposals-180406/

Speaking at a news conference last month, Japan’s Chief Cabinet Secretary Yoshihide Suga said that the Japanese government is considering measures to prohibit access to pirate sites, initially to protect the country’s manga and anime industries.

“The damage is getting worse. We are considering the possibilities of all measures including site blocking,” he said.

But Japan has a problem.

The country has no specific legislation that allows for site-blocking of any kind, let alone on copyright infringement grounds. In fact, the constitution expressly supports freedom of speech and expressly forbids censorship.

“Freedom of assembly and association as well as speech, press and all other forms of expression are guaranteed,” Article 21 reads.

“No censorship shall be maintained, nor shall the secrecy of any means of communication be violated,” the constitution adds.

Nevertheless, the government appears determined to do something about the piracy threat. As detailed last month, that looks like manifesting itself in a site-blocking regime. But how will this be achieved?

Mainichi reports that the government will argue there are grounds for “averting present danger”, a phrase that’s detailed in Article 37 of Japan’s Penal Code.

“An act unavoidably performed to avert a present danger to the life, body, liberty or property of oneself or any other person is not punishable only when the harm produced by such act does not exceed the harm to be averted,” the Article (pdf) begins.

It’s fairly clear that this branch of Japanese law was never designed for use against pirate sites. Furthermore, there is also a clause noting that where an act (in this case blocking) causes excessive harm it may lead “to the punishment being reduced or may exculpate the offender in light of the circumstances.”

How, when, or if that ever comes into play will remain to be seen but in common with most legal processes against pirate site operators elsewhere, few turn up to argue in their defense. A contested process is therefore unlikely.

It appears that rather than forcing Internet providers into compliance, the government will ask for their “understanding” on the basis that damage is being done to the anime and manga industries. ISPs reportedly already cooperate to censor child abuse sites so it’s hoped a similar agreement can be reached on piracy.

Initially, the blocking requests will relate to just three as-yet-unnamed platforms, one local and two based outside the country. Of course, this is just the tip of the iceberg and if ISPs agree to block this trio, more demands are sure to follow.

Meanwhile, the government is also working towards tightening up the law to deal with an estimated 200 local sites that link, but do not host pirated content. Under current legislation, linking isn’t considered illegal, which is a major problem given the manner in which most file-sharing and streaming is carried out these days.

However, there are also concerns that any amendments to tackle linking could fall foul of the constitutional right to freedom of expression. It’s a problem that has been tackled elsewhere, notably in Europe, but in most cases the latter has been trumped by the former. In any event, the government will need to tread carefully.

The proposals are expected to be formally approved at a Cabinet meeting on crime prevention policy later this month, Mainichi reports.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Git v2.17.0 released

Post Syndicated from corbet original https://lwn.net/Articles/750815/rss

Version 2.17.0 of the Git source-code management system is out. It
includes a long list of relatively minor tweaks. “Since Git 1.7.9,
‘git merge’ defaulted to --no-ff (i.e. even when the side branch being
merged is a descendant of the current commit, create a merge commit instead
of fast-forwarding) when merging a tag object. This was appropriate
default for integrators who pull signed tags from their downstream
contributors, but caused an unnecessary merges when used by downstream
contributors who habitually ‘catch up’ their topic branches with tagged
releases from the upstream. Update ‘git merge’ to default to --no-ff only
when merging a tag object that does *not* sit at its usual place in
refs/tags/ hierarchy, and allow fast-forwarding otherwise, to mitigate the
problem.”

UK IPTV Provider ACE Calls it Quits, Cites Mounting Legal Pressure

Post Syndicated from Andy original https://torrentfreak.com/uk-iptv-provider-ace-calls-it-quits-cites-mounting-legal-pressure-180402/

Terms including “Kodi box” are now in common usage in the UK and thanks to continuing coverage in the tabloid media, more and more people are learning that free content is just a few clicks away.

In parallel, premium IPTV services are also on the up. In basic terms, these provide live TV and sports through an Internet connection in a consumer-friendly way. When bundled with beautiful interfaces and fully functional Electronic Program Guides (EPG), they’re almost indistinguishable from services offered by Sky and BT Sport, for example.

These come at a price, typically up to £10 per month or £20 for a three-month package, but for the customer this represents good value for money. Many providers offer several thousand channels in decent quality and reliability is much better than free streams. This kind of service was offered by prominent UK provider ACE TV but an announcement last December set alarm bells ringing.

“It saddens me to announce this, but due to pressure from the authorities in the UK, we are no longer selling new subscriptions. This obviously includes trials,” ACE said in a statement.

ACE insisted that it would continue as a going concern, servicing existing customers. However, it did keep its order books open for a while longer, giving people one last chance to subscribe to the service for anything up to a year. And with that ACE continued more quietly in the background, albeit with a disabled Facebook page.

But things were not well in ACE land. Like all major IPTV providers delivering services to the UK, ACE was subjected to blocking action by the English Premier League and UEFA. High Court injunctions allow ISPs in the UK to block their pirate streams in real-time, meaning that matches were often rendered inaccessible to ACE’s customers.

While this blocking can be mitigated when the customer uses a VPN, most don’t want to go to the trouble. Some IPTV providers have engaged in a game of cat-and-mouse with the blocking efforts, some with an impressive level of success. However, it appears that the nuisance eventually took its toll on ACE.

“The ISPs in the UK and across Europe have recently become much more aggressive in blocking our service while football games are in progress,” ACE said in a statement last month.

“In order to get ourselves off of the ISP blacklist we are going to black out the EPL games for all users (including VPN users) starting on Monday. We believe that this will enable us to rebuild the bypass process and successfully provide you with all EPL games.”

People familiar with the blocking process inform TF that this is unlikely to have worked.

Although nobody outside the EPL’s partners knows exactly how the system works, it appears that anti-piracy companies simply subscribe to IPTV services themselves and extract the IP addresses serving the content. ISPs then block them. No pause would’ve helped the situation.

Then, on March 24, another announcement indicated that ACE probably wouldn’t make it very far into 2018.

“It is with sorrow that we announce that we are no longer accepting renewals, upgrades to existing subscriptions or the purchase of new credits. We plan to support existing subscriptions until they expire,” the team wrote.

“EPL games including highlights continue to be blocked and are not expected to be reinstated before the end of the season.”

The suggestion was that ACE would keep going, at least for a while, but chat transcripts with the company obtained by TF last month indicated that ACE would probably shut down, sooner rather than later. Less than a week on, that proved to be the case.

On or around March 29, ACE began sending emails out to customers, announcing the end of the company.

“We recently announced that Ace was no longer accepting renewals or offering new reseller credits but planned to support existing subscription. Due to mounting legal pressure in the UK we have been forced to change our plans and we are now announcing that Ace will close down at the end of March,” the email read.

“This means that from April 1st onwards the Ace service will no longer work.”

April 1 was yesterday and it turns out it wasn’t a joke. Customers who paid in advance no longer have a service and those who paid a year up front are particularly annoyed. So-called ‘re-sellers’ of ACE are fuming more than most.

Re-sellers effectively act as sales agents for IPTV providers, buying access to the service at a reduced rate and making a small profit on each subscriber they sign up. They get a nice web interface to carry out the transactions and it’s something that anyone can do.

However, this generally requires investment from the re-seller in order to buy ‘credits’ up front, which are used to sell services to new customers. Those who invested money in this way with ACE are now in trouble.

“If anyone from ACE is reading here, yer a bunch of fuckin arseholes. I hope your next shite is a hedgehog!!” one shouted on Reddit. “Being a reseller for them and losing hundreds a pounds is bad enough!!”

While the loss of a service is probably a shock to more recent converts to the world of IPTV, those with experience of any kind of pirate TV product should already be well aware that this is nothing out of the ordinary.

For those who bought hacked or cloned satellite cards in the 1990s, to those who used ‘chipped’ cable boxes a little later on, the free rides all come to an end at some point. It’s just a question of riding the wave when it arrives and paying attention to the next big thing, without investing too much money at the wrong time.

For ACE’s former customers, it’s simply a case of looking for a new provider. There are plenty of them, some with zero intent of shutting down. There are rumors that ACE might ‘phoenix’ themselves under another name but that’s also par for the course when people feel they’re owed money and suspicions are riding high.

“Please do not ask if we are rebranding/setting up a new service, the answer is no,” ACE said in a statement.

And so the rollercoaster continues…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

A geometric Rust adventure

Post Syndicated from Eevee original https://eev.ee/blog/2018/03/30/a-geometric-rust-adventure/

Hi. Yes. Sorry. I’ve been trying to write this post for ages, but I’ve also been working on a huge writing project, and apparently I have a very limited amount of writing mana at my disposal. I think this is supposed to be a Patreon reward from January. My bad. I hope it’s super great to make up for the wait!

I recently ported some math code from C++ to Rust in an attempt to do a cool thing with Doom. Here is my story.

The problem

I presented it recently as a conundrum (spoilers: I solved it!), but most of those details are unimportant.

The short version is: I have some shapes. I want to find their intersection.

Really, I want more than that: I want to drop them all on a canvas, intersect everything with everything, and pluck out all the resulting polygons. The input is a set of cookie cutters, and I want to press them all down on the same sheet of dough and figure out what all the resulting contiguous pieces are. And I want to know which cookie cutter(s) each piece came from.

But intersection is a good start.

Example of the goal.  Given two squares that overlap at their corners, I want to find the small overlap piece, plus the two L-shaped pieces left over from each square

I’m carefully referring to the input as shapes rather than polygons, because each one could be a completely arbitrary collection of lines. Obviously there’s not much you can do with shapes that aren’t even closed, but at the very least, I need to handle concavity and multiple disconnected polygons that together are considered a single input.

This is a non-trivial problem with a lot of edge cases, and offhand I don’t know how to solve it robustly. I’m not too eager to go figure it out from scratch, so I went hunting for something I could build from.

(Infuriatingly enough, I can just dump all the shapes out in an SVG file and any SVG viewer can immediately solve the problem, but that doesn’t quite help me. Though I have had a few people suggest I just rasterize the whole damn problem, and after all this, I’m starting to think they may have a point.)

Alas, I couldn’t find a Rust library for doing this. I had a hard time finding any library for doing this that wasn’t a massive fully-featured geometry engine. (I could’ve used that, but I wanted to avoid non-Rust dependencies if possible, since distributing software is already enough of a nightmare.)

A Twitter follower directed me towards a paper that described how to do very nearly what I wanted and nothing else: “A simple algorithm for Boolean operations on polygons” by F. Martínez (2013). Being an academic paper, it’s trapped in paywall hell; sorry about that. (And as I understand it, none of the money you’d pay to get the paper would even go to the authors? Is that right? What a horrible and predatory system for discovering and disseminating knowledge.)

The paper isn’t especially long, but it does describe an awful lot of subtle details and is mostly written in terms of its own reference implementation. Rather than write my own implementation based solely on the paper, I decided to try porting the reference implementation from C++ to Rust.

And so I fell down the rabbit hole.

The basic algorithm

Thankfully, the author has published the sample code on his own website, if you want to follow along. (It’s the bottom link; the same author has, confusingly, published two papers on the same topic with similar titles, four years apart.)

If not, let me describe the algorithm and how the code is generally laid out. The algorithm itself is based on a sweep line, where a vertical line passes across the plane and ✨ does stuff ✨ as it encounters various objects. This implementation has no physical line; instead, it keeps track of which segments from the original polygon would be intersecting the sweep line, which is all we really care about.

A vertical line is passing rightwards over a couple of intersecting shapes.  The line currently intersects two of the shapes' sides, and these two sides are the "sweep list"

The code is all bundled inside a class with only a single public method, run, because… that’s… more object-oriented, I guess. There are several helper methods, and state is stored in some attributes. A rough outline of run is:

  1. Run through all the line segments in both input polygons. For each one, generate two SweepEvents (one for each endpoint) and add them to a std::deque for storage.

    Add pointers to the two SweepEvents to a std::priority_queue, the event queue. This queue uses a custom comparator to order the events from left to right, so the top element is always the leftmost endpoint.

  2. Loop over the event queue (where an “event” means the sweep line passed over the left or right end of a segment). Encountering a left endpoint means the sweep line is newly touching that segment, so add it to a std::set called the sweep list. An important point is that std::set is ordered, and the sweep list uses a comparator that keeps segments in order vertically.

    Encountering a right endpoint means the sweep line is leaving a segment, so that segment is removed from the sweep list.

  3. When a segment is added to the sweep list, it may have up to two neighbors: the segment above it and the segment below it. Call possibleIntersection to check whether it intersects either of those neighbors. (This is nearly sufficient to find all intersections, which is neat.)

  4. If possibleIntersection detects an intersection, it will split each segment into two pieces then and there. The old segment is shortened in-place to become the left part, and a new segment is created for the right part. The new endpoints at the point of intersection are added to the event queue.

  5. Some bookkeeping is done along the way to track which original polygons each segment is inside, and eventually the segments are reconstructed into new polygons.

Hopefully that’s enough to follow along. It took me an inordinately long time to tease this out. The comments aren’t especially helpful.

    std::deque<SweepEvent> eventHolder;    // It holds the events generated during the computation of the boolean operation

Syntax and basic semantics

The first step was to get something that rustc could at least parse, which meant translating C++ syntax to Rust syntax.

This was surprisingly straightforward! C++ classes become Rust structs. (There was no inheritance here, thankfully.) All the method declarations go away. Method implementations only need to be indented and wrapped in impl.

I did encounter some unnecessarily obtuse uses of the ternary operator:

(prevprev != sl.begin()) ? --prevprev : prevprev = sl.end();

Rust doesn’t have a ternary — you can use a regular if block as an expression — so I expanded these out.

C++ switch blocks become Rust match blocks, but otherwise function basically the same. Rust’s enums are scoped (hallelujah), so I had to explicitly spell out where enum values came from.

The only really annoying part was changing function signatures; C++ types don’t look much at all like Rust types, save for the use of angle brackets. Rust also doesn’t pass by implicit reference, so I needed to sprinkle a few &s around.

I would’ve had a much harder time here if this code had relied on any remotely esoteric C++ functionality, but thankfully it stuck to pretty vanilla features.

Language conventions

This is a geometry problem, so the sample code unsurprisingly has its own home-grown point type. Rather than port that type to Rust, I opted to use the popular euclid crate. Not only is it code I didn’t have to write, but it already does several things that the C++ code was doing by hand inline, like dot products and cross products. And all I had to do was add one line to Cargo.toml to use it! I have no idea how anyone writes C or C++ without a package manager.

The C++ code used getters, i.e. point.x (). I’m not a huge fan of getters, though I do still appreciate the need for them in lowish-level systems languages where you want to future-proof your API and the language wants to keep a clear distinction between attribute access and method calls. But this is a point, which is nothing more than two of the same numeric type glued together; what possible future logic might you add to an accessor? The euclid authors appear to side with me and leave the coordinates as public fields, so I took great joy in removing all the superfluous parentheses.

Polygons are represented with a Polygon class, which has some number of Contours. A contour is a single contiguous loop. Something you’d usually think of as a polygon would only have one, but a shape with a hole would have two: one for the outside, one for the inside. The weird part of this arrangement was that Polygon implemented nearly the entire STL container interface, then waffled between using it and not using it throughout the rest of the code. Rust lets anything in the same module access non-public fields, so I just skipped all that and used polygon.contours directly. Hell, I think I made contours public.

Finally, the SweepEvent type has a pol field that’s declared as an enum PolygonType (either SUBJECT or CLIPPING, to indicate which of the two inputs it is), but then some other code uses the same field as a numeric index into a polygon’s contours. Boy I sure do love static typing where everything’s a goddamn integer. I wanted to extend the algorithm to work on arbitrarily many input polygons anyway, so I scrapped the enum and this became a usize.


Then I got to all the uses of STL. I have only a passing familiarity with the C++ standard library, and this code actually made modest use of it, which caused some fun days-long misunderstandings.

As mentioned, the SweepEvents are stored in a std::deque, which is never read from. It took me a little thinking to realize that the deque was being used as an arena: it’s the canonical home for the structs so pointers to them can be tossed around freely. (It can’t be a std::vector, because that could reallocate and invalidate all the pointers; std::deque is probably a doubly-linked list, and guarantees no reallocation.)

Rust’s standard library does have a doubly-linked list type, but I knew I’d run into ownership hell here later anyway, so I think I replaced it with a Rust Vec to start with. It won’t compile either way, so whatever. We’ll get back to this in a moment.

The list of segments currently intersecting the sweep line is stored in a std::set. That type is explicitly ordered, which I’m very glad I knew already. Rust has two set types, HashSet and BTreeSet; unsurprisingly, the former is unordered and the latter is ordered. Dropping in BTreeSet and fixing some method names got me 90% of the way there.

Which brought me to the other 90%. See, the C++ code also relies on finding nodes adjacent to the node that was just inserted, via STL iterators.

next = prev = se->posSL = it = sl.insert(se).first;
(prev != sl.begin()) ? --prev : prev = sl.end();
++next;

I freely admit I’m bad at C++, but this seems like something that could’ve used… I don’t know, 1 comment. Or variable names more than two letters long. What it actually does is:

  1. Add the current sweep event (se) to the sweep list (sl), which returns a pair whose first element is an iterator pointing at the just-inserted event.

  2. Copies that iterator to several other variables, including prev and next.

  3. If the event was inserted at the beginning of the sweep list, set prev to the sweep list’s end iterator, which in C++ is a legal-but-invalid iterator meaning “the space after the end” or something. This is checked for in later code, to see if there is a previous event to look at. Otherwise, decrement prev, so it’s now pointing at the event immediately before the inserted one.

  4. Increment next normally. If the inserted event is last, then this will bump next to the end iterator anyway.

In other words, I need to get the previous and next elements from a BTreeSet. Rust does have bidirectional iterators, which BTreeSet supports… but BTreeSet::insert only returns a bool telling me whether or not anything was inserted, not the position. I came up with this:

let mut maybe_below = active_segments.range(..segment).last().map(|v| *v);
let mut maybe_above = active_segments.range(segment..).next().map(|v| *v);
active_segments.insert(segment);

The range method returns an iterator over a subset of the tree. The .. syntax makes a range (where the right endpoint is exclusive), so ..segment finds the part of the tree before the new segment, and segment.. finds the part of the tree after it. (The latter would start with the segment itself, except I haven’t inserted it yet, so it’s not actually there.)

Then the standard next() and last() methods on bidirectional iterators find me the element I actually want. But the iterator might be empty, so they both return an Option. Also, iterators tend to return references to their contents, but in this case the contents are already references, and I don’t want a double reference, so the map call dereferences one layer — but only if the Option contains a value. Phew!
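If range is unfamiliar, here’s the same neighbor-lookup trick in miniature, on a plain set of integers rather than sweep segments:

use std::collections::BTreeSet;

fn main() {
    let mut set: BTreeSet<i32> = [10, 20, 40].iter().copied().collect();
    let new = 30;
    // The element just below and just above the value we're about to insert:
    let below = set.range(..new).last().copied();
    let above = set.range(new..).next().copied();
    set.insert(new);
    assert_eq!(below, Some(20));
    assert_eq!(above, Some(40));
}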

This is slightly less efficient than the C++ code, since it has to look up where segment goes three times rather than just once. I might be able to get it down to two with some more clever finagling of the iterator, but microscopic performance considerations were a low priority here.

Finally, the event queue uses a std::priority_queue to keep events in a desired order and efficiently pop the next one off the top.

Except priority queues act like heaps, where the greatest (i.e., last) item is made accessible.

Sorting out sorting

C++ comparison functions return true to indicate that the first argument is less than the second argument. Sweep events occur from left to right. You generally implement sorts so that the first thing comes, erm, first.

But sweep events go in a priority queue, and priority queues surface the last item, not the first. This C++ code handled this minor wrinkle by implementing its comparison backwards.

struct SweepEventComp : public std::binary_function<SweepEvent, SweepEvent, bool> { // for sorting sweep events
// Compare two sweep events
// Return true means that e1 is placed at the event queue after e2, i.e,, e1 is processed by the algorithm after e2
bool operator() (const SweepEvent* e1, const SweepEvent* e2)
{
    if (e1->point.x () > e2->point.x ()) // Different x-coordinate
        return true;
    if (e2->point.x () > e1->point.x ()) // Different x-coordinate
        return false;
    if (e1->point.y () != e2->point.y ()) // Different points, but same x-coordinate. The event with lower y-coordinate is processed first
        return e1->point.y () > e2->point.y ();
    if (e1->left != e2->left) // Same point, but one is a left endpoint and the other a right endpoint. The right endpoint is processed first
        return e1->left;
    // Same point, both events are left endpoints or both are right endpoints.
    if (signedArea (e1->point, e1->otherEvent->point, e2->otherEvent->point) != 0) // not collinear
        return e1->above (e2->otherEvent->point); // the event associate to the bottom segment is processed first
    return e1->pol > e2->pol;
}
};

Maybe it’s just me, but I had a hell of a time just figuring out what problem this was even trying to solve. I still have to reread it several times whenever I look at it, to make sure I’m getting the right things backwards.

Making this even more ridiculous is that there’s a second implementation of this same sort, with the same name, in another file — and that one’s implemented forwards. And doesn’t use a tiebreaker. I don’t entirely understand how this even compiles, but it does!

I painstakingly translated this forwards to Rust. Unlike the STL, Rust doesn’t take custom comparators for its containers, so I had to implement ordering on the types themselves (which makes sense, anyway). I wrapped everything in the priority queue in a Reverse, which does what it sounds like.
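If you haven’t seen Reverse before, this toy example (integers, not sweep events) shows all it does:

use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // BinaryHeap is a max-heap; wrapping entries in Reverse flips the comparison,
    // so the smallest (i.e. leftmost) item pops first.
    let mut queue = BinaryHeap::new();
    for &x in &[5, 1, 3] {
        queue.push(Reverse(x));
    }
    assert_eq!(queue.pop(), Some(Reverse(1)));
    assert_eq!(queue.pop(), Some(Reverse(3)));
    assert_eq!(queue.pop(), Some(Reverse(5)));
}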

I’m fairly pleased with Rust’s ordering model. Most of the work is done in Ord, a trait with a cmp() method returning an Ordering (one of Less, Equal, and Greater). No magic numbers, no need to implement all six ordering methods! It’s incredible. Ordering even has some handy methods on it, so the usual case of “order by this, then by this” can be written as:

return self.point().x.cmp(&other.point().x)
    .then(self.point().y.cmp(&other.point().y));

Well. Just kidding! It’s not quite that easy. You see, the points here are composed of floats, and floats have the fun property that not all of them are comparable. Specifically, NaN is not less than, greater than, or equal to anything else, including itself. So IEEE 754 float ordering cannot be expressed with Ord. Unless you want to just make up an answer for NaN, but Rust doesn’t tend to do that.

Rust’s float types thus implement the weaker PartialOrd, whose method returns an Option<Ordering> instead. That makes the above example slightly uglier:

return self.point().x.partial_cmp(&other.point().x).unwrap()
    .then(self.point().y.partial_cmp(&other.point().y).unwrap())

Also, since I use unwrap() here, this code will panic and take the whole program down if the points are infinite or NaN. Don’t do that.

This caused some minor inconveniences in other places; for example, the general-purpose cmp::min() doesn’t work on floats, because it requires an Ord-erable type. Thankfully there’s a f64::min(), which handles a NaN by returning the other argument.
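If you’d rather pay that unwrap() cost in one place, the usual workaround is a newtype that promises never to hold a NaN. That’s not what I did in the port; it’s just a sketch of the idea, assuming the values are known to be finite:

use std::cmp::Ordering;

// Hypothetical newtype that promises its f64 is never NaN, which is what lets
// us hand out a total order. Nothing enforces the promise here; a real wrapper
// would check at construction time.
#[derive(Debug, PartialEq, PartialOrd)]
struct FiniteF64(f64);

impl Eq for FiniteF64 {}

impl Ord for FiniteF64 {
    fn cmp(&self, other: &Self) -> Ordering {
        // Only sound because we never construct this from NaN.
        self.partial_cmp(other).expect("FiniteF64 must not hold NaN")
    }
}

fn main() {
    let mut xs = vec![FiniteF64(2.5), FiniteF64(-1.0), FiniteF64(0.0)];
    xs.sort(); // allowed now, because FiniteF64: Ord
    assert_eq!(xs[0], FiniteF64(-1.0));
}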

(Cool story: for the longest time I had this code using f32s. I’m used to translating int to “32 bits”, and apparently that instinct kicked in for floats as well, even floats spelled double.)

The only other sorting adventure was this:

// Due to overlapping edges the resultEvents array can be not wholly sorted
bool sorted = false;
while (!sorted) {
    sorted = true;
    for (unsigned int i = 0; i < resultEvents.size (); ++i) {
        if (i + 1 < resultEvents.size () && sec (resultEvents[i], resultEvents[i+1])) {
            std::swap (resultEvents[i], resultEvents[i+1]);
            sorted = false;
        }
    }
}

(I originally misread this comment as saying “the array cannot be wholly sorted” and had no idea why that would be the case, or why the author would then immediately attempt to bubble sort it.)

I’m still not sure why this uses an ad-hoc sort instead of std::sort. But I’m used to taking for granted that general-purpose sorting implementations are tuned to work well for almost-sorted data, like Python’s. Maybe C++ is untrustworthy here, for some reason. I replaced it with a call to .sort() and all seemed fine.

Phew! We’re getting there. Finally, my code appears to type-check.

But now I see storm clouds gathering on the horizon.

Ownership hell

I have a problem. I somehow run into this problem every single time I use Rust. The solutions are never especially satisfying, and all the hacks I might use if forced to write C++ turn out to be unsound, which is even more annoying because rustc is just sitting there with this smug "I told you so" expression and—

The problem is ownership, which Rust is fundamentally built on. Any given value must have exactly one owner, and Rust must be able to statically convince itself that:

  1. No reference to a value outlives that value.
  2. If a mutable reference to a value exists, no other references to that value exist at the same time.

This is the core of Rust. It guarantees at compile time that you cannot lose pointers to allocated memory, you cannot double-free, you cannot have dangling pointers.

It also completely thwarts a lot of approaches you might be inclined to take if you come from managed languages (where who cares, the GC will take care of it) or C++ (where you just throw pointers everywhere and hope for the best apparently).

For example, pointer loops are impossible. Rust’s understanding of ownership and lifetimes is hierarchical, and it simply cannot express loops. (Rust’s own doubly-linked list type uses raw pointers and unsafe code under the hood, where “unsafe” is an escape hatch for the usual ownership rules. Since I only recently realized that pointers to the inside of a mutable Vec are a bad idea, I figure I should probably not be writing unsafe code myself.)

This throws a few wrenches in the works.

Problem the first: pointer loops

I immediately ran into trouble with the SweepEvent struct itself. A SweepEvent pulls double duty: it represents one endpoint of a segment, but each left endpoint also handles bookkeeping for the segment itself — which means that most of the fields on a right endpoint are unused. Also, and more importantly, each SweepEvent has a pointer to the corresponding SweepEvent at the other end of the same segment. So a pair of SweepEvents point to each other.

Rust frowns upon this. In retrospect, I think I could’ve kept it working, but I also think I’m wrong about that.

My first step was to wrench SweepEvent apart. I moved all of the segment-stuff (which is virtually all of it) into a single SweepSegment type, and then populated the event queue with a SweepEndpoint tuple struct, similar to:

enum SegmentEnd {
    Left,
    Right,
}

struct SweepEndpoint<'a>(&'a SweepSegment, SegmentEnd);

This makes SweepEndpoint essentially a tuple with a name. The 'a is a lifetime and says, more or less, that a SweepEndpoint cannot outlive the SweepSegment it references. Makes sense.

Problem solved! I no longer have mutually referential pointers. But I do still have pointers (well, references), and they have to point to something.

Problem the second: where’s all the data

Which brings me to the problem I always run into with Rust. I have a bucket of things, and I need to refer to some of them multiple times.

I tried half a dozen different approaches here and don’t clearly remember all of them, but I think my core problem went as follows. I translated the C++ class to a Rust struct with some methods hanging off of it. A simplified version might look like this.

struct Algorithm {
    arena: LinkedList<SweepSegment>,
    event_queue: BinaryHeap<SweepEndpoint>,
}

Ah, hang on — SweepEndpoint needs to be annotated with a lifetime, so Rust can enforce that those endpoints don’t live longer than the segments they refer to. No problem?

struct Algorithm<'a> {
    arena: LinkedList<SweepSegment>,
    event_queue: BinaryHeap<SweepEndpoint<'a>>,
}

Okay! Now for some methods.

fn run(&mut self) {
    self.arena.push_back(SweepSegment{ data: 5 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    for event in &self.event_queue {
        println!("{:?}", event)
    }
}

Aaand… this doesn’t work. Rust “cannot infer an appropriate lifetime for autoref due to conflicting requirements”. The trouble is that self.arena.back() takes a reference to self.arena, and then I put that reference in the event queue. But I promised that everything in the event queue has lifetime 'a, and I don’t actually know how long self lives here; I only know that it can’t outlive 'a, because that would invalidate the references it holds.

A little random guessing led me to change &mut self to &'a mut self — which is fine because the entire impl block this lives in is already parameterized by 'a — and that makes this compile! Hooray! I think that’s because I’m now saying that self has exactly the same lifetime as the references it holds onto, which is true, since it’s referring to itself.

Let’s get a little more ambitious and try having two segments.

fn run(&'a mut self) {
    self.arena.push_back(SweepSegment{ data: 5 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    self.arena.push_back(SweepSegment{ data: 17 });
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Left));
    self.event_queue.push(SweepEndpoint(self.arena.back().unwrap(), SegmentEnd::Right));
    for event in &self.event_queue {
        println!("{:?}", event)
    }
}

Whoops! Rust complains that I’m trying to mutate self.arena while other stuff is referring to it. And, yes, that’s true — I have references to it in the event queue, and Rust is preventing me from potentially deleting everything from the queue when references to it still exist. I’m not actually deleting anything here, of course (though I could be if this were a Vec!), but Rust’s type system can’t encode that (and I dread the thought of a type system that can).

I struggled with this for a while, and rapidly encountered another complete showstopper:

fn run(&'a mut self) {
    self.mutate_something();
    self.mutate_something();
}

fn mutate_something(&'a mut self) {}

Rust objects that I’m trying to borrow self mutably, twice — once for the first call, once for the second.

But why? A borrow is supposed to end automatically once it’s no longer used, right? Maybe if I throw some braces around it for scope… nope, that doesn’t help either.

It’s true that borrows usually end automatically, but here I have explicitly told Rust that mutate_something() should borrow with the lifetime 'a, which is the same as the lifetime in run(). So the first call explicitly borrows self for at least the rest of the method. Removing the lifetime from mutate_something() does fix this error, but if that method tries to add new segments, I’m back to the original problem.

Oh no. The mutation in the C++ code is several calls deep. Porting it directly seems nearly impossible.

The typical solution here — at least, the first thing people suggest to me on Twitter — is to wrap basically everything everywhere in Rc<RefCell<T>>, which gives you something that’s reference-counted (avoiding questions of ownership) and defers borrow checks until runtime (avoiding questions of mutable borrows). But that seems pretty heavy-handed here — not only does RefCell add .borrow() noise anywhere you actually want to interact with the underlying value, but do I really need to refcount these tiny structs that only hold a handful of floats each?
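
To be concrete, here’s roughly what that would look like, with a stripped-down stand-in for the segment type (a sketch of the approach I was avoiding, not code from the port):

use std::cell::RefCell;
use std::rc::Rc;

#[derive(Debug)]
struct MapPoint { x: f64, y: f64 }

#[derive(Debug)]
struct SweepSegment { left_point: MapPoint, right_point: MapPoint }

fn main() {
    let seg = Rc::new(RefCell::new(SweepSegment {
        left_point: MapPoint { x: 0.0, y: 0.0 },
        right_point: MapPoint { x: 4.0, y: 1.0 },
    }));
    // Cloning the Rc sidesteps the ownership question entirely...
    let also_seg = Rc::clone(&seg);
    // ...but every single touch of the underlying value pays the borrow() tax.
    also_seg.borrow_mut().right_point.x = 5.0;
    println!("{:?}", seg.borrow().right_point);
}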

I set out to find a middle ground.

Solution, kind of

I really, really didn’t want to perform serious surgery on this code just to get it to build. I still didn’t know if it worked at all, and now I had to rearrange it without being able to check if I was breaking it further. (This isn’t Rust’s fault; it’s a natural problem with porting between fairly different paradigms.)

So I kind of hacked it into working with minimal changes, producing a grotesque abomination which I’m ashamed to link to. Here’s how!

First, I got rid of the class. It turns out this makes lifetime juggling much easier right off the bat. I’m pretty sure Rust considers everything in a struct to be destroyed simultaneously (though in practice it guarantees it’ll destroy fields in order), which doesn’t leave much wiggle room. Locals within a function, on the other hand, can each have their own distinct lifetimes, which solves the problem of expressing that the borrows won’t outlive the arena.

Speaking of the arena, I solved the mutability problem there by switching to… an arena! The typed-arena crate (a port of a type used within Rust itself, I think) is an allocator — you give it a value, and it gives you back a reference, and the reference is guaranteed to be valid for as long as the arena exists. The method that does this is sneaky and takes &self rather than &mut self, so Rust doesn’t know you’re mutating the arena and won’t complain. (One drawback is that the arena will never free anything you give to it, but that’s not a big problem here.)
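
A minimal sketch of how that looks, assuming the typed-arena crate as a dependency and a trivial stand-in segment type:

use typed_arena::Arena;

#[derive(Debug)]
struct SweepSegment { data: u32 }

fn main() {
    let arena: Arena<SweepSegment> = Arena::new();
    // alloc() takes &self, so handing out new segments doesn't count as
    // mutating the arena, and the references live as long as the arena does.
    let first = arena.alloc(SweepSegment { data: 5 });
    let second = arena.alloc(SweepSegment { data: 17 });
    println!("{:?} {:?}", first, second);
}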


My next problem was with mutation. The main loop repeatedly calls possibleIntersection with pairs of segments, which can split either or both segments. Rust definitely doesn’t like that — I’d have to pass in two &muts, both of which are mutable references into the same arena, and I’d have a bunch of immutable references into that arena in the sweep list and elsewhere. This isn’t going to fly.

This is kind of a shame, and is one place where Rust seems a little overzealous. Something like this seems like it ought to be perfectly valid:

let mut v = vec![1u32, 2u32];
let a = &mut v[0];
let b = &mut v[1];
// do stuff with a, b

The trouble is, Rust only knows the type signature, which here is something like index_mut(&'a mut self, index: usize) -> &'a mut T. Nothing about that says that you’re borrowing distinct elements rather than some core part of the type — and, in fact, the above code is only safe because you’re borrowing distinct elements. In the general case, Rust can’t possibly know that. It seems obvious enough from the different indexes, but nothing about the type system even says that different indexes have to return different values. And what if one were borrowed as &mut v[1] and the other were borrowed with v.iter_mut().next().unwrap()?
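
(For the Vec case specifically, the blessed escape hatch is split_at_mut, which encodes the disjointness in the types rather than asking Rust to infer it. It doesn’t help with my arena problem, but for the record:)

fn main() {
    let mut v = vec![1u32, 2u32];
    // Two disjoint mutable slices, so the compiler knows a and b can't alias.
    let (left, right) = v.split_at_mut(1);
    let a = &mut left[0];
    let b = &mut right[0];
    *a += 10;
    *b += 20;
    println!("{:?}", v); // [11, 22]
}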

Anyway, this is exactly where people start to turn to RefCell — if you’re very sure you know better than Rust, then a RefCell will skirt the borrow checker while still enforcing at runtime that you don’t have more than one mutable borrow at a time.

But half the lines in this algorithm examine the endpoints of a segment! I don’t want to wrap the whole thing in a RefCell, or I’ll have to say this everywhere:

if segment1.borrow().point.x < segment2.borrow().point.x { ... }

Gross.

But wait — this code only mutates the points themselves in one place. When a segment is split, the original segment becomes the left half, and a new segment is created to be the right half. There’s no compelling need for this; it saves an allocation for the left half, but it’s not critical to the algorithm.

Thus, I settled on a compromise. My segment type now looks like this:

struct SegmentPacket {
    // a bunch of flags and whatnot used in the algorithm
}
struct SweepSegment {
    left_point: MapPoint,
    right_point: MapPoint,
    faces_outwards: bool,
    index: usize,
    order: usize,
    packet: RefCell<SegmentPacket>,
}

I do still need to call .borrow() or .borrow_mut() to get at the stuff in the “packet”, but that’s far less common, so there’s less noise overall. And I don’t need to wrap it in Rc because it’s part of a type that’s allocated in the arena and passed around only via references.
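
Here’s a condensed sketch of how that plays out in use (field names trimmed down; not the port’s actual code):

use std::cell::RefCell;

struct MapPoint { x: f64, y: f64 }

#[derive(Debug, Default)]
struct SegmentPacket { in_result: bool }

struct SweepSegment {
    left_point: MapPoint,
    right_point: MapPoint,
    packet: RefCell<SegmentPacket>,
}

fn main() {
    let segment = SweepSegment {
        left_point: MapPoint { x: 0.0, y: 0.0 },
        right_point: MapPoint { x: 4.0, y: 1.0 },
        packet: RefCell::new(SegmentPacket::default()),
    };
    // The geometry is plain immutable data, so comparisons stay clean...
    if segment.left_point.x < segment.right_point.x {
        // ...and only the occasional bookkeeping write pays the RefCell tax.
        segment.packet.borrow_mut().in_result = true;
    }
    println!("{:?}", segment.packet.borrow());
}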


This still leaves me with the problem of how to actually perform the splits.

I’m not especially happy with what I came up with, I don’t know if I can defend it, and I suspect I could do much better. I changed possibleIntersection so that rather than performing splits, it returns the points at which each segment needs splitting, in the form (usize, Option<MapPoint>, Option<MapPoint>). (The usize is used as a flag for calling code and oughta be an enum, but isn’t yet.)

Now the top-level function is responsible for all arena management, and all is well.

Except, er. possibleIntersection is called multiple times, and I don’t want to copy-paste a dozen lines of split code after each call. I tried putting just that code in its own function, which had the world’s most godawful signature, and that didn’t work because… uh… hm. I can’t remember why, exactly! Should’ve written that down.

I tried a local closure next, but closures capture their environment by reference, so now I had references to a bunch of locals for as long as the closure existed, which meant I couldn’t mutate those locals. Argh. (This seems a little silly to me, since the closure’s references cannot possibly be used for anything if the closure isn’t being called, but maybe I’m missing something. Or maybe this is just a limitation of lifetimes.)
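
(A stripped-down illustration of the closure problem, far removed from the actual code:)

fn main() {
    let mut count = 0;
    // The closure captures `count` by mutable reference...
    let mut bump = || count += 1;
    // count += 10; // ...so this line is rejected while `bump` is still live.
    bump();
    bump();
    println!("{}", count); // fine here: the closure is no longer used
}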

Increasingly desperate, I tried using a macro. But… macros are hygienic, which means that any new name you use inside a macro is different from any name outside that macro. The macro thus could not see any of my locals. Usually that’s good, but here I explicitly wanted the macro to mess with my locals.

I was just about to give up and go live as a hermit in a cabin in the woods, when I discovered something quite incredible. You can define local macros! If you define a macro inside a function, then it can see any locals defined earlier in that function. Perfect!

macro_rules! _split_segment (
    ($seg:expr, $pt:expr) => (
        {
            let pt = $pt;
            let seg = $seg;
            // ... waaay too much code ...
        }
    );
);

loop {
    // ...
    // This is possibleIntersection, renamed because Rust rightfully complains about camelCase
    let cross = handle_intersections(Some(segment), maybe_above);
    if let Some(pt) = cross.1 {
        segment = _split_segment!(segment, pt);
    }
    if let Some(pt) = cross.2 {
        maybe_above = Some(_split_segment!(maybe_above.unwrap(), pt));
    }
    // ...
}

(This doesn’t actually quite match the original algorithm, which has one case where a segment can be split twice. I realized that I could just do the left-most split, and a later iteration would perform the other split. I sure hope that’s right, anyway.)

It’s a bit ugly, and I ran into a whole lot of implicit behavior from the C++ code that I had to fix — for example, the segment is sometimes mutated just before it’s split, purely as a shortcut for mutating the left part of the split. But it finally compiles! And runs! And kinda worked, a bit!

Aftermath

I still had a lot of work to do.

For one, this code was designed for intersecting two shapes, not mass-intersecting a big pile of shapes. The basic algorithm doesn’t care about how many polygons you start with — all it sees is segments — but the code for constructing the return value needed some heavy modification.

The biggest change by far? The original code traced each segment once, expecting the result to be only a single shape. I had to change that to trace each side of each segment once, since the vast bulk of the output consists of shapes which share a side. This violated a few assumptions, which I had to hack around.

I also ran into a couple very bad edge cases, spent ages debugging them, then found out that the original algorithm had a subtle workaround that I’d commented out because it was awkward to port but didn’t seem to do anything. Whoops!

The worst was a precision error, where a vertical line could be split on a point not quite actually on the line, which wreaked all kinds of havoc. I worked around that with some tasteful rounding, which is highly dubious but makes the output more appealing to my squishy human brain. (I might switch to the original workaround, but I really dislike that even simple cases can spit out points at 1500.0000000000003. The whole thing is parameterized over the coordinate type, so maybe I could throw a rational type in there and cross my fingers?)
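
(The “tasteful rounding” amounts to something like this hypothetical helper; the exact precision is a judgment call, and it’s emphatically not a principled fix:)

fn snap(value: f64) -> f64 {
    // Snap to six decimal places so near-misses like 1500.0000000000003
    // collapse back onto the coordinates they were supposed to be.
    (value * 1e6).round() / 1e6
}

fn main() {
    println!("{}", snap(1500.0000000000003)); // 1500
}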

All that done, I finally, finally, after a couple months of intermittent progress, got what I wanted!

This is Doom 2’s MAP01. The black area to the left of center is where the player starts. Gray areas indicate where the player can walk from there, with lighter shades indicating more distant areas, where “distance” is measured by the minimum number of line crossings. Red areas can’t be reached at all.

(Note: large playable chunks of the map, including the exit room, are red. That’s because those areas are behind doors, and this code doesn’t understand doors yet.)

(Also note: The big crescent in the lower-right is also black because I was lazy and looked for the player’s starting sector by checking the bbox, and that sector’s bbox happens to match.)

The code that generated this had to go out of its way to delete all the unreachable zones around solid walls. I think I could modify the algorithm to do that on the fly pretty easily, which would probably speed it up a bit too. Downside is that the algorithm would then be pretty specifically tied to this problem, and not usable for any other kind of polygon intersection, which I would think could come up elsewhere? The modifications would be pretty minor, though, so maybe I could confine them to a closure or something.

Some final observations

It runs surprisingly slowly. Like, multiple seconds. Unless I add --release, which speeds it up by a factor of… some number with multiple digits. Wahoo. Debug mode has a high price, especially with a lot of calls in play.

The current state of this code is on GitHub. Please don’t look at it. I’m very sorry.

Honestly, most of my anguish came not from Rust, but from the original code relying on lots of fairly subtle behavior without bothering to explain what it was doing or even hint that anything unusual was going on. God, I hate C++.

I don’t know if the Rust community can learn from this. I don’t know if I even learned from this. Let’s all just quietly forget about it.

Now I just need to figure this one out…

DomTerm 1.0 released

Post Syndicated from ris original https://lwn.net/Articles/750319/rss

Per Bothner has released DomTerm 1.0. Since DomTerm was covered here in January 2016, many features have been added or enhanced. (See this article on opensource.com.) DomTerm is a mostly-xterm-compatible terminal emulator, but the output can be graphics, rich text, and other HTML, so it is suitable as a REPL for a program like gnuplot. Other major features include screen/tmux-style tiling and detachable sessions, readline-style input editing (integrated with mouse and clipboard), and opening an editor when clicking an error message.