
[$] ProofMode: a camera app for verifiable photography

Post Syndicated from corbet original https://lwn.net/Articles/726142/rss

The default apps on a mobile platform like Android are familiar targets for
replacement, especially for developers concerned about security. But while
messaging and voice apps (which can be replaced by Signal and Ostel, for
instance) may be the best known examples, the non-profit Guardian Project has taken up the
cause of improving the security features of the camera app. Its latest
such project is ProofMode, an app
to let users take photos and videos that can be verified as authentic by
third parties.

Sci-Hub Ordered to Pay $15 Million in Piracy Damages

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-ordered-to-pay-15-million-in-piracy-damages-170623/

Two years ago, academic publisher Elsevier filed a complaint against Sci-Hub and several related “pirate” sites.

It accused the websites of making academic papers widely available to the public, without permission.

While Sci-Hub is nothing like the average pirate site, it is just as illegal according to Elsevier’s legal team, who obtained a preliminary injunction from a New York District Court last fall.

The injunction ordered Sci-Hub’s founder Alexandra Elbakyan to quit offering access to any Elsevier content. However, this didn’t happen.

Instead of taking Sci-Hub down, the lawsuit achieved the opposite. Sci-Hub grew bigger and bigger, to the point where its users were downloading hundreds of thousands of papers per day.

Although Elbakyan sent a letter to the court earlier, she opted not to engage in the US lawsuit any further. The same is true for her fellow defendants, who are associated with Libgen. As a result, Elsevier asked the court for a default judgment and a permanent injunction, which were issued this week.

Following a hearing on Wednesday, the Court awarded Elsevier $15,000,000 in damages, the maximum statutory amount for the 100 copyrighted works that were listed in the complaint. In addition, the injunction, through which Sci-Hub and LibGen lost several domain names, was made permanent.

Sci-Hub founder Alexandra Elbakyan says that even if she wanted to pay the millions of dollars in damages, she doesn’t have the money to do so.

“The money project received and spent in about six years of its operation do not add up to 15 million,” Elbakyan tells TorrentFreak.

“More interesting, Elsevier says: the Sci-Hub activity ‘causes irreparable injury to Elsevier, its customers and the public’ and US court agreed. That feels like a perfect crime. If you want to cause an irreparable injury to American public, what do you have to do? Now we know the answer: establish a website where they can read research articles for free,” she adds.

Elbakyan previously confirmed to us that, lawsuit or not, the site is not going anywhere.

“The Sci-Hub will continue as usual. In case of problems with the domain names, users can rely on TOR scihub22266oqcxt.onion,” Elbakyan added.

Sci-Hub is regularly referred to as the “Pirate Bay for science,” and based on the site’s resilience and its response to legal threats, it can certainly live up to this claim.

The Association of American Publishers (AAP) is happy with the outcome of the case.

“As the final judgment shows, the Court has not mistaken illegal activity for a public good,” AAP President and CEO Maria A. Pallante says.

“On the contrary, it has recognized the defendants’ operation for the flagrant and sweeping infringement that it really is and affirmed the critical role of copyright law in furthering scientific research and the public interest.”

Matt McKay, a spokesperson for the International Association of Scientific, Technical and Medical Publishers (STM) in Oxford went even further, telling Nature that the site doesn’t offer any value to the scientific community.

“Sci-Hub does not add any value to the scholarly community. It neither fosters scientific advancement nor does it value researchers’ achievements. It is simply a place for someone to go to download stolen content and then leave.”

Hundreds of thousands of academics, who regularly use the site to download papers, might contest this though.

With no real prospect of recouping the damages and an ever-resilient Elbakyan, Elsevier’s legal battle could just be a win on paper. Sci-Hub and Libgen are not going anywhere, it seems, and the lawsuit has made them more popular than ever before.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Kim Dotcom Opposes US’s “Fugitive” Claims at Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-opposes-uss-fugitive-claims-supreme-court-170622/

When Megaupload and Kim Dotcom were raided five years ago, the authorities seized millions of dollars in cash and other property.

The US government claimed the assets were obtained through copyright crimes, so it went after the bank accounts, cars, and other seized possessions of the Megaupload defendants.

Kim Dotcom and his colleagues were branded as “fugitives” and the Government won its case. Dotcom’s legal team quickly appealed this verdict, but lost once more at the Fourth Circuit appeals court.

A few weeks ago Dotcom and his former colleagues petitioned the Supreme Court to take on the case.

They don’t see themselves as “fugitives” and want the assets returned. The US Government opposed the request, but according to a new reply filed by Megaupload’s legal team, the US Government ignores critical questions.

The Government has a “vested financial stake” in maintaining the current situation, they write, which allows the authorities to use their “fugitive” claims as an offensive weapon.

“Far from being directed towards persons who have fled or avoided our country while claiming assets in it, fugitive disentitlement is being used offensively to strip foreigners of their assets abroad,” the reply brief (pdf) reads.

According to Dotcom’s lawyers, there are several conflicting opinions from lower courts, which should be clarified by the Supreme Court. That Dotcom and his colleagues have decided to fight their extradition in New Zealand doesn’t warrant the seizure of their assets.

“Absent review, forfeiture of tens of millions of dollars will be a fait accompli without the merits being reached,” they write, adding that this is all the more concerning because the US Government’s criminal case may not be as strong as claimed.

“This is especially disconcerting because the Government’s criminal case is so dubious. When the Government characterizes Petitioners as ‘designing and profiting from a system that facilitated wide-scale copyright infringement,’ it continues to paint a portrait of secondary copyright infringement, which is not a crime.”

The defense team cites several issues that warrant review and urges the Supreme Court to hear the case. If it declines, the Government will effectively be able to use asset seizures as a pressure tool to push foreign defendants into coming to the US.

“If this stands, the Government can weaponize fugitive disentitlement in order to claim assets abroad,” the reply brief reads.

“It is time for the Court to speak to the Questions Presented. Over the past two decades it has never had a better vehicle to do so, nor is any such vehicle elsewhere in sight,” Dotcom’s lawyers add.

Whether the Supreme Court accepts or denies the case will likely be decided in the weeks to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

NSA Insider Security Post-Snowden

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_insider_sec.html

According to a recently declassified report obtained under FOIA, the NSA’s attempts to protect itself against insider attacks aren’t going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department’s inspector general completed in 2016.

[…]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of “privileged” users, who have greater power to access the N.S.A.’s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative — called “Secure the Net” by the N.S.A. — had some successes, it “did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data.”

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 “Secure the Net” initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA’s computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD’s inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), “NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users.” The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, “NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach.”

There seem to be two possible explanations for the fact that the NSA couldn’t track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA’s crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

Three Men Sentenced Following £2.5m Internet Piracy Case

Post Syndicated from Andy original https://torrentfreak.com/three-men-sentenced-following-2-5m-internet-piracy-case-170622/

While legal action against low-level individual file-sharers is extremely rare in the UK, the country continues to pose a risk for those engaged in larger-scale infringement.

That is largely due to the activities of the Police Intellectual Property Crime Unit and private anti-piracy outfits such as the Federation Against Copyright Theft (FACT). Investigations are often a joint effort that can take many years to complete, and the outcome can involve criminal sentences.

That was the profile of another Internet piracy case that concluded in London this week. It involved three men from the UK, Eric Brooks, 43, from Bolton, Mark Valentine, 44, from Manchester, and Craig Lloyd, 33, from Wolverhampton.

The case began when FACT became aware of potentially infringing activity back in February 2011. The anti-piracy group then investigated for more than a year before handing the case to police in March 2012.

On July 4, 2012, officers from City of London Police arrested Eric Brooks at his home in Bolton following a joint raid with FACT. Computer equipment was seized containing evidence that Brooks had been running a Netherlands-based server hosting more than £100,000 worth of pirated films, music, games, software and ebooks.

According to police, a spreadsheet on Brooks’ computer revealed he had hundreds of paying customers, all recruited from online forums. Each paid for access to the server via PayPal or bank transfer. Police mentioned no group or site names in the information released this week.

“Enquiries with PayPal later revealed that [Brooks] had made in excess of £500,000 in the last eight years from his criminal business and had in turn defrauded the film and TV industry alone of more than £2.5 million,” police said.

“As his criminal enterprise affected not only the film and TV but the wider entertainment industry including music, games, books and software it is thought that he cost the wider industry an amount much higher than £2.5 million.”

On the same day police arrested Brooks, Mark Valentine’s home in Manchester had a similar unwelcome visit. A day later, Craig Lloyd’s home in Wolverhampton became the third target for police.

Computer equipment was seized from both addresses which revealed that the pair had been paying for access to Brooks’ servers in order to service their own customers.

“They too had used PayPal as a means of taking payment and had earned thousands of pounds from their criminal actions; Valentine gaining £34,000 and Lloyd making over £70,000,” police revealed.

But after raiding the trio in 2012, it took more than four years to charge the men. In a feature common to many FACT cases, all three were charged with Conspiracy to Defraud rather than copyright infringement offenses. All three men pleaded guilty before trial.

On Monday, the men were sentenced at Inner London Crown Court. Brooks was sentenced to 24 months in prison, suspended for 12 months and ordered to complete 140 hours of unpaid work.

Valentine and Lloyd were each given 18 months in prison, suspended for 12 months. Each was ordered to complete 80 hours of unpaid work.

Detective Constable Chris Glover, who led the investigation for the City of London Police, welcomed the sentencing.

“The success of this investigation is a result of co-ordinated joint working between the City of London Police and FACT. Brooks, Valentine and Lloyd all thought that they were operating under the radar and doing something which they thought was beyond the controls of law enforcement,” Glover said.

“Brooks, Valentine and Lloyd will now have time in prison to reflect on their actions and the result should act as deterrent for anyone else who is enticed by abusing the internet to the detriment of the entertainment industry.”

While even suspended sentences are a serious matter, none of the men will see the inside of a cell if they meet the conditions of their sentence for the next 12 months. For a case lasting four years involving such large sums of money, that is probably a disappointing result for FACT and the police.

Nevertheless, the men won’t be allowed to enjoy the financial proceeds of their piracy, if indeed any money is left. City of London Police say the trio will be subject to a future confiscation hearing to seize any proceeds of crime.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Is Continuing to Patch Windows XP a Mistake?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/is_continuing_t.html

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it’s not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing — stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP’s support life cycle for another few years and waited for attrition to shrink Windows XP’s userbase to irrelevant levels. Or it could have claimed that this case is somehow “special,” releasing a patch while still claiming that Windows XP isn’t supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft’s assertion that Windows XP isn’t supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It’s hard to say how that was possibly worth it.

This is a hard trade-off, and it’s going to get much worse with the Internet of Things. Here’s me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5-minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5-minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
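
If you manage WAF programmatically rather than through the console, the same rule can be created with the AWS SDK for .NET. Here is a minimal sketch, reusing the rule name from this walkthrough; the 2,000-request limit is an illustrative assumption, and the step that associates the string-match condition with the rule is omitted:

using Amazon.WAF;
using Amazon.WAF.Model;

var waf = new AmazonWAFClient();

// Every WAF mutation requires a fresh change token.
var token = waf.GetChangeTokenAsync(new GetChangeTokenRequest()).Result.ChangeToken;

// Create a rate-based rule keyed on the originating IP address.
var createResponse = waf.CreateRateBasedRuleAsync(new CreateRateBasedRuleRequest
{
    Name = "ProtectLogin",
    MetricName = "ProtectLogin",
    RateKey = RateKey.IP,   // requests are counted per source IP address
    RateLimit = 2000,       // illustrative threshold per 5-minute period
    ChangeToken = token
}).Result;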

Available Now
The new Rate-based Rules are available now and you can start using them today! Rate-based Rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

MPAA & RIAA Demand Tough Copyright Standards in NAFTA Negotiations

Post Syndicated from Andy original https://torrentfreak.com/mpaa-riaa-demand-tough-copyright-standards-in-nafta-negotiations-170621/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago. With a quarter of a century of developments to contend with, the United States wants to modernize.

“While our economy and U.S. businesses have changed considerably over that period, NAFTA has not,” the government says.

With this in mind, the US requested comments from interested parties seeking direction for negotiation points. With those comments now in, groups like the MPAA and RIAA have been making their positions known. It’s no surprise that intellectual property enforcement is high on the agenda.

“Copyright is the lifeblood of the U.S. motion picture and television industry. As such, MPAA places high priority on securing strong protection and enforcement disciplines in the intellectual property chapters of trade agreements,” the MPAA writes in its submission.

“Strong IPR protection and enforcement are critical trade priorities for the music industry. With IPR, we can create good jobs, make significant contributions to U.S. economic growth and security, invest in artists and their creativity, and drive technological innovation,” the RIAA notes.

While both groups have numerous demands, it’s clear that each seeks an environment where not only infringers but also Internet platforms and services can be held liable.

For the RIAA, there is a big focus on the so-called ‘Value Gap’, a phenomenon found on user-uploaded content sites like YouTube that are able to offer infringing content while avoiding liability due to Section 512 of the DMCA.

“Today, user-uploaded content services, which have developed sophisticated on-demand music platforms, use this as a shield to avoid licensing music on fair terms like other digital services, claiming they are not legally responsible for the music they distribute on their site,” the RIAA writes.

“Services such as Apple Music, TIDAL, Amazon, and Spotify are forced to compete with services that claim they are not liable for the music they distribute.”

But if sites like YouTube are exercising their rights while acting legally under current US law, how can partners Canada and Mexico do any better? For the RIAA, that can be achieved by holding them to standards envisioned by the group when the DMCA was passed, not how things have panned out since.

Demanding that negotiators “protect the original intent” of safe harbor, the RIAA asks that a “high-level and high-standard service provider liability provision” is pursued. This, the music group says, should only be available to “passive intermediaries without requisite knowledge of the infringement on their platforms, and inapplicable to services actively engaged in communicating to the public.”

In other words, make sure that YouTube and similar sites won’t enjoy the same level of safe harbor protection as they do today.

The RIAA also requires any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

In any event, NAFTA should not “support interpretations that no longer reflect today’s digital economy and threaten the future of legitimate and sustainable digital trade,” the RIAA states.

For the MPAA, Section 512 is also perceived as a problem. While noting that the original intent was to foster a system of shared responsibility between copyright owners and service providers, the MPAA says courts have subsequently let copyright holders down. Like the RIAA, the MPAA also suggests that Canada and Mexico can be held to higher standards.

“We recommend a new approach to this important trade policy provision by moving to high-level language that establishes intermediary liability and appropriate limitations on liability. This would be fully consistent with U.S. law and avoid the same misinterpretations by policymakers and courts overseas,” the MPAA writes.

“In so doing, a modernized NAFTA would be consistent with Trade Promotion Authority’s negotiating objective of ‘ensuring that standards of protection and enforcement keep pace with technological developments’.”

The MPAA also has some specific problems with Mexico, including unauthorized camcording. The Hollywood group says that 85 illicit audio and video recordings of films were linked to Mexican theaters in 2016. However, recording is not currently a criminal offense in Mexico.

Another issue for the MPAA is that criminal sanctions for commercial scale infringement are only available if the infringement is for profit.

“This has hampered enforcement against the above-discussed camcording problem but also against online infringement, such as peer-to-peer piracy, that may be on a scale that is immensely harmful to U.S. rightsholders but nonetheless occur without profit by the infringer,” the MPAA writes.

“The modernized NAFTA like other U.S. bilateral free trade agreements must provide for criminal sanctions against commercial scale infringements without proof of profit motive.”

Also of interest are the MPAA’s complaints against Mexico’s telecoms laws. Unlike in the US and many countries in Europe, Mexico’s ISPs are forbidden to hand out their customers’ personal details to rights holders looking to sue. This, the MPAA says, needs to change.

The submissions from the RIAA and MPAA can be found here and here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Is your product “Powered by Raspberry Pi”?

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/powered-by-raspberry-pi/

One of the most exciting things for us about the growth of the Raspberry Pi community has been the number of companies that have grown up around the platform, and who have chosen to embed our products into their own. While many of these design-ins have been “silent”, a number of people have asked us for a standardised way to indicate that a product contains a Raspberry Pi or a Raspberry Pi Compute Module.

Powered by Raspberry Pi Logo

At the end of last year, we introduced a “Powered by Raspberry Pi” logo to meet this need. It is now included in our trademark rules and brand guidelines, which you can find on our website. Below we’re showing an early example of a “Powered by Raspberry Pi”-branded device, the KUNBUS Revolution Pi industrial PC. It has already made it onto the market, and we think it will inspire you to include our logo on the packaging of your own product.

KUNBUS RevPi
Powered by Raspberry Pi logo on RevPi

Using the “Powered by Raspberry Pi” brand

Adding the “Powered by Raspberry Pi” logo to your packaging design is a great way to remind your customers that a portion of the sale price of your product goes to the Raspberry Pi Foundation and supports our educational work.

As with all things Raspberry Pi, our rules for using this brand are fairly straightforward: the only thing you need to do is to fill out this simple application form. Once you have submitted it, we will review your details and get back to you as soon as possible.

When we approve your application, we will require that you use one of the official “Powered by Raspberry Pi” logos and that you ensure it is at least 30 mm wide. We are more than happy to help you if you have any design queries related to this – just contact us at [email protected]

The post Is your product “Powered by Raspberry Pi”? appeared first on Raspberry Pi.

Court Grants Subpoenas to Unmask ‘TVAddons’ and ‘ZemTV’ Operators

Post Syndicated from Ernesto original https://torrentfreak.com/court-grants-subpoenas-to-unmask-tvaddons-and-zemtv-operators-170621/

Earlier this month we broke the news that third-party Kodi add-on ZemTV and the TVAddons library were being sued in a federal court in Texas.

In a complaint filed by American satellite and broadcast provider Dish Network, both stand accused of copyright infringement, facing up to $150,000 for each offense.

While the allegations are serious, Dish doesn’t know the full identities of the defendants.

To find out more, the company requested a broad range of subpoenas from the court, targeting Amazon, Github, Google, Twitter, Facebook, PayPal, and several hosting providers.

From Dish’s request

This week the court granted the subpoenas, which means that they can be forwarded to the companies in question. Whether that will be enough to identify the people behind ‘TVAddons’ and ‘ZemTV’ remains to be seen, but Dish has cast its net wide.

For example, the subpoena directed at Google covers any type of information that can be used to identify the account holder of [email protected], which is believed to be tied to ZemTV.

The information requested from Google includes IP address logs with session date and timestamps, but also covers “all communications,” including GChat messages from 2014 onwards.

Similarly, Twitter is required to hand over information tied to the accounts of the users “TV Addons” and “shani_08_kodi” as well as other accounts linked to tvaddons.ag and streamingboxes.com. This also applies to the various tweets that were sent through these accounts.

The subpoena specifically mentions “all communications, including ‘tweets’, Twitter sent to or received from each Twitter Account during the time period of February 1, 2014 to present.”

From the Twitter subpoena

Similar subpoenas were granted for the other services, tailored towards the information Dish hopes to find there. For example, the broadcast provider also requests details of each transaction from PayPal, as well as all debits and credits to the accounts.

In some parts, the subpoenas appear to be quite broad. PayPal is asked to reveal information on any account with the credit card statement “Shani,” for example. Similarly, Github is required to hand over information on accounts that are ‘associated’ with the tvaddons.ag domain, which is referenced by many people who are not directly connected to the site.

The service providers in question still have the option to challenge the subpoenas or ask the court for further clarification. A full overview of all the subpoena requests is available here (Exhibit 2 and onwards), including all the relevant details. This also includes several letters to foreign hosting providers.

While Dish still appears to be keen to find out who is behind ‘TVAddons’ and ‘ZemTV,’ not much has been heard from the defendants in question.

ZemTV developer “Shani” shut down his addon soon after the lawsuit was announced, without mentioning it specifically. TVAddons, meanwhile, has been offline for well over a week, without any public notice about the reason for the prolonged downtime.

The court’s order granting the subpoenas and letters of request is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

In the Works – AWS Region in Hong Kong

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-hong-kong/

Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris), China (Ningxia), and Sweden (Stockholm).

Coming to Hong Kong in 2018
Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Hong Kong, in 2018. Hong Kong is a leading international financial center, well known for its service oriented economy. It is rated highly on innovation and for ease of doing business. As an evangelist, I get to visit many great cities in the world, and was lucky to have spent some time in Hong Kong back in 2014 and met a number of awesome customers there. Many of these customers have given us feedback that they wanted a local AWS Region.

This will be the eighth AWS Region in Asia Pacific joining six other Regions there — Singapore, Tokyo, Sydney, Beijing, Seoul, and Mumbai, and an additional Region in China (Ningxia) expected to launch in the coming months. Together, these Regions will provide our customers with a total of 19 Availability Zones (AZs) and allow them to architect highly fault tolerant applications.

Today, our infrastructure comprises 43 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018 (see the AWS Global Infrastructure page for more info).

We are looking forward to serving new and existing customers in Hong Kong and working with partners across Asia-Pacific. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Hong Kong. Public sector organizations such as government agencies, educational institutions, and nonprofits in Hong Kong will be able to use this region to store sensitive data locally (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).

If you are a customer or a partner and have specific questions about this Region, you can contact our Hong Kong team.

Help Wanted
If you are interested in learning more about AWS positions in Hong Kong, please visit the Amazon Jobs site and set the location to Hong Kong.

Jeff;

 

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order will be persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. The order would then be lost, with no recourse to restore it.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This still wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ) for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue \
    --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
    --statistic Average --period 300 --threshold 3 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --dimensions Name=QueueName,Value=customer-orders \
    --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue \
    --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
    --statistic Average --period 300 --threshold 1 \
    --comparison-operator LessThanOrEqualToThreshold \
    --dimensions Name=QueueName,Value=customer-orders \
    --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
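
If a transaction legitimately needs more time than expected, the consumer can also extend the timeout of an in-flight message rather than letting it reappear. A minimal sketch, assuming the sqs client, _queueUrl, and message variables from the surrounding code:

// Extend the visibility timeout of an in-flight message, e.g. from a
// periodic heartbeat while a long-running order is still being processed.
var changeVisibilityRequest = new ChangeMessageVisibilityRequest
{
    QueueUrl = _queueUrl,
    ReceiptHandle = message.ReceiptHandle,
    VisibilityTimeout = 60   // seconds from now that the message stays hidden
};
sqs.ChangeMessageVisibilityAsync(changeVisibilityRequest);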

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a message log of all message attributes processed for a specified period of time, as an alternative to message deduplication on the receiving end (a minimal sketch follows this list). Verifying the existence of the message in the log before processing it adds additional computational overhead. This can be minimized through low-latency persistence solutions such as Amazon DynamoDB. Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.
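
As one possible shape for that message log, here is a minimal sketch using a conditional write to Amazon DynamoDB. The "ProcessedMessages" table, its MessageId key, and the TryRecordMessage helper are illustrative assumptions, not part of the original walkthrough:

using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

// Hypothetical helper: returns true the first time a message is seen.
// Assumes a "ProcessedMessages" DynamoDB table keyed on MessageId.
private static bool TryRecordMessage(Amazon.SQS.Model.Message message)
{
    using (var dynamoDb = new AmazonDynamoDBClient())
    {
        try
        {
            dynamoDb.PutItemAsync(new PutItemRequest
            {
                TableName = "ProcessedMessages",
                Item = new Dictionary<string, AttributeValue>
                {
                    { "MessageId", new AttributeValue { S = message.MessageId } }
                },
                // The conditional write fails if this id was already logged.
                ConditionExpression = "attribute_not_exists(MessageId)"
            }).Wait();
            return true;  // first delivery; safe to process
        }
        catch (AggregateException ex)
            when (ex.InnerException is ConditionalCheckFailedException)
        {
            return false; // duplicate delivery; skip processing
        }
    }
}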

Handling exceptions

Because of the distributed nature of SQS queues, SQS does not automatically delete a message after delivering it. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The rates at which orders are processed can be achieved by modifying the poll rates and scalability settings that I have already discussed.
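
One simple way to express the “priority first” rule in the consuming application is to drain the priority queue before falling back to the standard queue on each poll cycle. A minimal sketch, assuming hypothetical _priorityQueueUrl and _standardQueueUrl configuration values alongside the AmazonSQSClient (sqs) used earlier:

// Check the priority queue first; only fall back to the standard
// queue when no priority orders are waiting.
var request = new ReceiveMessageRequest
{
    QueueUrl = _priorityQueueUrl,   // hypothetical configuration value
    MaxNumberOfMessages = 10,
    WaitTimeSeconds = 1             // short wait so standard orders are not starved
};

var response = sqs.ReceiveMessageAsync(request).Result;

if (response.Messages.Count == 0)
{
    request.QueueUrl = _standardQueueUrl;  // hypothetical configuration value
    request.WaitTimeSeconds = 5;           // long poll when nothing urgent is queued
    response = sqs.ReceiveMessageAsync(request).Result;
}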

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols
  • Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
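
The change to the web application itself is small: it publishes the order to an SNS topic instead of sending it to a queue, and the topic’s subscriptions determine who receives a copy. A minimal sketch, assuming a hypothetical _topicArn configuration value and the same Newtonsoft.Json serialization used earlier:

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

// Publish the order once; SNS replicates it to every subscribed
// SQS queue and Lambda function (the "fanout").
using (var sns = new AmazonSimpleNotificationServiceClient())
{
    var publishRequest = new PublishRequest
    {
        TopicArn = _topicArn,   // hypothetical configuration value
        Subject = "NewOrder",
        Message = JsonConvert.SerializeObject(order)
    };
    sns.PublishAsync(publishRequest).Wait();
}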

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, use the AWS console or the SDK of your choice.

Happy messaging!

Sync vs. Backup vs. Storage

Post Syndicated from Yev original https://www.backblaze.com/blog/sync-vs-backup-vs-storage/

Cloud Sync vs. Cloud Backup vs. Cloud Storage

Google recently announced Backup and Sync, a new feature for Google Drive that allows users to select folders on their computer to back up to their Google Drive account (note: these files count against your Google Drive storage limit). Whenever new backup services are announced, we get a lot of questions, so I thought we should take a minute to review the differences between cloud-based services.

What is the Cloud? Sync Vs Backup Vs Storage

There is still a lot of confusion in the space about what exactly the “cloud” is and how different services interact with it. When folks use a syncing and sharing service like Dropbox, Box, Google Drive, OneDrive or any of the others, they often assume those are acting as a cloud backup solution as well. Adding to the confusion, cloud storage services are often the backend for backup and sync services as well as standalone services. To help sort this out, we’ll define some of the terms below as they apply to a traditional computer set-up with a bunch of apps and data.

Cloud Sync (ex. Dropbox, iCloud Drive, OneDrive, Box, Google Drive) – these services sync folders on your computer to folders on other machines or to the cloud – allowing users to work from a folder or directory across devices. Typically these services have tiered pricing, meaning you pay for the amount of data you store with the service. If there is data loss, sometimes these services even have a rollback feature, of course only files that are in the synced folders are available to be recovered.

Cloud Backup (ex. Backblaze Cloud Backup, Mozy, Carbonite) – these services work in the background automatically. The user does not need to take any action like setting up specific folders. Backup services typically back up any new or changed data on your computer to another location. Before the cloud took off, that location was primarily a CD or an external hard drive – but as cloud storage became more readily available it became the most popular storage medium. Typically these services have fixed pricing, and if there is a system crash or data loss, all backed up data is available for restore. In addition, these services have rollback features in case there is data loss / accidental file deletion.

Cloud Storage (ex. Backblaze B2, Amazon S3, Microsoft Azure) – these services are where many online backup and syncing and sharing services store data. Cloud storage providers typically serve as the endpoint for data storage. These services typically provide APIs, CLIs, and access points for individuals and developers to tie in their cloud storage offerings directly. These services are priced “per GB” meaning you pay for the amount of storage that you use. Since these services are designed for high-availability and durability, data can live solely on these services – though we still recommend having multiple copies of your data, just in case.

What Should You Use?

Backblaze strongly believes in a 3-2-1 Backup Strategy. A 3-2-1 strategy means having at least 3 total copies of your data, 2 of which are local but on different mediums (e.g. an external hard drive in addition to your computer’s local drive), and at least 1 copy offsite. The best setup is data on your computer, a copy on a hard drive that lives somewhere not inside your computer, and another copy with a cloud backup provider. Backblaze Cloud Backup is a great complement to other services, like Time Machine, Dropbox, and even the free tiers of cloud storage services.

What Is the Difference Between Cloud Sync and Backup?

Let’s take a look at some sync setups that we see fairly frequently.

Example 1) Users have one folder on their computer that is designated for Dropbox, Google Drive, OneDrive, or another syncing/sharing service. Users save or place data into that directory when they want it to appear on other devices. Often these users are on the free tier of those services and only have a few GB of data uploaded to them.

Example 2) Users are paying for extended storage on Dropbox, Google Drive, OneDrive, etc., and use those folders as their “Documents” folder – essentially working out of those directories. Files in that folder are available across devices; however, files outside of that folder (e.g. living on the computer’s desktop or anywhere else) are not synced or stored by the service.

What both examples are missing, however, is a backup of the photos, movies, videos, and the rest of the data on the computer. That’s where cloud backup providers excel: they automatically back up user data with little or no setup, and no need to drag and drop files. Backblaze actually scans your hard drive to find all the data, regardless of where it might be hiding. The result is that all of the user’s data is kept in the Backblaze cloud, while the portion that is synced is also kept in the sync provider’s cloud – giving the user another layer of redundancy. Best of all, Backblaze will actually back up your Dropbox, iCloud Drive, Google Drive, and OneDrive folders.

Data Recovery

The most important feature to think about is how easy it is to get your data back from each of these services. With sync and share services, retrieving a lot of data, especially if you are in a high-data tier, can be cumbersome and take a while. Generally, sync and share services only allow customers to download files over the Internet, and if you are trying to download more than a couple of gigabytes of data, the process can take time and be fraught with errors.

With cloud storage services, you can usually only retrieve data over the Internet as well, and you pay for both the storage and the egress of the data, so retrieving a large amount of data can be both expensive and time-consuming.
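As a back-of-the-envelope illustration with purely hypothetical per-GB rates (check your provider’s actual price list), the math looks like this:

    stored_gb = 500         # size of a full restore
    storage_rate = 0.005    # hypothetical storage price, $/GB/month
    egress_rate = 0.05      # hypothetical download (egress) price, $/GB

    monthly_storage = stored_gb * storage_rate   # $2.50 per month
    one_full_restore = stored_gb * egress_rate   # $25.00, one time

    print(f"Storing {stored_gb} GB: ${monthly_storage:.2f}/month")
    print(f"Restoring {stored_gb} GB once: ${one_full_restore:.2f}")

At these made-up rates, a single full restore costs as much as ten months of storage, which is why large restores deserve some planning.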

Cloud backup services also let you download files over the Internet, and can likewise suffer from long download times. At Backblaze we never want our customers to feel like we’re holding their data hostage, which is why we offer a number of restore options, including our Restore Return Refund policy, which allows people to restore their data via a USB hard drive and then return that drive to us for a refund. Cloud sync providers do not offer this capability.

One popular data recovery approach we’ve seen when a person has a lot of data to restore is to download just the files that are needed immediately, then order a USB hard drive restore for the remaining files that are not as time-sensitive. The user gets all their files back within a few days, and their network is spared the marathon download.

The bottom line is that all of these services have merit for different use cases. Have questions about which is best for you? Sound off in the comments below!

The post Sync vs. Backup vs. Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

US Embassy Threatens to Close Domain Registry Over ‘Pirate Bay’ Domain

Post Syndicated from Andy original https://torrentfreak.com/us-embassy-threatens-to-close-domain-registry-over-pirate-bay-domain-170620/

Domains have become an integral part of the piracy wars and no one knows this better than The Pirate Bay.

The site has burned through numerous domains over the years, with copyright holders and authorities successfully pressuring registries to destabilize the site.

The latest news on this front comes from the Central American country of Costa Rica, where the local domain registry is having problems with the United States government.

The drama is detailed in a letter to ICANN penned by Dr. Pedro León Azofeifa, President of the Costa Rican Academy of Science, which operates NIC Costa Rica, the registry in charge of local .CR domain names.

Azofeifa’s letter is addressed to ICANN board member Thomas Schneider and pulls no punches. It claims that for the past two years the United States Embassy in Costa Rica has been pressuring NIC Costa Rica to take action against a particular domain.

“Since 2015, the United Estates Embassy in Costa Rica, who represents the interests of the United States Department of Commerce, has frequently contacted our organization regarding the domain name thepiratebay.cr,” the letter to ICANN reads.

“These interactions with the United States Embassy have escalated with time and include great pressure since 2016 that is exemplified by several phone calls, emails, and meetings urging our ccTLD to take down the domain, even though this would go against our domain name policies.”

The letter states that, following pressure from the US, the Costa Rican Ministry of Commerce carried out an investigation, which concluded that leaving the domain up was in line with best practices, under which a suspension requires a local court order. That didn’t satisfy the United States, though – far from it.

“The representative of the United States Embassy, Mr. Kevin Ludeke, Economic Specialist, who claims to represent the interests of the US Department of Commerce, has mentioned threats to close our registry, with repeated harassment regarding our practices and operation policies,” the letter to ICANN reads.

Ludeke is indeed listed on the US Embassy site for Costa Rica. He’s also referenced in a 2008 diplomatic cable leaked previously by Wikileaks. Contacted via email, Ludeke did not immediately respond to TorrentFreak’s request for comment.

Extract from the letter to ICANN

Surprisingly, Azofeifa says the US representative then got personal, making negative comments towards his Executive Director, “based on no clear evidence or statistical data to support his claims, as a way to pressure our organization to take down the domain name without following our current policies.”

Citing the Tunis Agenda for the Information Society of 2005, Azofeifa asserts that “policy authority for Internet-related public policy issues is the sovereign right of the States,” which in Costa Rica’s case means that there must be “a final judgment from the Courts of Justice of the Republic of Costa Rica” before the registry will suspend a domain.

But it seems legal action was not the preferred route of the US Embassy. Demanding that NIC Costa Rica take unilateral action, Mr. Ludeke continued with “pressure and harassment to take down the domain name without its proper process and local court order.”

Azofeifa’s letter to ICANN, which is cc’d to Stafford Fitzgerald Haney, United States Ambassador to Costa Rica and various people in the Costa Rican Ministry of Commerce, concludes with a request for suggestions on how to deal with the matter.

While the response should prove very interesting, none of the parties involved appear to have noticed that ThePirateBay.cr isn’t officially connected to The Pirate Bay.

The domain and associated site appeared in the wake of the December 2014 shutdown of The Pirate Bay, claiming to be the real deal and even going so far as to create fake accounts in the names of famous ‘pirate’ groups, including ettv and YIFY.

Today it acts as an unofficial and unaffiliated reverse proxy to The Pirate Bay while presenting the site’s content as its own. It’s also affiliated with a fake KickassTorrents site, Kickass.cd, which to this day claims that it’s a reincarnation of the defunct torrent giant.

But perhaps the most glaring issue in this worrying case is the apparent willingness of the United States to call out Costa Rica for failing to act against a .CR domain run by third parties, when the real Pirate Bay’s .org domain falls under United States jurisdiction.

Registered by the Public Interest Registry in Reston, Virginia, ThePirateBay.org is the famous site’s main domain. TorrentFreak asked PIR if anyone from the US government had ever requested action against the domain but at the time of publication, we had received no response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] User-space access to WMI functions

Post Syndicated from corbet original https://lwn.net/Articles/725725/rss

Windows Management Instrumentation (WMI) is a vaguely defined mechanism for
the control of platform-specific devices; laptop functions like special
buttons, LEDs, and the backlight are often controlled through WMI
interfaces. On Linux, access to WMI functions is restricted to the kernel,
while Windows allows user space to use them as well. A recent proposal to
make WMI functions available to user space in Linux as well spawned a
slow-moving conversation that turned on a couple of interesting questions —
only one of which was anticipated in the proposal itself.
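For a sense of what user-space WMI access looks like on the Windows side, here is a minimal sketch using the third-party Python wmi package; it queries a generic system class, whereas the Linux debate concerns vendor-specific firmware interfaces, so treat this purely as an illustration of the access model.

    # Windows-only sketch; pip install wmi (a wrapper over the pywin32 COM bindings)
    import wmi

    conn = wmi.WMI()  # connect to the local WMI service from an ordinary process

    # Enumerate a standard class; vendor firmware exposes its own classes the same way.
    for osinfo in conn.Win32_OperatingSystem():
        print(osinfo.Caption, osinfo.Version)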

Roku Sales Banned in Mexico Over Piracy Concerns

Post Syndicated from Ernesto original https://torrentfreak.com/roku-sales-banned-in-mexico-over-piracy-concerns-170619/

Online streaming piracy is on the rise, and many people use dedicated media players to bring that content to their regular TV.

While a lot of attention has been on Kodi, there are other players on the market that allow people to do the same. Roku, for example, has been doing very well too.

Like Kodi, Roku media players don’t offer any pirated content out of the box. In fact, they can be hooked up to a wide variety of legal streaming options including HBO Go, Hulu, and Netflix. Still, there is also a market for third-party pirate channels, outside the Roku Channel Store, which turn the boxes into pirate tools.

This pirate angle has now resulted in a ban on Roku sales in Mexico, according to a report in Milenio.

The ban was issued by the Superior Court of Justice of the City of Mexico, following a complaint from Cablevision. The order in question prohibits stores such as Amazon, Liverpool, El Palacio de Hierro, and Sears from importing and selling the devices.

In addition, the court also instructs banks including Banorte and BBVA Bancomer to stop processing payments from a long list of accounts linked to pirated services on Roku.

The main reason for the order is the availability of pirated content through Roku, but banning the device itself is a remarkably broad measure. It would be similar to banning all Android-based devices because certain apps allow users to stream copyrighted content without permission.

Roku

Roku has yet to release an official statement on the court order. TorrentFreak reached out to the company but hadn’t heard back at the time of publication.

It’s clear, however, that streaming players are among the top concerns for copyright holders. Motion Picture Association boss Stan McCoy recently characterized the use of streaming players to access infringing content as “Piracy 3.0.”

“If you think of old-fashioned peer-to-peer piracy as 1.0, and then online illegal streaming websites as 2.0, in the audio-visual sector, in particular, we now face challenge number 3.0, which is what I’ll call the challenge of illegal streaming devices,” McCoy said earlier this month.

Unlike the court order in Mexico, however, McCoy stressed that the devices themselves, and software such as Kodi, are ‘probably’ not illegal. However, copyright-infringing pirate add-ons have the capability to turn them into an unprecedented piracy threat.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Technique to Hijack Social Media Accounts

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/new_technique_t.html

Access Now has documented it being used against a Twitter user, but it also works against other social media accounts:

With the Doubleswitch attack, a hijacker takes control of a victim’s account through one of several attack vectors. People who have not enabled an app-based form of multifactor authentication for their accounts are especially vulnerable. For instance, an attacker could trick you into revealing your password through phishing. If you don’t have multifactor authentication, you lack a secondary line of defense. Once in control, the hijacker can then send messages and also subtly change your account information, including your username. The original username for your account is now available, allowing the hijacker to register for an account using that original username, while providing different login credentials.

Three news stories.

Comodo DNS Blocks TorrentFreak Over “Hacking and Warez”

Post Syndicated from Ernesto original https://torrentfreak.com/comodo-dns-blocks-torrentfreak-over-hacking-and-warez-170617/

Website blocking has become one of the go-to methods for reducing online copyright infringement.

In addition to court-ordered blockades, various commercial vendors also offer a broad range of blocking tools. This includes Comodo, which offers a free DNS service that keeps people away from dangerous sites.

The service, labeled SecureDNS, is part of the Comodo Internet Security bundle but can also be used by the general public free of charge. Just change the DNS settings on your computer or any other device, and you’re ready to go.

“As a leading provider of computer security solutions, Comodo is keenly aware of the dangers that plague the Internet today. SecureDNS helps users keep safe online with its malware domain filtering feature,” the company explains.

Aside from malware and spyware, Comodo also blocks access to sites that offer access to pirated content. Or put differently, they try to do this. But it’s easier said than done.

This week we were alerted to the fact that Comodo blocks direct access to TorrentFreak. Those who try to access our news site get an ominous warning instead, suggesting that we might share pirated content.

“This website has been blocked temporarily because of the following reason(s): Hacking/Warez: Site may offer illegal sharing of copyrighted software or media,” the warning reads, adding that several users also reported the site to be unsafe.

TorrentFreak blocked

People can still access the site by clicking on a big red cross, although that’s something Comodo doesn’t recommend. However, it is quite clear that new readers will be pretty spooked by the alarming message.

We assume that TorrentFreak was added to Comodo’s blocklist by mistake. And while mistakes can happen anywhere, this once again shows that overblocking is a serious concern.

We were lucky that readers alerted us to the problem; in other cases it could easily go unnoticed.

Interestingly, the ‘piracy’ blocklist is not as stringent as the warning above would suggest. After replicating the issue, we also checked several well-known ‘pirate’ sites, including The Pirate Bay, RARBG, GoMovies, and Pubfilm. All of them could be accessed through SecureDNS without any warning.
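For readers who want to replicate this kind of spot check, here is a minimal sketch using the dnspython library; the resolver addresses shown are the ones Comodo has published for SecureDNS, but treat them as an assumption to verify.

    # pip install dnspython (2.x provides resolver.resolve)
    import dns.exception
    import dns.resolver

    SECUREDNS = ["8.26.56.26", "8.20.247.20"]  # assumed Comodo SecureDNS addresses
    REFERENCE = ["8.8.8.8"]                    # Google Public DNS, for comparison

    def lookup(domain, nameservers):
        """Return the sorted A records a given set of resolvers answers for a domain."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = nameservers
        try:
            return sorted(rdata.address for rdata in resolver.resolve(domain, "A"))
        except dns.exception.DNSException as exc:
            return ["error: " + exc.__class__.__name__]

    for domain in ["torrentfreak.com", "thepiratebay.org"]:
        secure, reference = lookup(domain, SECUREDNS), lookup(domain, REFERENCE)
        print(domain, "SecureDNS:", secure, "reference:", reference)

A blocking resolver usually answers with a sinkhole address rather than a failure, and CDN-hosted domains can legitimately return different addresses from different resolvers, so a mismatch is a hint to investigate rather than proof of a block.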

TorrentFreak contacted Comodo for a comment on their curious blocking efforts, but we have yet to hear back from the company. In the meantime, Comodo SecureDNS users may want to consider switching to a more open DNS provider.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.