Tag Archives: domain name

GoDaddy to Suspend ‘Pirate’ Domain Following Music Industry Complaints

Post Syndicated from Andy original https://torrentfreak.com/godaddy-to-suspend-pirate-domain-following-music-industry-complaints-180601/

Most piracy-focused sites online conduct their business with minimal interference from outside parties. In many cases, a heap of DMCA notices filed with Google represents the most visible irritant.

Others, particularly those with large audiences, can find themselves on the end of a web blockade. Mostly court-ordered, these blocking measures restrict Internet users’ ability to visit a site by requiring ISPs to restrict traffic to it.

In some regions, where copyright holders have the means to do so, they choose to tackle a site’s infrastructure instead, which could mean complaints to web hosts or other service providers. At times, this has included domain registries, which are asked to disable domains on copyright grounds.

This is exactly what has happened to Fox-MusicaGratis.com, a Spanish-language music piracy site that incurred the wrath of IFPI member UNIMPRO – the Peruvian Union of Phonographic Producers.

Pirate music, suspended domain

In a process that’s becoming more common in the region, UNIMPRO initially filed a complaint with the Copyright Commission (Comisión de Derecho de Autor (CDA)) which conducted an investigation into the platform’s activities.

“The CDA considered, among other things, the irreparable damage that would have been caused to the legitimate rights owners, taking into account the large number of users who could potentially have visited said website, which was making available endless musical recordings for commercial purposes, without authorization of the holders of rights,” a statement from CDA reads.

The administrative process was carried out locally with the involvement of the National Institute for the Defense of Competition and the Protection of Intellectual Property (Indecopi), an autonomous public body tasked with handling anti-competitive behavior, unfair competition, and intellectual property matters.

Indecopi HQ

The matter was decided in favor of the rightsholders and a subsequent ruling included an instruction for US-based domain registrar GoDaddy to suspend Fox-MusicaGratis.com. According to the copyright protection entity, GoDaddy agreed to comply, to prevent further infringement.

This latest action involving a music piracy site registered with GoDaddy follows on the heels of a similar enforcement process back in March.

Mp3Juices-Download-Free.com, Melodiavip.net, Foxmusica.site and Fulltono.me were all music sites offering MP3 content without copyright holders’ permission. They too were the subject of an UNIMPRO complaint which resulted in orders for GoDaddy to suspend their domains.

In the cases of all five websites, GoDaddy was given the chance to appeal but there is no indication that the company has done so. GoDaddy did not respond to a request for comment.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Replacing macOS Server with Synology NAS

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/replacing-macos-server-with-synology-nas/

Synology NAS boxes backed up to the cloud

Businesses and organizations that rely on macOS Server for essential office and data services are facing some decisions about the future of their IT services.

Apple recently announced that it is deprecating a significant portion of essential network services in macOS Server, as they described in a support statement posted on April 24, 2018, “Prepare for changes to macOS Server.” Apple’s note includes:

macOS Server is changing to focus more on management of computers, devices, and storage on your network. As a result, some changes are coming in how Server works. A number of services will be deprecated, and will be hidden on new installations of an update to macOS Server coming in spring 2018.

The note lists the services that will be removed in a future release of macOS Server, including calendar and contact support, Dynamic Host Configuration Protocol (DHCP), Domain Name Services (DNS), mail, instant messages, virtual private networking (VPN), NetInstall, Web server, and the Wiki.

Apple assures users who have already configured any of the listed services that they will be able to use them in the spring 2018 macOS Server update, but the statement ends with links to a number of alternative services, including hosted services, that macOS Server users should consider as viable replacements for the features being removed.

As difficult as this could be for organizations that use macOS Server, this is not unexpected. Apple left the server hardware space back in 2010, when Steve Jobs announced the company was ending its line of Xserve rackmount servers, which were introduced in May 2002. Since then, macOS Server has hardly been a prominent part of Apple’s product lineup. It’s not just the product itself that has lost some luster, but the entire category of SMB office and business servers, which has been undergoing a gradual change in recent years.

Some might wonder how important the news about macOS Server is, given that macOS Server represents a pretty small share of the server market. macOS Server has been important to design shops, agencies, education users, and small businesses that likely have been on Macs for ages, but it’s not a significant part of the IT infrastructure of larger organizations and businesses.

What Comes After macOS Server?

Lovers of macOS Server don’t have to fear having their Mac minis pried from their cold, dead hands quite yet. Installed services will continue to be available. In the spring of 2018, new installations and upgrades of macOS Server will require users to migrate most services to other software. Since many of the services in macOS Server were already open source, a change in software might not be required. It does mean more configuration and management will be required of those who continue with macOS Server, however.

Users can continue with macOS Server if they wish, but many will see the writing on the wall and look for a suitable substitute.

The Times They Are A-Changin’

For many people working in organizations, what is significant about this announcement is how it reflects the move away from the once ubiquitous server-based IT infrastructure. Services that used to be centrally managed and office-based, such as storage, file sharing, communications, and computing, have moved to the cloud.

In selecting the next office IT platforms, there’s an opportunity to move to solutions that reflect and support how people are working and the applications they are using both in the office and remotely. For many, this means including cloud-based services in office automation, backup, and business continuity/disaster recovery planning. This includes Software as a Service, Platform as a Service, and Infrastructure as a Service (SaaS, PaaS, IaaS) options.

IT solutions that integrate well with the cloud are worth strong consideration for what comes after a macOS Server-based environment.

Synology NAS as a macOS Server Alternative

One solution that is becoming popular is to replace macOS Server with a device that has the ability to provide important office services, but also bridges the office and cloud environments. Using Network-Attached Storage (NAS) to take up the server slack makes a lot of sense. Many customers are already using NAS for file sharing, local data backup, automatic cloud backup, and other uses. In the case of Synology, their operating system, Synology DiskStation Manager (DSM), is Linux based, and integrates the basic functions of file sharing, centralized backup, RAID storage, multimedia streaming, virtual storage, and other common functions.


Synology NAS

Since DSM is based on Linux, there are numerous server applications available, including many of the same ones offered for macOS Server; macOS shares conceptual roots with Linux through its BSD Unix heritage.


Synology DiskStation Manager Package Center

According to Ed Lukacs, COO at 2FIFTEEN Systems Management in Salt Lake City, their customers have found the move from macOS Server to Synology NAS not only painless, but positive. DSM works seamlessly with macOS and has been faster for their customers, as well. Many of their customers are running Adobe Creative Suite and Google G Suite applications, so a workflow that combines local storage, remote access, and the cloud, is already well known to them. Remote users are supported by Synology’s QuickConnect or VPN.

Business continuity and backup are simplified by the flexible storage capacity of the NAS. Synology has built-in backup to Backblaze B2 Cloud Storage with Synology’s Cloud Sync, as well as a choice of a number of other B2-compatible applications, such as Cloudberry, Comet, and Arq.

Customers have been able to get up and running quickly, with only initial data transfers requiring some time to complete. After that, management of the NAS can be handled in-house or with the support of a Managed Service Provider (MSP).

Are You Sticking with macOS Server or Moving to Another Platform?

If you’re affected by this change in macOS Server, please let us know in the comments how you’re planning to cope. Are you using Synology NAS for server services? Please tell us how that’s working for you.

The post Replacing macOS Server with Synology NAS appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

ISP Telenor Will Block The Pirate Bay in Sweden Without a Shot Fired

Post Syndicated from Andy original https://torrentfreak.com/isp-telenor-will-block-the-pirate-bay-in-sweden-without-a-shot-fired-180520/

Back in 2014, Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry filed a lawsuit against Bredbandsbolaget, one of Sweden’s largest ISPs.

The copyright holders asked the Stockholm District Court to order the ISP to block The Pirate Bay and streaming site Swefilmer, claiming that the provider knowingly facilitated access to the pirate platforms and assisted their pirating users.

Soon after, the ISP fought back, refusing to block the sites in a determined response to the Court.

“Bredbandsbolaget’s role is to provide its subscribers with access to the Internet, thereby contributing to the free flow of information and the ability for people to reach each other and communicate,” the company said in a statement.

“Bredbandsbolaget does not block content or services based on individual organizations’ requests. There is no legal obligation for operators to block either The Pirate Bay or Swefilmer.”

In February 2015 the parties met in court, with Bredbandsbolaget arguing in favor of the “important principle” that ISPs should not be held responsible for content exchanged over the Internet, in the same way the postal service isn’t responsible for the contents of an envelope.

But with TV companies SVT, TV4 Group, MTG TV, SBS Discovery and C More teaming up with the IFPI alongside Paramount, Disney, Warner and Sony in the case, Bredbandsbolaget would need to pull out all the stops to obtain victory. The company worked hard and initially the news was good.

In November 2015, the Stockholm District Court decided that the copyright holders could not force Bredbandsbolaget to block the pirate sites, ruling that the ISP’s operations did not amount to participation in the copyright infringement offenses carried out by some of its ‘pirate’ subscribers.

However, the case subsequently went to appeal, with the brand new Patent and Market Court of Appeal hearing arguments. In February 2017 it handed down its decision, which overruled the earlier ruling of the District Court and ordered Bredbandsbolaget to implement “technical measures” to prevent its customers accessing the ‘pirate’ sites through a number of domain names and URLs.

With nowhere left to go, Bredbandsbolaget and owner Telenor were left hanging onto their original statement which vehemently opposed site-blocking.

“It is a dangerous path to go down, which forces Internet providers to monitor and evaluate content on the Internet and block websites with illegal content in order to avoid becoming accomplices,” they said.

In March 2017, Bredbandsbolaget blocked The Pirate Bay but said it would not give up the fight.

“We are now forced to contest any future blocking demands. It is the only way for us and other Internet operators to ensure that private players should not have the last word regarding the content that should be accessible on the Internet,” Bredbandsbolaget said.

While it’s not clear whether any additional blocking demands have been filed with the ISP, this week an announcement by Bredbandsbolaget parent company Telenor revealed an unexpected knock-on effect. Seemingly without a single shot being fired, The Pirate Bay will now be blocked by Telenor too.

The background lies in Telenor’s acquisition of Bredbandsbolaget back in 2005. Until this week the companies operated under separate brands but will now merge into one entity.

“Telenor Sweden and Bredbandsbolaget today take the final step on their joint trip and become the same company with the same name. As a result, Telenor becomes a comprehensive provider of broadband, TV and mobile communications,” the company said in a statement this week.

“Telenor Sweden and Bredbandsbolaget have shared both logo and organization for the last 13 years. Today, we take the last step in the relationship and consolidate the companies under the same name.”

Up until this final merger, 600,000 Bredbandsbolaget broadband customers were denied access to The Pirate Bay. Now it appears that Telenor’s 700,000 fiber and broadband customers will be affected too. The new single-brand company says it has decided to block the notorious torrent site across its entire network.

“We have not discontinued Bredbandsbolaget, but we have merged Telenor and Bredbandsbolaget and become one,” the company said.

“When we share the same network, The Pirate Bay is blocked by both Telenor and Bredbandsbolaget and there is nothing we plan to change in the future.”

TorrentFreak contacted the PR departments of both Telenor and Bredbandsbolaget requesting information on why a court order aimed only at the latter’s customers would now affect those of the former too, more than doubling the blockade’s reach. Neither company responded, which leaves only speculation as to their motives.

On the one hand, the decision to voluntarily implement an expanded blockade could perhaps be viewed as a little unusual given how much time, effort and money has been invested in fighting web-blockades in Sweden.

On the other, the merger of the companies may present legal difficulties as far as the court order goes and it could certainly cause friction among the customer base of Telenor if some customers could access TPB, and others could not.

In any event, the legal basis for web-blocking on copyright infringement grounds was firmly established last year at the EU level, which means that Telenor would lose any future legal battle, should it decide to dig in its heels. On that basis alone, the decision to block all customers probably makes perfect commercial sense.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Developer Accidentally Makes Available 390,000 ‘Pirated’ eBooks

Post Syndicated from Andy original https://torrentfreak.com/developer-accidentally-makes-available-390000-pirated-ebooks-180509/

Considering the effort it takes to set one up, pirate sites are clearly always intentional. One doesn’t make available hundreds of thousands of potentially infringing works accidentally.

Unless you’re developer Nick Janetakis, that is.

“About 2 years ago I was recording a video course that dealt with setting up HTTPS on a domain name. In all of my courses, I make sure to ‘really’ do it on video so that you can see the entire process from end to end,” Nick wrote this week.

“Back then I used nickjanetakis.com for all of my courses, so I didn’t have a dedicated domain name for the course I was working on.”

So instead, Nick set up an A record pointing ssl.nickjanetakis.com at a DigitalOcean droplet (a cloud server), so that anyone visiting the sub-domain would reach the droplet and his content.

That was all very straightforward and all Nick needed to do was delete the A record after he was done to ensure that he wasn’t pointing to someone else’s IP address when the droplet was eventually allocated to someone else. But he forgot, with some interesting side effects that didn’t come to light until years later.
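
The simplest defense against this kind of mistake is a periodic audit of your DNS records against the addresses you actually control. A minimal sketch in Python (the hostnames and IPs are placeholders; 203.0.113.0/24 is a reserved documentation range):

import socket

OWNED_IPS = {"203.0.113.10", "203.0.113.11"}      # addresses you still control
RECORDS = ["ssl.example.com", "www.example.com"]  # names you have pointed at servers

for name in RECORDS:
    try:
        ip = socket.gethostbyname(name)
    except socket.gaierror:
        print(name, "does not resolve (nothing to reclaim)")
        continue
    if ip in OWNED_IPS:
        print(name, "-> OK:", ip)
    else:
        print(name, "-> possibly dangling, points at", ip)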

“I have Google Alerts set up so I get emailed when people link to my site. A few months ago I started to receive an absurd amount of notifications, but I ignored them. I chalked it up to ‘Google is probably on drugs’,” Nick explains.

However, the developer paid more attention when he received an email from a subscriber to his courses who warned that Nick’s site might have been compromised. A Google search revealed a worrying amount of apparently unauthorized eBook content being made available via Nick’s domain.

350,000 items? Whoops! (credit: Nick Janetakis)

Of course, Nick wasn’t distributing any content himself, but as far as Google was concerned, his domain was completely responsible. For confirmation, TorrentFreak looked up Nick’s domain on Google’s Transparency report and found at least nine copyright holders and two reporting organizations complaining of copyright infringement.

“No one from Google contacted me and none of the copyright infringement people reached out to me. I wish they would have,” Nick told us.

The earliest complaint was filed with Google on April 22, 2018, suggesting that the IP address/domain name collision causing the supposed infringement took place fairly recently. From there came a steady flow of reports, but not the tidal wave one might have expected given the volume of results.

Complaints courtesy of LumenDatabase.org

A little puzzled, TorrentFreak asked Nick if he’d managed to find out from DigitalOcean which pirates had been inadvertently using his domain. He said he’d asked, but the company wouldn’t assist.

“I asked DigitalOcean to get the email contact of the person who owned the IP address but they denied me. I just wanted to know for my own sanity,” he says.

With results now dropping off Google very quickly, TF carried out some tests using Google’s cache. None of the tests led us to any recognizable pirate site but something was definitely amiss.

The ‘pirate’ links (which can be found using a ‘site:ssl.nickjanetakis.com’ search in Google) open documents (sample) which contain links to the domain BookFreeNow.com, which looks very much like a pirate site but suggests it will only hand over PDF files after the user joins up, ostensibly for free.

However, experience with this kind of platform tells us that eventually, there would probably be some kind of cost involved, if indirect.



So, after clicking the registration link (or automatically, if you wait a few seconds) we weren’t entirely shocked when we were redirected briefly to an affiliate site that pays generously. From there we were sent to an advert server which caused a MalwareBytes alert, which was enough for us to back right out of there.

While something amazing might have sat behind the doors of BookFreeNow, we suspect that rather than being a regular pirate site, it’s actually set up to give the impression of being one, in order to generate business in other ways.

Certainly, copyright holders are suspicious of it, and have sent numerous complaints to Google.

In any event, Nick Janetakis should be very grateful that his domain is no longer connected to the platform since a basic pirate site, while troublesome, would be much more straightforward to explain. In the meantime, Nick has some helpful tips on how to avoid such a situation in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Pirate IPTV Blocking Case is No Slam Dunk Says Federal Court Judge

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-blocking-case-is-no-slam-dunk-says-federal-court-judge-180502/

Last year, Hong Kong-based broadcaster Television Broadcasts Limited (TVB) applied for a blocking injunction against several unauthorized IPTV services.

Under the Copyright Act, the broadcaster asked the Federal Court to order ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

Unlike torrent site and streaming portal blocks granted earlier, it soon became clear that this case would present unique difficulties. TVB not only wants Internet locations (URLs, domains, IP addresses) related to the technical operation of the services blocked, but also hosting services akin to Google Play and Apple’s App Store that host the app.

Furthermore, it is far from clear whether China-focused live programming is eligible for copyright protection in Australia. If China had been a party to the 1961 Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations, it would receive protection. As it stands, it does not.

That causes complications in respect of Section 115A of the Copyright Act, which allows rightsholders to apply for an injunction to have “overseas online locations” blocked if they facilitate access to copyrighted content. Furthermore, the section requires that the “primary purpose” of the location is to infringe copyrights recognized in Australia. If it does not, then there’s no blocking option available.

“If most of what is occurring here is a reproduction of broadcasts that are not protected by copyright, then the primary purpose is not to facilitate copyright infringement,” Justice Nicholas said in April.

This morning TVB returned to Federal Court for a scheduled hearing. The ISPs were a no-show again, leaving the broadcaster’s legal team to battle it out with Justice Nicholas alone. According to details published by ComputerWorld, he isn’t making it easy for the overseas company.

The Judge put it to TVB that “the purpose of this system [the set-top boxes] is to make available a broadcast that’s not copyright protected in this country, in this country,” he said.

“If 10 per cent of the content was infringing content, how could you say the primary purpose is infringing copyright?” the Judge asked.

But despite the Judge’s reservations, TVB believes that the pirate IPTV services clearly infringe its rights, since alongside live programming, the devices also reproduce TVB movies which do receive protection in Australia. However, the company is also getting creative in an effort to sidestep the ‘live TV’ conundrum.

TVB counsel Julian Cooke told the Court that live TVB broadcasts are first reproduced on foreign servers from where they are communicated to set-top devices in Australia with a delay of between one and four minutes. This is a common feature of all pirate IPTV services which potentially calls into question the nature of the ‘live’ broadcasts. The same servers also carry recorded content too, he argued.

“Because the way the system is set up, it compounds itself … in a number of instances, a particular domain name, which we refer to as the portal target domain name, allows a communication path not just to live TV, but it’s also the communication path to other applications such as replay and video on demand,” Cooke said, as quoted by ZDNet.

Cooke told the Court that he wasn’t sure whether the threshold for “primary purpose” was set at 50% of infringing content but noted that the majority of the content available through the boxes is infringing and the nature of the servers is even more pronounced.

“It compounds the submission that the primary purpose of the online location which is the facilitating server is to facilitate the infringement of copyright using that communication path,” he said.

As TF predicted in our earlier coverage, TVB today got creative by highlighting other content that does receive copyright protection for it in Australia. Previously in the UK, the Premier League successfully argued that it owns copyright in the logos presented in a live broadcast.

This morning, Cooke told the court that TVB “literary works” – scripts used on news shows and subtitling services – receive copyright protection in Australia, and he urged the Court to consider the full package.

“If one had concerns about live TV, one shouldn’t based on the analysis we’ve done … if one adds that live TV infringements together with video on demand together with replay, there could be no doubt that the primary purpose of the online locations is to infringe copyright,” he said.

Due to the apparent complexity of the case, Justice Nicholas reserved his decision, telling TVB that his ruling could take a couple of months after receiving his “close attention.”

Last week, Village Roadshow and several major Hollywood studios won a blocking injunction against a different pirate IPTV service. HD Subs Plus delivers around 600 live premium channels plus hundreds of movies on demand, but the service will now be blocked by ISPs across Australia.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Enhanced Domain Protections for Amazon CloudFront Requests

Post Syndicated from Colm MacCarthaigh original https://aws.amazon.com/blogs/security/enhanced-domain-protections-for-amazon-cloudfront-requests/

Over the coming weeks, we’ll be adding enhanced domain protections to Amazon CloudFront. The short version is this: the new measures are designed to ensure that requests handled by CloudFront are handled on behalf of legitimate domain owners.

Using CloudFront to receive traffic for a domain you aren’t authorized to use is already a violation of our AWS Terms of Service. When we become aware of this type of activity, we deal with it behind the scenes by disabling abusive accounts. Now we’re integrating checks directly into the CloudFront API and Content Distribution service, as well.

Enhanced Protection against Dangling DNS Entries
To use CloudFront with your domain, you must configure your domain to point at CloudFront. You may use a traditional CNAME, or an Amazon Route 53 “ALIAS” record.

A problem can arise if you delete your CloudFront distribution, but leave your DNS still pointing at CloudFront, popularly known as a “dangling” DNS entry. Thankfully, this is very rare, as the domain will no longer work, but we occasionally see customers who leave their old domains dormant. This can also happen if you leave this kind of “dangling” DNS entry pointing at other infrastructure you no longer control. For example, if you leave a domain pointing at an IP address that you don’t control, then there is a risk that someone may come along and “claim” traffic destined for your domain.

In an even more rare set of circumstances, an abuser can exploit a subdomain of a domain that you are actively using. For example, if a customer left “images.example.com” dangling and pointing to a deleted CloudFront distribution which is no longer in use, but they still actively use the parent domain “example.com”, then an abuser could come along and register “images.example.com” as an alternative name on their own distribution and claim traffic that they aren’t entitled to. This also means that cookies may be set and intercepted for HTTP traffic potentially including the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution.

Of course, the best fix for this kind of risk is not to leave dangling DNS entries in the first place. Earlier in February 2018, we added a new warning to our systems. With this warning, if you remove an alternate domain name from a distribution, you are reminded to delete any DNS entries that may still be pointing at CloudFront.

We also have long-standing checks in the CloudFront API that ensure this kind of domain claiming can’t occur when you are using wildcard domains. If you attempt to add *.example.com to your CloudFront distribution, but another account has already registered www.example.com, then the attempt will fail.

With the new enhanced domain protection, CloudFront will now also check your DNS whenever you remove an alternate domain. If we determine that the domain is still pointing at your CloudFront distribution, the API call will fail and no other accounts will be able to claim this traffic in the future.
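
You can run a comparable hygiene check yourself before deleting a distribution. A minimal sketch, assuming the boto3 and dnspython libraries and a hypothetical distribution ID; note that Route 53 ALIAS records answer as A records, so this only covers the plain CNAME case:

import boto3
import dns.resolver

cf = boto3.client("cloudfront")
dist = cf.get_distribution(Id="E2EXAMPLE12345")  # hypothetical distribution ID
aliases = dist["Distribution"]["DistributionConfig"]["Aliases"].get("Items", [])

for name in aliases:
    try:
        answers = dns.resolver.resolve(name, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(name, "has no CNAME record")
        continue
    for rdata in answers:
        target = str(rdata.target).rstrip(".")
        if target.endswith("cloudfront.net"):
            print(name, "still points at CloudFront:", target)
        else:
            print(name, "points elsewhere:", target)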

Enhanced Protection against Domain Fronting
CloudFront will also soon be implementing enhanced protections against so-called “Domain Fronting”. Domain Fronting is when a non-standard client makes a TLS/SSL connection to a certain name, but then makes an HTTPS request for an unrelated name. For example, the TLS connection may connect to “www.example.com” but then issue a request for “www.example.org”.

In certain circumstances this is normal and expected. For example, browsers can re-use persistent connections for any domain that is listed in the same SSL Certificate, and these are considered related domains. But in other cases, tools including malware can use this technique between completely unrelated domains to evade restrictions and blocks that can be imposed at the TLS/SSL layer.

To be clear, this technique can’t be used to impersonate domains. The clients are non-standard and are working around the usual TLS/SSL checks that ordinary clients impose. But clearly, no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain. Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties.

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

Aussie Federal Court Orders ISPs to Block Pirate IPTV Service

Post Syndicated from Andy original https://torrentfreak.com/aussie-federal-court-orders-isps-to-block-pirate-iptv-service-180427/

After successfully applying for ISP blocks against dozens of traditional torrent and streaming portals, Village Roadshow and a coalition of movie studios switched tack last year.

With the threat of pirate subscription IPTV services looming large, Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount targeted HDSubs+ (also known as PressPlayPlus), a fairly well-known service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee.

The injunction, which was filed last October, targets Australia’s largest ISPs including Telstra, Optus, TPG, and Vocus, plus subsidiaries.

Unlike blocking injunctions targeting regular sites, the studios sought to have several elements of HD Subs+ infrastructure rendered inaccessible, so that its sales platform, EPG (electronic program guide), software (such as an Android and set-top box app), updates, and sundry other services would fail to operate in Australia.

After a six month wait, the Federal Court granted the application earlier today, compelling Australia’s ISPs to block “16 online locations” associated with the HD Subs+ service, rendering its TV services inaccessible Down Under.

“Each respondent must, within 15 business days of service of these orders, take reasonable steps to disable access to the target online locations,” said Justice Nicholas, as quoted by ZDNet.

A small selection of channels in the HDSubs+ package

The ISPs were given flexibility in how to implement the ban, with the Judge noting that DNS blocking, IP address blocking or rerouting, URL blocking, or “any alternative technical means for disabling access”, would be acceptable.

The rightsholders are required to pay a fee of AU$50 for each domain they want to block, but Village Roadshow says it doesn’t mind doing so, since blocking is in the “public interest”. Continuing a pattern established last year, none of the ISPs showed up to the judgment.

A similar IPTV blocking application was filed by Hong Kong-based broadcaster Television Broadcasts Limited (TVB) last year.

TVB wants ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

The application was previously heard alongside the HD Subs+ case but will now be handled separately following complications. In April it was revealed that TVB not only wants to block Internet locations related to the technical operation of the service, but also hosting sites that fulfill a role similar to that of Google Play or Apple’s App Store.

TVB wants to have these app marketplaces blocked by Australian ISPs, which would render not only the illicit apps inaccessible to the public but all of the non-infringing ones too.

Justice Nicholas will now have to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, there is also a question of whether China-focused live programming has copyright status in Australia. An additional hearing is scheduled for May 2 for these matters to be addressed.

Also on Friday, Foxtel filed yet another blocking application targeting “15 online locations” involving 27 domain names connected to traditional BitTorrent and streaming services.

According to ComputerWorld the injunction targets the same set of ISPs but this time around, Foxtel is trying to save on costs.

The company doesn’t want to have expert witnesses present in court, doesn’t want to stage live demos of websites, and would like to rely on videos and screenshots instead. Foxtel also says that if the ISPs agree, it won’t serve its evidence on them as it has done previously.

The company asked Justice Nicholas to deal with the injunction application “on paper” but he declined, setting a hearing for June 18 but accepting screenshots and videos as evidence.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).

You associate the DNS-VPC with the unique hosted zone in each of the participating accounts; this allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in private hosted zones in participating accounts.

The following diagram shows how the various services work together:
 


Figure 1: Diagram showing the relationship between all the various services

 

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 


Figure 2: How domain resolution across accounts works

 

  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server, which the DHCP option set defines as AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: The on-premises DNS server, using a conditional forwarder, forwards the lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In this post, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS, and to forward example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53. Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate VPC(s) in each participating account with the local private hosted zone.

The next step is to change the default DNS servers on each VPC using a DHCP option set:

  1. Follow these steps to create a new DHCP option set. In the DNS Servers field, make sure to put the private IP addresses of the two AWS Managed Microsoft AD servers that were created in DNS-VPC:
     
    The "Create DHCP options set" dialog box

    Figure 3: The “Create DHCP options set” dialog box

     

  2. Follow these steps to assign the DHCP option set to your VPC(s) in each participating account. (A scripted sketch of both steps follows below.)
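
If you prefer to script those two steps rather than click through the console, the boto3 equivalent looks roughly like this; the DNS server IPs and the VPC ID are placeholders for your AWS Managed Microsoft AD addresses and the participating account’s VPC:

import boto3

ec2 = boto3.client("ec2")

# Create an option set pointing at the two AWS Managed Microsoft AD DNS servers.
options = ec2.create_dhcp_options(DhcpConfigurations=[
    {"Key": "domain-name-servers", "Values": ["10.0.0.10", "10.0.0.11"]},
])

# Attach the option set to the participating account's VPC.
ec2.associate_dhcp_options(
    DhcpOptionsId=options["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",
)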

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.
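
With many participating accounts, the authorize-then-associate pair is worth scripting. A sketch using boto3, assuming two named CLI profiles and placeholder zone and VPC IDs:

import boto3

VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"}  # DNS-VPC
ZONE_ID = "Z0123456789EXAMPLE"  # private hosted zone in the participating account

# Run with participating-account credentials: authorize the association.
participating = boto3.Session(profile_name="participating-account")
participating.client("route53").create_vpc_association_authorization(
    HostedZoneId=ZONE_ID, VPC=VPC)

# Run with central DNS account credentials: perform the association.
central = boto3.Session(profile_name="central-dns-account")
central.client("route53").associate_vpc_with_hosted_zone(
    HostedZoneId=ZONE_ID, VPC=VPC)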

Step 4: Setting up on-premises DNS servers

This step is necessary if you would like to resolve AWS private domains from on-premises servers. The task comes down to configuring on-premises forwarders to send DNS queries to AWS Managed Microsoft AD in DNS-VPC for all domains in the awscloud.com zone.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that can also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, Slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The term “bot” was the fourth most-registered domain keyword within the .COM TLD in 2016, with more than 6,000 registrations per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhancing SEO performance.

At the time of this writing, .BOT domains start at $75 each and must be verified and published with a supported tool such as Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time, and if your favorite bot framework isn’t supported, feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex so I’ll select that in the drop down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else I would need to follow the unique verification instructions for that particular framework.

To verify my Lex bot I need to give the Amazon Registry permission to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read-only permissions for our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it, then I’ll click create to create the role and be taken to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet, you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed, but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally, I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together so I’ll pass on the Site Builder side of things.

 

After I click continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration, we should receive an email with some good news:

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use this to host my new domain. The first thing I’ll do is navigate to the Route53 console and create a hosted zone with the same name as my domain.
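
If you’d rather script this step, a boto3 sketch of creating the hosted zone and fetching its NS records might look like the following (CallerReference just needs to be unique per request):

import uuid
import boto3

r53 = boto3.client("route53")
zone = r53.create_hosted_zone(Name="whereml.bot",
                              CallerReference=str(uuid.uuid4()))

# These NS records become the authoritative nameservers set at the registrar.
for ns in zone["DelegationSet"]["NameServers"]:
    print(ns)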


Great! Now, I need to take the Name Server (NS) records that Route53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.

Next Steps

  • I could and should add to the security of my site by creating TLS certificates for people who intend to access my domain over TLS. Luckily with AWS Certificate Manager (ACM) this is extremely straightforward and I’ve got my subdomains and root domain verified in just a few clicks.
  • I could create a CloudFront distribution to front an S3 static single-page application to host my entire chatbot and invoke Amazon Lex with a Cognito identity right from the browser.

Randall

Registrars Suspend 11 Pirate Site Domains, 89 More in the Crosshairs

Post Syndicated from Andy original https://torrentfreak.com/registrars-suspend-11-pirate-site-domains-89-more-in-the-crosshairs-180423/

In addition to website blocking which is running rampant across dozens of countries right now, targeting the domains of pirate sites is considered to be a somewhat effective anti-piracy tool.

The vast majority of websites are found using a recognizable name so when they become inaccessible, site operators have to work quickly to get the message out to fans. That can mean losing visitors, at least in the short term, and also contributes to the rise of copy-cat sites that may not have users’ best interests at heart.

Nevertheless, crime-fighting has always been about disrupting the ability of the enemy to do business so with this in mind, authorities in India began taking advice from the UK’s Police Intellectual Property Crime Unit (PIPCU) a couple of years ago.

After studying the model developed by PIPCU, India formed its Digital Crime Unit (DCU), which follows a multi-stage plan.

Initially, pirate sites and their partners are told to cease-and-desist. Next, complaints are filed with advertisers, who are asked to stop funding site activities. Service providers and domain registrars also receive a written complaint from the DCU, asking them to suspend services to the sites in question.

Last July, the DCU earmarked around 9,000 sites where pirated content was being made available. From there, 1,300 were placed on a shortlist for targeted action. Precisely how many have been contacted thus far is unclear but authorities are now reporting success.

According to local reports, the Maharashtra government’s Digital Crime Unit has managed to have 11 pirate site domains suspended following complaints from players in the entertainment industry.

As is often the case (and to avoid them receiving even more attention) the sites in question aren’t being named but according to Brijesh Singh, special Inspector General of Police in Maharashtra, the sites had a significant number of visitors.

Their domain registrars were sent a notice under Section 149 of the Code Of Criminal Procedure, which grants police the power to take preventative action when a crime is suspected. It’s yet to be confirmed officially but it seems likely that pirate sites utilizing local registrars were targeted by the authorities.

“Responding to our notice, the domain names of all these websites, that had a collective viewership of over 80 million, were suspended,” Singh said.

Laxman Kamble, a police inspector attached to the state government’s Cyber Cell, said the pilot project was launched after the government received complaints from Viacom and Star but back in January there were reports that the MPAA had also become involved.

Using the model pioneered by London’s PIPCU, 19 parameters were applied to the list of pirate sites in order to place them on the shortlist. These are reported to include the type of content being uploaded and downloaded, and the overall number of downloads.

Kamble reports that a further 89 websites, which have domains registered abroad but are very popular in India, are now being targeted. Whether overseas registrars will prove as compliant remains to be seen. After initial success, even PIPCU itself experienced problems keeping up the momentum with registrars.

In 2014, information obtained by TorrentFreak following a Freedom of Information request revealed that only five out of 70 domain registrars had complied with police requests to suspend domains.

A year later, PIPCU confirmed that suspending pirate domain names was no longer a priority for them after ICANN ruled that registrars don’t have to suspend domain names without a valid court order.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Amazon ECS Service Discovery

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/

Amazon ECS now includes integrated service discovery. This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to look up where they need to make connections based on the state of each service. You can see a demo of service discovery in an imaginary social networking app over at: https://servicediscovery.ranman.com/.

Service Discovery


Part of the transition to microservices and modern architectures involves having dynamic, autoscaling, and robust services that can respond quickly to failures and changing loads. Your services probably have complex dependency graphs of services they rely on and services they provide. A modern architectural best practice is to loosely couple these services by allowing them to specify their own dependencies, but this can be complicated in dynamic environments as your individual services are forced to find their own connection points.

Traditional approaches to service discovery like Consul, etcd, or ZooKeeper all solve this problem well, but they require provisioning and maintaining additional infrastructure or installing agents in your containers or on your instances. Previously, to ensure that services were able to discover and connect with each other, you had to configure and run your own service discovery system or connect every service to a load balancer. Now, you can enable service discovery for your containerized services in the ECS console, AWS CLI, or using the ECS API.

Introducing Amazon Route 53 Service Registry and Auto Naming APIs

Amazon ECS Service Discovery works by communicating with the Amazon Route 53 Service Registry and Auto Naming APIs. Since we haven’t talked about it before on this blog, I want to briefly outline how these Route 53 APIs work. First, some vocabulary:

  • Namespaces – A namespace specifies a domain name you want to route traffic to (e.g. internal, local, corp). You can think of it as a logical boundary within which services should be able to discover each other. You can create a namespace with a call to the aws servicediscovery create-private-dns-namespace command or in the ECS console. Namespaces are roughly equivalent to hosted zones in Route 53. A namespace contains services, our next vocabulary word.
  • Service – A service is a specific application or set of applications in your namespace like “auth”, “timeline”, or “worker”. A service contains service instances.
  • Service Instance – A service instance contains information about how Route 53 should respond to DNS queries for a resource.

Route 53 provides APIs to create namespaces, A records per task IP, and SRV records per task IP + port.

When we ask Route 53 for something like worker.corp, we should get back a set of possible IPs that could fulfill the request. If the application we’re connecting to exposes dynamic ports, then the calling application can easily query the SRV record to get more information.
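
As a rough illustration of that lookup, here’s a sketch using the dnspython library; it assumes the worker.corp name from the examples above and only resolves from inside a VPC associated with the private namespace:

import dns.resolver

# Each SRV record carries the port and target host for one registered task.
for rdata in dns.resolver.resolve("worker.corp", "SRV"):
    print("{}:{} (priority={}, weight={})".format(
        rdata.target, rdata.port, rdata.priority, rdata.weight))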

ECS service discovery is built on top of the Route 53 APIs and manages all of the underlying API calls for you. Now that we understand how the service registry works, let’s take a look at the ECS side to see service discovery in action.

Amazon ECS Service Discovery

Let’s launch an application with service discovery! First, I’ll create two task definitions: “flask-backend” and “flask-worker”. Both are simple AWS Fargate tasks with a single container serving HTTP requests. I’ll have flask-backend ask worker.corp to do some work and I’ll return the response as well as the address Route 53 returned for worker. Something like the code below:

@app.route("/")
namespace = os.getenv("namespace")
worker_host = "worker" + namespace
def backend():
    r = requests.get("http://"+worker_host)
    worker = socket.gethostbyname(worker_host)
    return "Worker Message: {]\nFrom: {}".format(r.content, worker)

 

Now, with my containers and task definitions in place, I’ll create a service in the console.

As I move to step two in the service wizard I’ll fill out the service discovery section and have ECS create a new namespace for me.

I’ll also tell ECS to monitor the health of the tasks in my service and add or remove them from Route 53 as needed. Then I’ll set a TTL of 10 seconds on the A records we’ll use.

I’ll repeat those same steps for my “worker” service and after a minute or so most of my tasks should be up and running.

Over in the Route 53 console I can see all the records for my tasks!

We can use the Route 53 service discovery APIs to list all of our available services and tasks and programmatically reach out to each one. We could easily extend to any number of services past just backend and worker. I’ve created a simple demo of an imaginary social network with services like “auth”, “feed”, “timeline”, “worker”, “user” and more here: https://servicediscovery.ranman.com/. You can see the code used to run that page on github.
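
As an illustration, walking the registry programmatically takes only a couple of boto3 calls. This is a sketch, not the code behind the demo site:

import boto3

sd = boto3.client("servicediscovery")

# List every registered service, then the instances behind each one.
for service in sd.list_services()["Services"]:
    print(service["Name"], service["Id"])
    for inst in sd.list_instances(ServiceId=service["Id"])["Instances"]:
        # Attributes typically include the registered IP and port.
        print("  ", inst["Id"], inst.get("Attributes", {}))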

Available Now
Amazon ECS service discovery is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). AWS Fargate is currently only available in US East (N. Virginia). When you use ECS service discovery, you pay for the Route 53 resources that you consume, including each namespace that you create and the lookup queries your services make. Container-level health checks are provided at no cost. For more information on pricing, check out the documentation.

Please let us know what you’ll be building or refactoring with service discovery either in the comments or on Twitter!

Randall

P.S. Every blog post I write is made with a tremendous amount of help from numerous AWS colleagues. To everyone that helped build service discovery across all of our teams – thank you :)!

New DDoS Reflection-Attack Variant

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/new_ddos_reflec.html

This is worrisome:

DDoS vandals have long intensified their attacks by sending a small number of specially designed data packets to publicly available services. The services then unwittingly respond by sending a much larger number of unwanted packets to a target. The best known vectors for these DDoS amplification attacks are poorly secured domain name system resolution servers, which magnify volumes by as much as 50 fold, and network time protocol, which increases volumes by about 58 times.

On Tuesday, researchers reported attackers are abusing a previously obscure method that delivers attacks 51,000 times their original size, making it by far the biggest amplification method ever used in the wild. The vector this time is memcached, a database caching system for speeding up websites and networks. Over the past week, attackers have started abusing it to deliver DDoSes with volumes of 500 gigabits per second and bigger, DDoS mitigation service Arbor Networks reported in a blog post.

Cloudflare blog post. BoingBoing post.

EDITED TO ADD (3/9): Brian Krebs covered this.

Improve the Operational Efficiency of Amazon Elasticsearch Service Domains with Automated Alarms Using Amazon CloudWatch

Post Syndicated from Veronika Megler original https://aws.amazon.com/blogs/big-data/improve-the-operational-efficiency-of-amazon-elasticsearch-service-domains-with-automated-alarms-using-amazon-cloudwatch/

A customer has been successfully creating and running multiple Amazon Elasticsearch Service (Amazon ES) domains to support their business users’ search needs across products, orders, support documentation, and a growing suite of similar needs. The service has become heavily used across the organization.  This led to some domains running at 100% capacity during peak times, while others began to run low on storage space. Because of this increased usage, the technical teams were in danger of missing their service level agreements.  They contacted me for help.

This post shows how you can set up automated alarms to warn when domains need attention.

Solution overview

Amazon ES is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time analytics capabilities along with the availability, scalability, and security that production workloads require.  The service offers built-in integrations with a number of other components and AWS services, enabling customers to go from raw data to actionable insights quickly and securely.

One of these other integrated services is Amazon CloudWatch. CloudWatch is a monitoring service for AWS Cloud resources and the applications that you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.

CloudWatch collects metrics for Amazon ES. You can use these metrics to monitor the state of your Amazon ES domains, and set alarms to notify you about high utilization of system resources.  For more information, see Amazon Elasticsearch Service Metrics and Dimensions.
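
As a quick illustration (separate from the sample scripts below), pulling one of these metrics takes a single boto3 call. The domain name and account ID here are placeholders:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Read the last hour of free storage for one domain, in five-minute buckets.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ES",
    MetricName="FreeStorageSpace",
    Dimensions=[
        {"Name": "DomainName", "Value": "iotfleet"},    # placeholder domain
        {"Name": "ClientId", "Value": "123456789012"},  # placeholder account ID
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)
print(stats["Datapoints"])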

While the metrics are automatically collected, the missing piece is how to set alarms on these metrics at appropriate levels for each of your domains. This post includes sample Python code to evaluate the current state of your Amazon ES environment, and to set up alarms according to AWS recommendations and best practices.

There are two components to the sample solution:

  • es-check-cwalarms.py: This Python script checks the CloudWatch alarms that have been set for all Amazon ES domains in a given account and region.
  • es-create-cwalarms.py: This Python script sets up a set of CloudWatch alarms for a single given domain.

The sample code can also be found in the amazon-es-check-cw-alarms GitHub repo. The scripts are easy to extend or combine, as described in the section “Extensions and Adaptations”.

Assessing the current state

The first script, es-check-cwalarms.py, is used to give an overview of the configurations and alarm settings for all the Amazon ES domains in the given region. The script takes the following parameters:

python es-checkcwalarms.py -h
usage: es-checkcwalarms.py [-h] [-e ESPREFIX] [-n NOTIFY] [-f FREE][-p PROFILE] [-r REGION]
Checks a set of recommended CloudWatch alarms for Amazon Elasticsearch Service domains (optionally, those beginning with a given prefix).
optional arguments:
  -h, --help   		show this help message and exit
  -e ESPREFIX, --esprefix ESPREFIX	Only check Amazon Elasticsearch Service domains that begin with this prefix.
  -n NOTIFY, --notify NOTIFY    List of CloudWatch alarm actions; e.g. ['arn:aws:sns:xxxx']
  -f FREE, --free FREE  Minimum free storage (MB) on which to alarm
  -p PROFILE, --profile PROFILE     IAM profile name to use
  -r REGION, --region REGION       AWS region for the domain. Default: us-east-1

The script first identifies all the domains in the given region (or, optionally, limits them to the subset that begins with a given prefix). It then starts running a set of checks against each one.

The script can be run from the command line or set up as a scheduled Lambda function. For example, one customer deemed it appropriate to run the script regularly to verify that alarms were correctly set for all domains. And because configuration changes (a cluster size increase to accommodate a larger workload is a common one) may require updates to the alarms, this approach automatically identifies alarms that are no longer set appropriately as domain configurations change.
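
If you take the scheduled-Lambda route, a thin handler can drive the script unchanged. The sketch below assumes the script file is packaged with the handler, and the flags shown are placeholders to match to your environment:

import sys
import runpy

def lambda_handler(event, context):
    # Run es-checkcwalarms.py as if it were invoked from the command line.
    sys.argv = ["es-checkcwalarms.py", "-r", "us-west-2", "-e", "test"]
    runpy.run_path("es-checkcwalarms.py", run_name="__main__")
    return {"status": "alarm checks complete"}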

The output shown below is the output for one domain in my account.

Starting checks for Elasticsearch domain iotfleet , version is 53
Iotfleet Automated snapshot hour (UTC): 0
Iotfleet Instance configuration: 1 instances; type:m3.medium.elasticsearch
Iotfleet Instance storage definition is: 4 GB; free storage calced to: 819.2 MB
iotfleet Desired free storage set to (in MB): 819.2
iotfleet WARNING: Not using VPC Endpoint
iotfleet WARNING: Does not have Zone Awareness enabled
iotfleet WARNING: Instance count is ODD. Best practice is for an even number of data nodes and zone awareness.
iotfleet WARNING: Does not have Dedicated Masters.
iotfleet WARNING: Neither index nor search slow logs are enabled.
iotfleet WARNING: EBS not in use. Using instance storage only.
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.yellow-Alarm ClusterStatus.yellow
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.red-Alarm ClusterStatus.red
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-CPUUtilization-Alarm CPUUtilization
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-JVMMemoryPressure-Alarm JVMMemoryPressure
iotfleet WARNING: Missing alarm!! ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0)
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-AutomatedSnapshotFailure-Alarm AutomatedSnapshotFailure
iotfleet Alarm: Threshold does not match: Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm Should be:  819.2 ; is 3000.0

The output messages fall into the following categories:

  • System overview (informational): The Amazon ES version and configuration, including instance type and number, storage, automated snapshot hour, etc.
  • Free storage: A calculation of the appropriate amount of free storage, based on the recommended 20% of total storage.
  • Warnings: Best practices that are not being followed for this domain. (For more about this, read on.)
  • Alarms: An assessment of the CloudWatch alarms currently set for this domain, against a recommended set.

The script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. Using the array allows alarm parameters (such as free space) to be updated within the code based on current domain statistics and configurations.

For a given domain, the script checks if each alarm has been set. If the alarm is set, it checks whether the values match those in the array esAlarms. In the output above, you can see three different situations being reported:

  • Alarm ok; definition matches. The alarm set for the domain matches the settings in the array.
  • Alarm: Threshold does not match. An alarm exists, but the threshold value at which the alarm is triggered does not match.
  • WARNING: Missing alarm!! The recommended alarm is missing.

All in all, the list above shows that this domain does not have a configuration that adheres to best practices, nor does it have all the recommended alarms.
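
The heart of that check is one CloudWatch lookup per recommended alarm. Here's a simplified sketch of the logic; the single entry shown is the alarm flagged as missing above, while the real script iterates over its full esAlarms array:

import boto3

cloudwatch = boto3.client("cloudwatch")

def check_alarm(domain, client_id, metric, threshold):
    # Find any alarms already set on this metric for this domain.
    existing = cloudwatch.describe_alarms_for_metric(
        MetricName=metric,
        Namespace="AWS/ES",
        Dimensions=[
            {"Name": "DomainName", "Value": domain},
            {"Name": "ClientId", "Value": client_id},
        ],
    )["MetricAlarms"]
    if not existing:
        print(domain, "WARNING: Missing alarm!!", metric)
        return
    for alarm in existing:
        if alarm["Threshold"] != threshold:
            print(domain, "Alarm: Threshold does not match. Should be:",
                  threshold, "; is", alarm["Threshold"])
        else:
            print(domain, "Alarm ok; definition matches.", alarm["AlarmName"], metric)

check_alarm("iotfleet", "123456789012", "ClusterIndexWritesBlocked", 1.0)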

Setting up alarms

Now that you know that the domains in their current state are missing critical alarms, you can correct the situation.

To demonstrate the script, set up a new domain named “ver” in us-west-2, specifying one node and a 10-GB EBS disk. Also, create an SNS topic in us-west-2 named “sendnotification” that sends you an email.

Run the second script, es-create-cwalarms.py, from the command line. This script creates (or updates) the desired CloudWatch alarms for the specified Amazon ES domain, “ver”.

python es-create-cwalarms.py -r us-west-2 -e test -c ver -n "['arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification']"
EBS enabled: True type: gp2 size (GB): 10 No Iops 10240  total storage (MB)
Desired free storage set to (in MB): 2048.0
Creating  Test-Elasticsearch-ver-ClusterStatus.yellow-Alarm
Creating  Test-Elasticsearch-ver-ClusterStatus.red-Alarm
Creating  Test-Elasticsearch-ver-CPUUtilization-Alarm
Creating  Test-Elasticsearch-ver-JVMMemoryPressure-Alarm
Creating  Test-Elasticsearch-ver-FreeStorageSpace-Alarm
Creating  Test-Elasticsearch-ver-ClusterIndexWritesBlocked-Alarm
Creating  Test-Elasticsearch-ver-AutomatedSnapshotFailure-Alarm
Successfully finished creating alarms!

As with the first script, this script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. This approach allows you to add or modify alarms based on your use case (more on that below).
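
Under the hood, each entry in that array becomes a single PutMetricAlarm call. Here's a stripped-down sketch for the free-storage alarm; the statistic and period are illustrative, and the account ID is a placeholder:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

cloudwatch.put_metric_alarm(
    AlarmName="Test-Elasticsearch-ver-FreeStorageSpace-Alarm",
    MetricName="FreeStorageSpace",
    Namespace="AWS/ES",
    Dimensions=[
        {"Name": "DomainName", "Value": "ver"},
        {"Name": "ClientId", "Value": "123456789012"},  # placeholder account ID
    ],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=5,
    ComparisonOperator="LessThanOrEqualToThreshold",
    Threshold=2048.0,  # desired free storage in MB, as calculated above
    AlarmActions=["arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification"],
)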

After running the script, navigate to Alarms on the CloudWatch console. You can see the set of alarms set up on your domain.

Because the “ver” domain has only a single node, cluster status is yellow, and that alarm is in an “ALARM” state. It’s already sent a notification that the alarm has been triggered.

What to do when an alarm triggers

After alarms are set up, you need to identify the correct action to take for each alarm, which depends on the alarm triggered. For ideas, guidance, and additional pointers to supporting documentation, see Get Started with Amazon Elasticsearch Service: Set CloudWatch Alarms on Key Metrics. For information about common errors and recovery actions to take, see Handling AWS Service Errors.

In most cases, the alarm triggers due to an increased workload. The likely action is to reconfigure the system to handle the increased workload, rather than reducing the incoming workload. Reconfiguring any backend store—a category of systems that includes Elasticsearch—is best performed when the system is quiescent or lightly loaded. Reconfigurations such as setting zone awareness or modifying the disk type cause Amazon ES to enter a “processing” state, potentially disrupting client access.

Other changes, such as increasing the number of data nodes, may cause Elasticsearch to begin moving shards, potentially impacting search performance on these shards while this is happening. These actions should be considered in the context of your production usage. For the same reason I also do not recommend running a script that resets all domains to match best practices.

Avoid the need to reconfigure during heavy workload by setting alarms at a level that allows a considered approach to making the needed changes. For example, if you identify that each weekly peak is increasing, you can reconfigure during a weekly quiet period.

While Elasticsearch can be reconfigured without being quiesced, it is not a best practice to automatically scale it up and down based on usage patterns. Unlike some other AWS services, I recommend against setting a CloudWatch action that automatically reconfigures the system when alarms are triggered.

There are other situations where the planned reconfiguration approach may not work, such as low or zero free disk space causing the domain to reject writes. If the business is dependent on the domain continuing to accept incoming writes and deleting data is not an option, the team may choose to reconfigure immediately.

Extensions and adaptations

You may wish to modify the best practices encoded in the scripts for your own environment or workloads. It’s always better to avoid situations where alerts are generated but routinely ignored. All alerts should trigger a review and one or more actions, either immediately or at a planned date. The following is a list of common situations where you may wish to set different alarms for different domains:

  • Dev/test vs. production
    You may have a different set of configuration rules and alarms for your dev and test environments than for production. For example, you may require zone awareness and dedicated masters for your production environment, but not for your development domains. Or, you may not have any alarms set in dev. For test environments that mirror your potential peak load, test to ensure that the alarms are appropriately triggered.
  • Differing workloads or SLAs for different domains
    You may have one domain with a requirement for superfast search performance, and another domain with a heavy ingest load that tolerates slower search response. Your reaction to slow response for these two workloads is likely to be different, so perhaps the thresholds for these two domains should be set at a different level. In this case, you might add a “max CPU utilization” alarm at 100% for 1 minute for the fast search domain, while the other domain only triggers an alarm when the average has been higher than 60% for 5 minutes. You might also add a “free space” rule with a higher threshold to reflect the need for more space for the heavy ingest load if there is danger that it could fill the available disk quickly.
  • “Normal” alarms versus “emergency” alarms
    If, for example, free disk space drops to 25% of total capacity, an alarm is triggered that indicates action should be taken as soon as possible, such as cleaning up old indexes or reconfiguring at the next quiet period for this domain. However, if free space drops below a critical level (20% free space), action must be taken immediately in order to prevent Amazon ES from setting the domain to read-only. Similarly, if the “ClusterIndexWritesBlocked” alarm triggers, the domain has already stopped accepting writes, so immediate action is needed. In this case, you may wish to set “laddered” alarms, where one threshold causes an alarm to be triggered to review the current workload for a planned reconfiguration, but a different threshold raises a “DefCon 3” alarm that immediate action is required.

The sample scripts provided here are a starting point, intended for you to adapt to your own environment and needs.

Running the scripts once identifies how far your current state is from your desired state and creates an initial set of alarms. Regularly re-running them captures changes in your environment over time, letting you adjust your alarms as your environment and configurations change. One customer has set them up to run nightly, and to automatically create and update alarms to match their preferred settings.

Removing unwanted alarms

Each CloudWatch alarm costs approximately $0.10 per month. You can remove unwanted alarms in the CloudWatch console, under Alarms. If you set up a “ver” domain above, remember to remove it to avoid continuing charges.
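
You can also clean them up programmatically. A short boto3 sketch, using alarm names that follow the pattern created above:

import boto3

# Delete the alarms created for the demo "ver" domain.
boto3.client("cloudwatch", region_name="us-west-2").delete_alarms(
    AlarmNames=[
        "Test-Elasticsearch-ver-ClusterStatus.yellow-Alarm",
        "Test-Elasticsearch-ver-FreeStorageSpace-Alarm",
    ]
)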

Conclusion

Setting CloudWatch alarms appropriately for your Amazon ES domains can help you avoid suboptimal performance and allow you to respond to workload growth or configuration issues well before they become urgent. This post gives you a starting point for doing so. The additional sleep you’ll get knowing you don’t need to be concerned about Elasticsearch domain performance will allow you to focus on building creative solutions for your business and solving problems for your customers.

Enjoy!


Additional Reading

If you found this post useful, be sure to check out Analyzing Amazon Elasticsearch Service Slow Logs Using Amazon CloudWatch Logs Streaming and Kibana and Get Started with Amazon Elasticsearch Service: How Many Shards Do I Need?


About the Author

Dr. Veronika Megler is a senior consultant at Amazon Web Services. She works with our customers to implement innovative big data, AI and ML projects, helping them accelerate their time-to-value when using AWS.

Free Electrons becomes Bootlin

Post Syndicated from jake original https://lwn.net/Articles/746345/rss

Longtime embedded Linux development company Free Electrons has just changed its name to Bootlin due to a trademark dispute (with “FREE SAS, a French telecom operator, known as the owner of the free.fr website”). It is possible that Free Electrons may lose access to its “free-electrons.com” domain name as part of the dispute, so links to the many resources that Free Electrons hosts (including documentation and conference videos) should be updated to use “bootlin.com”. “The services we offer are different, we target a different audience (professionals instead of individuals), and most of our communication efforts are in English, to reach an international audience. Therefore Michael Opdenacker and Free Electrons’ management believe that there is no risk of confusion between Free Electrons and FREE SAS.

However, FREE SAS has filed in excess of 100 oppositions and District Court actions against trademarks or name containing “free”. In view of the resources needed to fight this case, Free Electrons has decided to change name without waiting for the decision of the District Court.

This will allow us to stay focused on our projects rather than exhausting ourselves fighting a long legal battle.”

Skygofree: New Government Malware for Android

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/skygofree_new_g.html

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That’s not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn’t respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

12 B2 Power Tips for Experts and Developers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/advanced-cloud-storage-tips/

If you’ve been using B2 Cloud Storage for a while, you probably think you know all that you can do with it. But do you?

We’ve put together a list of blazing power tips for experts and developers that will take you to the next level. Take a look below.

If you’re new to B2, we have a list of power tips for you, too.
Visit 12 Power Tips for New B2 Users.

1    Manage File Versions

Use Lifecycle Rules on a Bucket to set how many days to keep files that are no longer the current version. This is a great way to manage the amount of space your B2 account is using.
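
As a sketch, a lifecycle rule is a small JSON object attached to the bucket. The field names below follow B2's lifecycle rules documentation, and the values are illustrative:

# Delete hidden (non-current) file versions 30 days after they are hidden.
# An empty prefix applies the rule to the whole bucket.
lifecycle_rule = {
    "fileNamePrefix": "",
    "daysFromUploadingToHiding": None,
    "daysFromHidingToDeleting": 30,
}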


2    Easily Stay on Top of Your B2 Account Limits

Set usage caps and get text/email alerts for your B2 account when you approach limits that you define.


3    Bring on Your Big Files

You can upload files as large as 10TB to B2.


4    You Can Use FedEx to Get Your Data into B2

If you have over 20TB of data, you can use Backblaze’s Fireball hard disk array to load large volumes of data directly into your B2 account. We ship a Fireball to you and you ship it back.


5    You Have Command-Line Control of All B2 Functions

You have complete control over B2 using our command line tool that is available for Macintosh, Windows, and Linux.


6    You Can Use Your Own Domain Name To Front a Public B2 Bucket

You can create a vanity URL for your B2 account.


7    See What’s Happening in Your Account with Graphical Reports

You can view graphical reports summarizing your B2 usage — transactions, downloads, averages, data stored — in your B2 account dashboard.


8    Create a B2 SDK

You can build your own B2 SDK for JVM-based or JVM-compatible languages using our B2 Java SDK on GitHub.


9    B2’s API is Easy to Use

B2’s API is similar to, but simpler than, Amazon’s S3 API, making it super easy for developers to integrate with B2 Cloud Storage.


10    View Code Examples To Get Your B2 Project Started

The B2 API is well documented and has code examples for cURL, Java, Python, Swift, Ruby, C#, and PHP. For example, here’s how to create a B2 Bucket.
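
Here's a rough Python equivalent of that create-bucket call. It assumes you've already called b2_authorize_account and captured the API URL, account ID, and auth token from its response; the values below are placeholders:

import requests

api_url = "https://api001.backblazeb2.com"  # from b2_authorize_account
auth_token = "YOUR_AUTH_TOKEN"
account_id = "YOUR_ACCOUNT_ID"

resp = requests.post(
    api_url + "/b2api/v1/b2_create_bucket",
    headers={"Authorization": auth_token},
    json={
        "accountId": account_id,
        "bucketName": "my-unique-bucket-name",  # bucket names are globally unique
        "bucketType": "allPrivate",             # or "allPublic"
    },
)
print(resp.json())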


11    Set the B2 Part Size as Low as 5 MB

When working with large files, the minimum file part size can be set as low as 5 MB or as high as 5 GB. This gives developers the ability to maximize the throughput of B2 data uploads and downloads. See Large Files and Downloading for more developer tips.
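
To get a feel for the trade-off, here's a quick back-of-the-envelope sketch; it assumes B2's limit of 10,000 parts per large file:

GB = 1024 ** 3
file_size = 10 * 1024 * GB     # a 10 TB upload, B2's maximum file size
min_part_size = 5 * 1024 ** 2  # the 5 MB floor
max_parts = 10000

# Smallest part size that still fits the whole file into 10,000 parts.
part_size = max(min_part_size, -(-file_size // max_parts))
print(part_size / GB)          # about 1.02 GB per part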


12    Your App or Device Can Work with B2, as well

Your B2 integration can be listed on Backblaze’s website. Visit Submit an Integration to get started.

Want to Learn More About B2?

You can find more information on B2 on our website and in our help pages.

The post 12 B2 Power Tips for Experts and Developers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.