Tag Archives: DNS

CloudFrunt – Identify Misconfigured CloudFront Domains

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/05/cloudfrunt-identify-misconfigured-cloudfront-domains/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

CloudFrunt is a Python-based tool for identifying misconfigured CloudFront domains. It uses DNS to look for CNAME records that can still be associated with a CloudFront distribution, which effectively allows for domain hijacking.
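
For a rough illustration of the kind of check CloudFrunt automates, the shell sketch below (not taken from the tool itself; the hostname is a placeholder) looks up a CNAME and probes whether any distribution still answers for that name:

    # Hypothetical example: does this CNAME point at CloudFront, and does a
    # distribution still claim the name? (sub.example.com is a placeholder.)
    TARGET="sub.example.com"
    CNAME=$(dig +short CNAME "$TARGET")

    if echo "$CNAME" | grep -q "cloudfront.net"; then
        # A 403 "Bad request" style response from CloudFront often suggests the
        # name is no longer attached to any distribution and may be claimable.
        CODE=$(curl -s -o /dev/null -w "%{http_code}" "http://$TARGET/")
        echo "$TARGET -> $CNAME (HTTP $CODE)"
    fi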

How CloudFrunt Works For Misconfigured CloudFront

CloudFront is a Content Delivery Network (CDN) provided by Amazon Web Services (AWS). CloudFront users create “distributions” that serve content from specific sources (an S3 bucket, for example).

Each CloudFront distribution has a unique endpoint for users to point their DNS records to (ex.

Read the rest of CloudFrunt – Identify Misconfigured CloudFront Domains now! Only available at Darknet.

Security updates for Thursday

Post Syndicated from ris original https://lwn.net/Articles/754145/rss

Security updates have been issued by Arch Linux (freetype2, libraw, and powerdns), CentOS (389-ds-base and kernel), Debian (php5, prosody, and wavpack), Fedora (ckeditor, fftw, flac, knot-resolver, patch, perl, and perl-Dancer2), Mageia (cups, flac, graphicsmagick, libcdio, libid3tag, and nextcloud), openSUSE (apache2), Oracle (389-ds-base and kernel), Red Hat (389-ds-base and flash-plugin), Scientific Linux (389-ds-base), Slackware (firefox and wget), SUSE (xen), and Ubuntu (wget).

Developer Accidentally Makes Available 390,000 ‘Pirated’ eBooks

Post Syndicated from Andy original https://torrentfreak.com/developer-accidentally-makes-available-390000-pirated-ebooks-180509/

Considering the effort it takes to set one up, pirate sites are clearly always intentional. One doesn’t make available hundreds of thousands of potentially infringing works accidentally.

Unless you’re developer Nick Janetakis, that is.

“About 2 years ago I was recording a video course that dealt with setting up HTTPS on a domain name. In all of my courses, I make sure to ‘really’ do it on video so that you can see the entire process from end to end,” Nick wrote this week.

“Back then I used nickjanetakis.com for all of my courses, so I didn’t have a dedicated domain name for the course I was working on.”

So instead, Nick set up an A record to point ssl.nickjanetakis.com to a DigitalOcean droplet (a cloud server), so that anyone accessing the sub-domain would reach the droplet and his content.

That was all very straightforward; all Nick needed to do was delete the A record after he was done, to ensure that he wasn’t pointing to someone else’s IP address when the droplet was eventually allocated to someone else. But he forgot, with some interesting side effects that didn’t come to light until years later.
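
A check like this is trivial to script; here is a minimal sketch of the kind of sanity test that would have caught the oversight (the sub-domain and droplet IP are placeholders, not Nick's real values):

    SUBDOMAIN="ssl.example.com"      # placeholder
    MY_DROPLET_IP="203.0.113.10"     # placeholder documentation address

    # If the A record still resolves but no longer points at an IP you control,
    # it is dangling and should be deleted.
    CURRENT_IP=$(dig +short A "$SUBDOMAIN")
    if [ -n "$CURRENT_IP" ] && [ "$CURRENT_IP" != "$MY_DROPLET_IP" ]; then
        echo "Dangling record: $SUBDOMAIN -> $CURRENT_IP"
    fi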

“I have Google Alerts set up so I get emailed when people link to my site. A few months ago I started to receive an absurd amount of notifications, but I ignored them. I chalked it up to ‘Google is probably on drugs’,” Nick explains.

However, the developer paid more attention when he received an email from a subscriber to his courses who warned that Nick’s site might have been compromised. A Google search revealed a worrying amount of apparently unauthorized eBook content being made available via Nick’s domain.

350,000 items? Whoops! (credit: Nick Janetakis)

Of course, Nick wasn’t distributing any content himself, but as far as Google was concerned, his domain was completely responsible. For confirmation, TorrentFreak looked up Nick’s domain on Google’s Transparency report and found at least nine copyright holders and two reporting organizations complaining of copyright infringement.

“No one from Google contacted me and none of the copyright infringement people reached out to me. I wish they would have,” Nick told us.

The earliest complaint was filed with Google on April 22, 2018, suggesting that the IP address/domain name collision causing the supposed infringement took place fairly recently. From there came a steady flow of reports, but not the tidal wave one might have expected given the volume of results.

Complaints courtesy of LumenDatabase.org

A little puzzled, TorrentFreak asked Nick if he’d managed to find out from DigitalOcean which pirates had been inadvertently using his domain. He said he’d asked, but the company wouldn’t assist.

“I asked DigitalOcean to get the email contact of the person who owned the IP address but they denied me. I just wanted to know for my own sanity,” he says.

With results now dropping off Google very quickly, TF carried out some tests using Google’s cache. None of the tests led us to any recognizable pirate site but something was definitely amiss.

The ‘pirate’ links (which can be found using a ‘site:ssl.nickjanetakis.com’ search in Google) open documents (sample) which contain links to the domain BookFreeNow.com, which looks very much like a pirate site but suggests it will only hand over PDF files after the user joins up, ostensibly for free.

However, experience with this kind of platform tells us that eventually, there would probably be some kind of cost involved, if indirect.



So, after clicking the registration link (or automatically, if you wait a few seconds) we weren’t entirely shocked when we were redirected briefly to an affiliate site that pays generously. From there we were sent to an advert server which caused a MalwareBytes alert, which was enough for us to back right out of there.

While something amazing might have sat behind the doors of BookFreeNow, we suspect that rather than being a regular pirate site, it’s actually set up to give the impression of being one, in order to generate business in other ways.

Certainly, copyright holders are suspicious of it, and have sent numerous complaints to Google.

In any event, Nick Janetakis should be very grateful that his domain is no longer connected to the platform since a basic pirate site, while troublesome, would be much more straightforward to explain. In the meantime, Nick has some helpful tips on how to avoid such a situation in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Do You Take Your VPN Security Seriously?

Post Syndicated from Ernesto original https://torrentfreak.com/do-you-take-your-vpn-security-seriously-180506/

In recent years there has been a massive boom in VPN usage, spurred on by security breaches and privacy leaks.

While prospective VPN users pay a lot of attention to the various policies VPN providers have when it comes to logging or leak protection, the user’s own responsibility is often entirely ignored.

When there’s a leak of sorts, such as the common WebRTC, IPv6, DNS or torrent client leaks, people are quick to point their finger at the VPN provider, even though they could have easily prevented the issues themselves.

It’s clear that a good VPN provider should do everything in its power to prevent leaks. At the very minimum, they should inform users about possible risks. Better yet, they should regularly test for vulnerabilities.

Still, VPN users themselves can also take a more proactive approach. The problem is that many people don’t take their own VPN security very seriously.

After signing up with a VPN service, many assume that they are perfectly protected. Aside from checking whether their IP address has changed, they expend very little effort to make sure that this is the case.

What new VPN users should do instead is a series of VPN leak tests. Not just one, but at least a couple. Also, this should be repeated on all devices and in all browsers that are used, just to make sure.

It would also be smart to redo these tests on a regular basis, as devices and applications change. If there are any problems, fix them, with or without help from the VPN provider.
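
As a starting point, a couple of quick command-line checks can be run before and after connecting to the VPN (the lookup services below are just common public examples, not endorsements):

    # Public IP as seen over HTTP -- this should change once the VPN is up.
    curl -s https://icanhazip.com

    # The address OpenDNS sees the query coming from -- if it still shows your
    # ISP connection while the VPN is connected, DNS traffic is escaping the tunnel.
    dig +short myip.opendns.com @resolver1.opendns.com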

Aside from testing how leak-safe the setup is, VPN users might want to read the documentation and setup guides their VPN service provides. What is the most secure protocol? Does the software have built-in leak protection? What about a kill-switch?

If you use a custom VPN application offered by the provider, it may come with built-in leak protection, but that’s not always the case.

Also, some providers offer these features but don’t have them enabled by default, as it may lead to various connectivity issues. Others leave it up to the user to secure their browsers and apps. These are all things that should be taken into account.

If there are any leaks, let your VPN provider know. They should fix them if they can, after all.

Similarly, torrent users should not forget to test whether their torrent client is set up correctly, and test for leaks there as well. This is easily overlooked by many.

While checking for leaks is crucial, things get even more complicated when it comes to anonymity.

Some people are extremely focused on choosing a “zero log” VPN to maintain their privacy, but then use the same VPN to log in to Google, Twitter, Facebook and other services. This links the VPN address to their personal account, creating extensive logs there. And that’s without mentioning the other privacy-sensitive and tracking data these services collect and store.

While most are not too worried about that, it shows that full privacy or anonymity is hard to accomplish, even if a VPN is secure.

The bottom line is, however, that both VPNs and their users should be vigilant. VPN providers should take responsibility to prevent or warn against possible leaks, but people should remember that a “zero-log” VPN really is worthless if the user hasn’t set it up correctly, or uses it the wrong way.

Do I leak offers a comprehensive and independent VPN leak test, but Google should be able to find dozens more.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

MPAA-Seized Popcorn Time Domain Now Redirects to Pirate Site

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-seized-popcorn-time-domain-now-redirects-to-pirate-site-180503/

Four years ago Popcorn Time took the Internet by storm.

The software amassed millions of users by offering BitTorrent-powered streaming in an easy-to-use Netflix-style interface.

While the original developers shut down their project after a few months, following pressure from Hollywood, others forked the application and took over.

PopcornTime.io swiftly became the main Popcorn Time fork. The spin-off soon had millions of users and updates were pushed out on a regular basis. At the end of 2015, however, this fork also disappeared from the web.

The MPAA took credit for the fall, announcing that it had filed a lawsuit against several people in Canada. In response to these legal threats, several key developers backed out.

Soon after, the MPAA also assumed control of the main domain name, ensuring that it could not fall into the wrong hands.

This worked well, initially, but this week we noticed that PopcornTime.io is active again. The domain now links to the pirate streaming site Stream.cr, which welcomes its new visitors with a special message.

Redirection landing page

“Notice: If you’re looking for Popcorn Time(App) for it’s P2P torrent streaming, it’s over at popcorntime.sh. Otherwise, if you’re looking for streaming. Welcome to StreamCR!” a message on the site reads.

This is odd, considering that the PopcornTime.io domain name is still registered to the MPAA.

Popcorntime.io Whois

Adding to the intrigue is the fact that the PopcornTime.io domain registrar is listed as MarkMonitor, which is a well-known brand protection company, often used to prevent domain troubles.

“Protect your critical assets by partnering with a corporate-only domain registrar who has a strong security culture and is committed to providing the most secure and reliable solution in the industry,” MarkMonitor writes.

However, since PopcornTime.io now links to a pirate site, something clearly went wrong.

It’s hard to say with certainty what happened. A likely option is that the domain’s nameservers, which point to DNS Made Easy, were not configured properly and that the people behind Stream.cr used that oversight to redirect the domain to their own site.

TorrentFreak spoke to a source unrelated to this case who says he was previously able to redirect traffic from a domain that was seized by the MPAA, simply by adding it to his own DNS Made Easy account. That worked, until the nameservers were updated to MarkMonitor’s DNS servers.

Whether the fault, in this instance, lies with the MPAA, MarkMonitor, or another party is hard to say without further details.

In any case, the MPAA is not going to be happy with the end result, and neither is MarkMonitor. The Stream.cr operators, meanwhile, are probably celebrating and they can enjoy the free traffic while it lasts.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Sci-Hub ‘Pirate Bay For Science’ Security Certs Revoked by Comodo

Post Syndicated from Andy original https://torrentfreak.com/sci-hub-pirate-bay-for-science-security-certs-revoked-by-comodo-ca-180503/

Sci-Hub is often referred to as the “Pirate Bay of Science”. Like its namesake, it offers masses of unlicensed content for free, mostly against the wishes of copyright holders.

While The Pirate Bay will index almost anything, Sci-Hub is dedicated to distributing tens of millions of academic papers and articles, something which has turned it into a target for publishing giants like Elsevier.

Sci-Hub and its Kazakhstan-born founder Alexandra Elbakyan have been under sustained attack for several years but more recently have been fending off an unprecedented barrage of legal action initiated by the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry.

After winning a default judgment for $4.8 million in copyright infringement damages last year, ACS was further granted a broad injunction.

It required various third-party services (including domain registries, hosting companies and search engines) to stop facilitating access to the site. This plunged Sci-Hub into a game of domain whac-a-mole, one that continues to this day.

Determined to head Sci-Hub off at the pass, ACS obtained additional authority to tackle the evasive site and any new domains it may register in the future.

While Sci-Hub has been hopping around domains for a while, this week a new development appeared on the horizon. Visitors to some of the site’s domains were greeted with errors indicating that the domains’ security certificates had been revoked.

Tests conducted by TorrentFreak revealed clear revocations on Sci-Hub.hk and Sci-Hub.nz, both of which returned the error ‘NET::ERR_CERT_REVOKED’.

Certificate revoked

These certificates were first issued and then revoked by Comodo CA, the world’s largest certification authority. TF contacted the company, which confirmed that it had been forced to take action against Sci-Hub.

“In response to a court order against Sci-Hub, Comodo CA has revoked four certificates for the site,” Jonathan Skinner, Director, Global Channel Programs at Comodo CA informed TorrentFreak.

“By policy Comodo CA obeys court orders and the law to the full extent of its ability.”

Comodo refused to confirm any additional details, including whether these revocations were anything to do with the current ACS injunction. However, Susan R. Morrissey, Director of Communications at ACS, told TorrentFreak that the revocations were indeed part of ACS’ legal action against Sci-Hub.

“[T]he action is related to our continuing efforts to protect ACS’ intellectual property,” Morrissey confirmed.

Sci-Hub operates multiple domains (an up-to-date list is usually available on Wikipedia) that can be switched at any time. At the time of writing, the domain sci-hub.ga returns ‘ERR_SSL_VERSION_OR_CIPHER_MISMATCH’ while the .CN and .GS variants both have Comodo certificates that expired last year.

When TF first approached Comodo earlier this week, Sci-Hub’s certificates with the company hadn’t been completely wiped out. For example, the domain https://sci-hub.tw operated perfectly, with an active and non-revoked Comodo certificate.

Still in the game…but not for long

By Wednesday, however, the domain was returning the now-familiar “revoked” message.

These domain issues are the latest technical problems to hit Sci-Hub as a result of the ACS injunction. In February, Cloudflare terminated service to several of the site’s domains.

“Cloudflare will terminate your service for the following domains sci-hub.la, sci-hub.tv, and sci-hub.tw by disabling our authoritative DNS in 24 hours,” Cloudflare told Sci-Hub.

While ACS has certainly caused problems for Sci-Hub, the platform is extremely resilient and remains online.

The domains https://sci-hub.is and https://sci-hub.nu are fully operational with certificates issued by Let’s Encrypt, a free and open certificate authority supported by the likes of Mozilla, EFF, Chrome, Private Internet Access, and other prominent tech companies.

It’s unclear whether these certificates will be targeted in the future but Sci-Hub doesn’t appear to be in the mood to back down.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

IoT Inspector Tool from Princeton

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/iot_inspector_t.html

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They’ve already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers’ servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.
  • Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest’s website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.

Their first two findings are that “Many IoT devices lack basic encryption and authentication” and that “User behavior can be inferred from encrypted IoT device traffic.” No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

Danish Traffic to Pirate Sites Increases 67% in Just a Year

Post Syndicated from Andy original https://torrentfreak.com/danish-traffic-to-pirate-sites-increases-67-in-just-a-year-180501/

For close to 20 years, rightsholders have tried to stem the tide of mainstream Internet piracy. Yet despite increasingly powerful enforcement tools, infringement continues on a grand scale.

While the problem is global, rightsholder groups often zoom in on their home turf, to see how the fight is progressing locally. Covering Denmark, the Rights Alliance Data Report 2017 paints a fairly pessimistic picture.

Published this week, the industry study – which uses SimilarWeb and MarkMonitor data – finds that Danes visited 2,000 leading pirate sites 596 million times in 2017. That represents a 67% increase over the 356 million visits to unlicensed platforms made by citizens during 2016.

The report notes that, at least in part, this explosive growth can be attributed to mobile-compatible sites and services, which make it easier than ever to consume illicit content on the move, as well as at home.

In a sea of unauthorized streaming sites, Rights Alliance highlights one platform above all the others as a particularly bad influence in 2017 – 123movies (also known as GoMovies and GoStream, among others).

“The popularity of this service rose sharply in 2017 from 40 million visits in 2016 to 175 million visits in 2017 – an increase of 337 percent, of which most of the traffic originates from mobile devices,” the report notes.

123movies recently announced its closure but before that the platform was subjected to web-blocking in several jurisdictions.

Rights Alliance says that Denmark has one of the most effective blocking systems in the world but that still doesn’t stop huge numbers of people from consuming pirate content from sites that aren’t yet blocked.

“Traffic to infringing sites is overwhelming, and therefore blocking a few sites merely takes the top of the illegal activities,” Rights Alliance chief Maria Fredenslund informs TorrentFreak.

“Blocking is effective by stopping 75% of traffic to blocked sites but certainly, an upscaled effort is necessary.”

Rights Alliance also views the promotion of legal services as crucial to its anti-piracy strategy, so when people visit a blocked site, they’re also directed towards legitimate platforms.

“That is why we are working at the moment with Denmark’s Ministry of Culture and ISPs on a campaign ‘Share With Care 2′ which promotes legal services e.g. by offering a search function for legal services which will be placed in combination with the signs that are put on blocked websites,” the anti-piracy group notes.

But even with such measures in place, the thirst for unlicensed content is great. In 2017 alone, 500 of the most popular films and TV shows were downloaded from P2P networks like BitTorrent more than 15 million times from Danish IP addresses, up from 11.9 million in 2016.

Given the dramatic rise in visits to pirate sites overall, the suggestion is that plenty of consumers are still getting through. Rights Alliance says that the reach of blocking is also hampered by people who don’t use their ISP’s DNS service, which is the method used to block sites in Denmark.

Additionally, interest in VPNs and similar anonymization and bypass-capable technologies is on the increase. Between 3.5% and 5% of Danish Internet users currently use a VPN, a number that’s expected to go up. Furthermore, Rights Alliance reports greater interest in “closed” pirate communities.

“The data is based on closed [BitTorrent] networks. We also address the challenges with private communities on Facebook and other [social media] platforms,” Fredenslund explains.

“Due to the closed doors of these platforms it is not possible for us to say anything precisely about the amount of infringing activities there. However, we receive an increasing number of notices from our members who discover that their products are distributed illegally and also we do an increased monitoring of these platforms.”

But while more established technologies such as torrents and regular web-streaming continue in considerable volumes, newer IPTV-style services accessible via apps and dedicated platforms are also gaining traction.

“The volume of visitors to these services’ websites has been sharply rising in 2017 – an increase of 84 percent from January to December,” Rights Alliance notes.

“Even though the number of visitors does not say anything about actual consumption, as users usually only visit pages one time to download the program, the number gives an indication that the interest in IPTV is increasing.”

To combat this growth market, Rights Alliance says it wants to establish web-blockades against sites hosting the software applications.

Also on the up are visits to platforms offering live sports illegally. In 2017, Danish IP addresses made 2.96 million visits to these services, corresponding to almost 250,000 visits per month and representing an annual increase of 28%.

Rights Alliance informs TF that in future a ‘live’ blocking mechanism similar to the one used by the Premier League in the UK could be deployed in Denmark.

“We already have a dynamic blocking system, and we see an increasing demand for illegal TV products, so this could be a natural next step,” Fredenslund explains.

Another small but perhaps significant detail is how users are accessing pirate sites. According to the report, large volumes of people are now visiting platforms directly, with more than 50% doing so in preference to referrals from search engines such as Google.

In terms of deterrence, the Rights Alliance report sticks to the tried-and-tested approaches seen so often in the anti-piracy arena.

Firstly, the group notes that it’s increasingly encountering people who pay for legal services such as Netflix and Spotify and so believe that entitles them to grab something extra from a pirate site. However, in common with similar organizations globally, the group counters that pirate sites can serve malware or have other nefarious business interests behind the scenes, so people should stay away.

Whether significant numbers will heed this advice remains to be seen, but if a 67% increase last year is any predictor of the future, piracy is here to stay – and then some. Rights Alliance says it is ready for the challenge but will need some assistance to achieve its goals.

“As it is evident from the traffic data, criminal activities are not something that we, private companies (right holders in cooperation with ISPs), can handle alone,” Fredenslund says.

“Therefore, we are very pleased that DK Government recently announced that the IP taskforce which was set down as a trial period has now been made permanent. In that regard it is important and necessary that the police will also obtain the authority to handle blocking of massively infringing websites. Police do not have the authority to carry out blocking as it is today.”

The full report is available here (Danish, pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

MyEtherWallet DNS Hack Causes 17 Million USD User Loss

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/04/myetherwallet-dns-hack-causes-17-million-usd-user-loss/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Big news in the crypto scene this week was that the MyEtherWallet DNS hack managed to collect about $17 million USD worth of Ethereum in just a few hours.

The hack itself could have been MUCH bigger, as it actually involved compromising 1,300 Amazon AWS Route 53 DNS IP addresses. Fortunately, only MEW was targeted, so the damage was contained within the cryptosphere (as far as we know, anyway).

Read the rest of MyEtherWallet DNS Hack Causes 17 Million USD User Loss now! Only available at Darknet.

ISP Sued For Breaching User Privacy After Blocking Pirate Sites

Post Syndicated from Andy original https://torrentfreak.com/isp-sued-for-breaching-user-privacy-after-blocking-pirate-sites-180428/

After hinting at moves to curb online piracy last month, on April 13 the Japanese government announced emergency measures to target websites hosting pirated manga, anime and other types of content.

In common with dozens of counterparts around the world, the government said it favored site-blocking as the first line of defense. However, with no specific legislation to fall back on, authorities asked local ISPs if they’d come along for the ride voluntarily. On Monday, the Nippon Telegraph and Telephone Corp. (NTT) announced that it would.

“We have taken short-term emergency measures until legal systems on site-blocking are implemented,” NTT said in a statement.

NTT Communications Corp., NTT Docomo Inc., and NTT Plala Inc. said they would target three sites highlighted by the government – Mangamura, AniTube! and MioMio – which together have a huge following in Japan.

The service providers added that at least in the short-term, they would prevent access to the sites using DNS blocking and would restrict access to other sites if requested to do so by the government. But, just a few days on, NTT is already facing problems.

Lawyer Yuichi Nakazawa has now launched legal action against NTT, demanding that the corporation immediately ends its site-blocking operations.

The complaint, filed at the Tokyo District Court, notes that the lawyer uses an Internet connection provided by NTT. Crucially, it also states that in order to block access to the sites in question, NTT would need to spy on customers’ Internet connections to find out if they’re trying to access the banned sites.

The lawyer informs TorrentFreak that the ISP’s decision prompted him into action.

“NTT’s decision was made arbitrarily on the site without any legal basis. No matter how legitimate the objective of copyright infringement is, it is very dangerous,” Nakazawa explains.

“I felt that ‘freedom,’ which is an important value of the Internet, was threatened. Actually, when the interruption of communications had begun, the company thought it would be impossible to reverse the situation, so I filed a lawsuit at this stage.”

Breaches of privacy could present a significant problem under Japanese law. The Telecommunications Business Act guarantees privacy of communications and prevents censorship, as does Article 21 of the Constitution.

“The secrecy of communications being handled by a telecommunications carrier shall not be violated,” the Telecommunications Business Act states, adding that “no communications being handled by a telecommunications carrier shall be censored.”

The Constitution is also clear, stating that “no censorship shall be maintained, nor shall the secrecy of any means of communication be violated.”

For his part, lawyer Yuichi Nakazawa is also concerned that his contract with the ISP is being breached.

“There is an Internet connection agreement between me and NTT. I am a customer of NTT. There is no provision in the contract between me and NTT to allow arbitrary interruption of communications,” he explains.

Nakazawa doesn’t appear to be against site-blocking per se, he’s just concerned that relevant laws and agreements are being broken.

“It is necessary to restrict sites of pirated publications but that does not mean you can do anything,” Nakazawa said, as quoted by Mainichi. “We should have sufficient discussions for an appropriate measure, including revising the law.”

The question of whether site-blocking does indeed represent an invasion of privacy will probably come down to how the ISP implements it and how that is interpreted by the courts.

A source familiar with the situation told TF that spying on user connections is clearly a problem but the deployment of an outer network firewall rule that simply prevents traffic passing through might be viewed differently.

Such a rule would provide no secret or private information that wasn’t already available to the ISP when the customer requested a banned site through a web browser, although it still falls foul of the “no censorship” requirements of both the Constitution and Telecommunications Business Act.

NTT Communications has declined to comment on the lawsuit but says it had no plans to backtrack on plans to block the sites. Earlier this week, SoftBank Corp., another ISP considering a blockade, expressed concerns that site-blocking has the potential to infringe secrecy of communications rules.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Enhanced Domain Protections for Amazon CloudFront Requests

Post Syndicated from Colm MacCarthaigh original https://aws.amazon.com/blogs/security/enhanced-domain-protections-for-amazon-cloudfront-requests/

Over the coming weeks, we’ll be adding enhanced domain protections to Amazon CloudFront. The short version is this: the new measures are designed to ensure that requests handled by CloudFront are handled on behalf of legitimate domain owners.

Using CloudFront to receive traffic for a domain you aren’t authorized to use is already a violation of our AWS Terms of Service. When we become aware of this type of activity, we deal with it behind the scenes by disabling abusive accounts. Now we’re integrating checks directly into the CloudFront API and Content Distribution service, as well.

Enhanced Protection against Dangling DNS entries
To use CloudFront with your domain, you must configure your domain to point at CloudFront. You may use a traditional CNAME, or an Amazon Route 53 “ALIAS” record.

A problem can arise if you delete your CloudFront distribution, but leave your DNS still pointing at CloudFront, popularly known as a “dangling” DNS entry. Thankfully, this is very rare, as the domain will no longer work, but we occasionally see customers who leave their old domains dormant. This can also happen if you leave this kind of “dangling” DNS entry pointing at other infrastructure you no longer control. For example, if you leave a domain pointing at an IP address that you don’t control, then there is a risk that someone may come along and “claim” traffic destined for your domain.

In an even more rare set of circumstances, an abuser can exploit a subdomain of a domain that you are actively using. For example, if a customer left “images.example.com” dangling and pointing to a deleted CloudFront distribution which is no longer in use, but they still actively use the parent domain “example.com”, then an abuser could come along and register “images.example.com” as an alternative name on their own distribution and claim traffic that they aren’t entitled to. This also means that cookies may be set and intercepted for HTTP traffic potentially including the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution.

Of course, the best fix for this kind of risk is not to leave dangling DNS entries in the first place. Earlier in February, 2018, we added a new warning to our systems. With this warning, if you remove an alternate domain name from a distribution, you are reminded to delete any DNS entries that may still be pointing at CloudFront.
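
A quick way to act on that warning is to check what the hostname still resolves to; a minimal sketch, with images.example.com as a placeholder:

    # If this still returns a *.cloudfront.net name after the alternate domain
    # (or the distribution) has been removed, the DNS entry is dangling and
    # should be deleted.
    dig +short CNAME images.example.com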

We also have long-standing checks in the CloudFront API that ensure this kind of domain claiming can’t occur when you are using wildcard domains. If you attempt to add *.example.com to your CloudFront distribution, but another account has already registered www.example.com, then the attempt will fail.

With the new enhanced domain protection, CloudFront will now also check your DNS whenever you remove an alternate domain. If we determine that the domain is still pointing at your CloudFront distribution, the API call will fail and no other accounts will be able to claim this traffic in the future.

Enhanced Protection against Domain Fronting
CloudFront will also soon be implementing enhanced protections against so-called “Domain Fronting”. Domain Fronting is when a non-standard client makes a TLS/SSL connection to a certain name, but then makes an HTTPS request for an unrelated name. For example, the TLS connection may connect to “www.example.com” but then issue a request for “www.example.org”.
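
For illustration, this is the sort of mismatch a non-standard client creates, shown with curl against the same placeholder names (a sketch of the technique, not a working bypass):

    # TLS (and SNI) negotiates with www.example.com, but the HTTP Host header
    # asks the server to route the request to www.example.org instead.
    curl -sv https://www.example.com/ -H "Host: www.example.org"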

In certain circumstances this is normal and expected. For example, browsers can re-use persistent connections for any domain that is listed in the same SSL Certificate, and these are considered related domains. But in other cases, tools including malware can use this technique between completely unrelated domains to evade restrictions and blocks that can be imposed at the TLS/SSL layer.

To be clear, this technique can’t be used to impersonate domains. The clients are non-standard and are working around the usual TLS/SSL checks that ordinary clients impose. But clearly, no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain. Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties.

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

Aussie Federal Court Orders ISPs to Block Pirate IPTV Service

Post Syndicated from Andy original https://torrentfreak.com/aussie-federal-court-orders-isps-to-block-pirate-iptv-service-180427/

After successfully applying for ISP blocks against dozens of traditional torrent and streaming portals, Village Roadshow and a coalition of movie studios switched tack last year.

With the threat of pirate subscription IPTV services looming large, Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount targeted HDSubs+ (also known as PressPlayPlus), a fairly well-known service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee.

The injunction, which was filed last October, targets Australia’s largest ISPs including Telstra, Optus, TPG, and Vocus, plus subsidiaries.

Unlike blocking injunctions targeting regular sites, the studios sought to have several elements of HD Subs+ infrastructure rendered inaccessible, so that its sales platform, EPG (electronic program guide), software (such as an Android and set-top box app), updates, and sundry other services would fail to operate in Australia.

After a six month wait, the Federal Court granted the application earlier today, compelling Australia’s ISPs to block “16 online locations” associated with the HD Subs+ service, rendering its TV services inaccessible Down Under.

“Each respondent must, within 15 business days of service of these orders, take reasonable steps to disable access to the target online locations,” said Justice Nicholas, as quoted by ZDNet.

A small selection of channels in the HDSubs+ package

The ISPs were given flexibility in how to implement the ban, with the Judge noting that DNS blocking, IP address blocking or rerouting, URL blocking, or “any alternative technical means for disabling access”, would be acceptable.

The rightsholders are required to pay a fee of AU$50 for each domain they want to block, but Village Roadshow says it doesn’t mind doing so, since blocking is in the “public interest”. Continuing a pattern established last year, none of the ISPs showed up to the judgment.

A similar IPTV blocking application was filed by Hong Kong-based broadcaster Television Broadcasts Limited (TVB) last year.

TVB wants ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

The application was previously heard alongside the HD Subs+ case but will now be handled separately following complications. In April it was revealed that TVB not only wants to block Internet locations related to the technical operation of the service, but also hosting sites that fulfill a role similar to that of Google Play or Apple’s App Store.

TVB wants to have these app marketplaces blocked by Australian ISPs, which would not only render the illicit apps inaccessible to the public but all of the non-infringing ones too.

Justice Nicholas will now have to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, there is also a question of whether China-focused live programming has copyright status in Australia. An additional hearing is scheduled for May 2 for these matters to be addressed.

Also on Friday, Foxtel filed yet another blocking application targeting “15 online locations” involving 27 domain names connected to traditional BitTorrent and streaming services.

According to ComputerWorld the injunction targets the same set of ISPs but this time around, Foxtel is trying to save on costs.

The company doesn’t want to have expert witnesses present in court, doesn’t want to stage live demos of websites, and would like to rely on videos and screenshots instead. Foxtel also says that if the ISPs agree, it won’t serve its evidence on them as it has done previously.

The company asked Justice Nicholas to deal with the injunction application “on paper” but he declined, setting a hearing for June 18 but accepting screenshots and videos as evidence.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).

You associate DNS-VPC with the unique hosted zone in each of the participating accounts. This allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in the private hosted zones in participating accounts.

The following diagram shows how the various services work together:
 

Figure 1: Diagram showing the relationship between all the various services

 

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 

Figure 2: How domain resolution across accounts works

 

  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server. The request is forwarded to the DNS server defined in the DHCP option set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: The on-premises DNS server, using a conditional forwarder, forwards the lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In one of those posts, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS, and to forward example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53 (see the CLI sketch after this list). Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate VPC(s) in each participating account with the local private hosted zone.
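
If you prefer to script the hosted zone portion of these steps, a minimal CLI sketch might look like the following (the zone name, region, and VPC ID are placeholders; creating the zone with a VPC attached makes it private and covers the first association in one call):

    # Run in a participating account.
    aws route53 create-hosted-zone \
        --name pa1.awscloud.com \
        --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
        --caller-reference pa1-zone-$(date +%s)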

The next step is to change the default DNS servers on each VPC using a DHCP option set:

  1. Follow these steps to create a new DHCP option set. Make sure the DNS Servers field contains the private IP addresses of the two AWS Managed Microsoft AD servers that were created in DNS-VPC:
     
    The "Create DHCP options set" dialog box

    Figure 3: The “Create DHCP options set” dialog box

     

  2. Follow these steps to assign the DHCP option set to your VPC(s) in each participating account (a CLI sketch of both steps follows below).
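
A hedged CLI equivalent of the two console steps above (the IP addresses and resource IDs are placeholders; substitute the two AWS Managed Microsoft AD DNS addresses from your own DNS-VPC):

    # Create a DHCP option set pointing at the AWS Managed Microsoft AD DNS
    # servers in DNS-VPC (placeholder addresses shown).
    aws ec2 create-dhcp-options \
        --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.0.11"

    # Attach the new option set to the participating account's VPC.
    aws ec2 associate-dhcp-options \
        --dhcp-options-id dopt-0123456789abcdef0 \
        --vpc-id vpc-0123456789abcdef0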

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.

Step 4: Set up on-premises DNS servers

This step is only necessary if you want to resolve AWS private domains from on-premises servers. The task comes down to configuring on-premises forwarders to send DNS queries for all domains in the awscloud.com zone to AWS Managed Microsoft AD in DNS-VPC.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that could also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

No, Ray Ozzie hasn’t solved crypto backdoors

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved-crypto.html

According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn’t. He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

The vault doesn’t scale

Yes, Apple has a vault where they’ve successfully protected important keys. No, it doesn’t mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie’s solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie’s proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency service. Attackers can create fraudulent SSL certificates with enough effort. We’ve got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it’s not catastrophic.

But with Ozzie’s scheme, equivalent attacks would be catastrophic, as it would lead to unlocking the phone and stealing all of somebody’s secrets.

In particular, consider what would happen if LetsEncrypt’s certificate was stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie’s master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works — but then his scheme includes none of the many protections necessary to make SSL work.

What I’m trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down — quickly. We have so much experience with failure at scale that we can judge Ozzie’s scheme as woefully incomplete. It’s not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the “One Time Pad”. It can’t ever be broken, provably so with mathematics.

It’s also perfectly useless, as it’s not something humans can use. That’s why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather’s knee — he was a WW II codebreaker who broke German messages trying to futz with One Time Pads).

The same is true with Ozzie’s scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don’t know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can’t be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren’t. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, and stealing millions. Most banks/brokers require additional verification before doing such transfers.

Let me repeat: Ozzie has only solved the part we already know how to solve. He hasn’t addressed these issues that confound us.

We still can’t secure security, much less secure backdoors

We already know how to decrypt iPhones: just wait a year or two for somebody to discover a vulnerability. The FBI claims it’s “going dark”, but that’s only for timely decryption of phones. If they are willing to wait a year or two, a vulnerability will eventually be found that allows decryption.

That’s what’s happened with the “GrayKey” device that’s been all over the news lately. Apple is fixing it so that it won’t work on new phones, but it works on old phones.

Ozzie’s solution is based on the assumption that iPhones are already secure against things like GrayKey. Like his assumption “if Apple already has a vault for private keys, then we have such vaults for backdoor keys”, Ozzie is saying “if Apple already had secure hardware/software to secure the phone, then we can use the same stuff to secure the backdoors”. But we don’t really have secure vaults and we don’t really have secure hardware/software to secure the phone.

Again, to stress this point, Ozzie is solving the part we already know how to solve, but ignoring the stuff we don’t know how to solve. His solution is insecure for the same reason phones are already insecure.

Locked phones aren’t the problem

Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption a crime, criminals will still use it.

That leads to a strange situation that the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.

In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them.

We know how to get Apple to give us a backdoor: just pass a law forcing them to. It may look like Ozzie’s scheme, it may be something more secure designed by Apple’s engineers. Sure, it will weaken security on the phone for everyone, but those who truly care will just install Signal. But again we are back to the problem that Ozzie’s solving the problem we know how to solve while ignoring the much larger problem, that of preventing people from installing their own encryption.

The FBI isn’t necessarily the problem

Ozzie phrases his solution in terms of U.S. law enforcement. Well, what about Europe? What about Russia? What about China? What about North Korea?

Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.

Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn’t see what he really is: the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion people living under oppressive governments around the world.

Conclusion

Ozzie pretends the problem is political, that he’s created a solution that appeases both sides. He hasn’t. He’s solved the problem we already know how to solve. He’s ignored all the problems we struggle with, the problems we claim make secure backdoors essentially impossible. I’ve listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he’s going to have to work harder to appease doubting cryptographers.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, Slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The term “bot” was the 4th most registered domain keyword within the .COM TLD in 2016, with more than 6,000 domains registered per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhance SEO performance.

At the time of this writing, .BOT domains start at $75 each and must be verified and published with a supported tool such as Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time, and if your favorite bot framework isn’t supported, feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex so I’ll select that in the drop down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else I would need to follow the unique verification instructions for that particular framework.

To verify my Lex bot I need to give the Amazon Registry permissions to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read only permissions to our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it, then I’ll click Create to create the role and be transported to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.
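
If you would rather script this step than click through the console, here is a rough boto3 sketch of the same role creation. The REGISTRY_ACCOUNT_ID and EXTERNAL_ID values are placeholders for the numbers shown on the registration page, and the role name simply matches the one used above.

```python
# Sketch: create the cross-account verification role with boto3.
# Replace REGISTRY_ACCOUNT_ID and EXTERNAL_ID with the values from the
# amazonregistry.com registration page.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::REGISTRY_ACCOUNT_ID:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "EXTERNAL_ID"}},
    }],
}

role = iam.create_role(
    RoleName="DotBotCrossAccountVerifyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Lets the Amazon Registry invoke my Lex bot to verify it",
)

# Attach the AWS managed read-only policy for Amazon Lex.
iam.attach_role_policy(
    RoleName="DotBotCrossAccountVerifyRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonLexReadOnly",
)

print(role["Role"]["Arn"])  # copy this ARN into the registration page
```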

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet, you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed, but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally, I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.

 

After I click continue, we’re taken to EnCirca’s website to finalize the registration, and with any luck, within a few minutes of purchasing and completing the registration we should receive an email with some good news:

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use it to host my new domain. The first thing I’ll do is navigate to the Route 53 console and create a hosted zone with the same name as my domain.


Great! Now, I need to take the Name Server (NS) records that Route53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.
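
For anyone who prefers the API to the console, a rough boto3 equivalent of those last two steps looks like this (the domain and the IP address in the record are placeholders of mine):

```python
# Sketch: create the hosted zone, read back the NS records for the registrar,
# and add a placeholder A record.
import time
import boto3

route53 = boto3.client("route53")

zone = route53.create_hosted_zone(
    Name="whereml.bot",
    CallerReference=str(time.time()),  # any unique string
)

# Hand these name servers to EnCirca as the authoritative NS for the domain.
print(zone["DelegationSet"]["NameServers"])

route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "whereml.bot",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
        },
    }]},
)
```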

Next Steps

  • I could and should improve the security of my site by creating TLS certificates for people who intend to access my domain over TLS. Luckily, with AWS Certificate Manager (ACM) this is extremely straightforward, and I can get my subdomains and root domain verified in just a few clicks (see the sketch after this list).
  • I could create a CloudFront distribution to front an S3-hosted static single-page application for my entire chatbot and invoke Amazon Lex with an Amazon Cognito identity right from the browser.
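
As promised above, here is a rough boto3 sketch of requesting that certificate; the domain is the one from this walkthrough, and using DNS validation (and us-east-1, for a future CloudFront distribution) are assumptions on my part.

```python
# Sketch: request an ACM certificate for the root domain plus a wildcard,
# validated via DNS records in the Route 53 hosted zone.
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # us-east-1 if it will front CloudFront

cert = acm.request_certificate(
    DomainName="whereml.bot",
    SubjectAlternativeNames=["*.whereml.bot"],
    ValidationMethod="DNS",
)

print(cert["CertificateArn"])  # then create the CNAME validation records in Route 53
```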

Randall

Japan ISP Says it Will Voluntarily Block Pirate Sites as Major Portal Disappears

Post Syndicated from Andy original https://torrentfreak.com/japan-isp-says-it-will-voluntarily-block-pirate-sites-as-major-portal-disappears-180424/

Speaking at a news conference during March, Japan’s Chief Cabinet Secretary Yoshihide Suga said that the government was considering measures to prohibit access to pirate sites. The country’s manga and anime industries were treasures worth protecting, Suga said.

“The damage is getting worse. We are considering the possibilities of all measures including site blocking. I would like to take countermeasures as soon as possible under the cooperation of the relevant ministries and agencies,” he added.

But with no specific legislation that allows for site-blocking, particularly not on copyright infringement grounds, it appeared that Japan might face an uphill struggle. Indeed, the country’s constitution supports freedom of speech and expressly forbids censorship. Earlier this month, however, matters quickly began to progress.

On Friday April 13, the government said it would introduce an emergency measure to target websites hosting pirated manga, anime and other types of content. It would not force ISPs to comply with its blocking requests but would simply ask for their assistance instead.

The aim was to establish cooperation in advance of an expansion of legislation later this year which was originally introduced to tackle the menace of child pornography.

“Our country’s content industry could be denied a future if manga artists and other creators are robbed of proceeds that should go to them,” said Prime Minister Shinzo Abe.

The government didn’t have to wait long for a response. The Nippon Telegraph and Telephone Corp. (NTT) announced yesterday that it will begin blocking access to sites that provide unauthorized access to copyrighted content.

“We have taken short-term emergency measures until legal systems on site-blocking are implemented,” NTT said in a statement.

NTT Communications Corp., NTT Docomo Inc. and NTT Plala Inc. will block access to three sites previously identified by the government – Mangamura, AniTube! and MioMio – which have a particularly large following in Japan.

NTT said that it will also restrict access to other sites if requested to do so by the government. The company added that at least in the short-term, it will prevent access to the sites using DNS blocking.

While Anitube and MioMio will be blocked in due course, Mangamura has already disappeared from the Internet. The site was reportedly attracting 100 million visits per month but on April 17 went offline following an apparent voluntary shutdown by its administrators.

AnimeNewsNetwork notes that a news program on NHK dedicated to Mangamura aired last Wednesday. A second episode will reportedly focus on the site’s administrators which NHK claims can be traced back to the United States, Ukraine, and other regions. Whether this exposé played a part in the site’s closure is unclear but that kind of publicity is rarely welcome in the piracy scene.

To date, just three sites have been named by the government as particularly problematic, but it is now promising to set up a consultation on a further response. A bill will also be submitted to parliament to target sites that promote links to content hosted elsewhere, an activity which is not illegal under current law.

Two other major access providers in Japan, KDDI Corp. and SoftBank Corp., have told local media that their plans to block pirate sites have not yet been finalized.

“The fact that neglecting the situation of infringement of copyright etc. cannot be overlooked is recognized and it is recognized as an important problem to be addressed urgently,” Softbank said in a statement.

“However, since there is concern that blocking infringes secrecy of communications, we need careful discussion. We would like to collaborate with industry organizations involved in telecommunications and consider measures that can be taken from various viewpoints, such as laws, institutions, and operation methods.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to make use of any DNS server a customer chooses to use. These releases enabled HTTP, TCP and SMTP communication originating from RDS for Oracle instances, limited to non-secure (non-SSL) channels.

To overcome the limitation on SSL connections, we recently published a whitepaper that walks through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By making use of such wallets, you can now extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL (see the sketch after this list)
  • Extend Oracle database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES, or any other generic SMTPS provider that you can integrate with, to send emails from your database instances
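
As an example of the S3 use case above, here is a rough boto3 sketch of generating the presigned URL that the database instance would then fetch over HTTPS with utl_http; the bucket and key names are placeholders of mine.

```python
# Sketch: generate a presigned S3 URL for the RDS for Oracle instance to
# download over SSL via utl_http. Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "files/report.csv"},
    ExpiresIn=3600,  # valid for one hour
)

print(url)  # pass this URL to utl_http inside the database
```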

These are just a few high-level examples of the new use cases that the whitepaper opens up. As a reminder, always make sure you have security best practices in place when making use of Outbound Network Access (detailed in the whitepaper).

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

Oblivious DNS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/oblivious_dns.html

Interesting idea:

…we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest. In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server’s public key ({k}PK) in the “Additional Information” record of the DNS query to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.
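
To make the flow more concrete, here is a rough, unofficial Python sketch of the client-side step described above. It is not the authors’ reference implementation; the choice of AES-GCM for the session key, RSA-OAEP for wrapping it, and base32 for the query label are my own illustrative assumptions.

```python
# Illustrative ODNS client sketch (assumptions: AES-GCM session key, RSA-OAEP
# key wrapping, base32 label encoding). Not the paper's reference code.
import base64
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_odns_query(domain: str, odns_public_key_pem: bytes):
    """Return (query_name, wrapped_key) for a query like {www.foo.com}k.odns."""
    session_key = AESGCM.generate_key(bit_length=128)   # the per-query key k
    nonce = os.urandom(12)

    # Encrypt the requested domain with k and encode it as a DNS label.
    # (A real implementation must respect the 63-byte label limit, e.g. by
    # splitting the ciphertext across multiple labels.)
    ciphertext = AESGCM(session_key).encrypt(nonce, domain.encode(), None)
    label = base64.b32encode(nonce + ciphertext).decode().rstrip("=").lower()
    query_name = f"{label}.odns"

    # Encrypt k under the .odns authoritative server's public key ({k}PK);
    # this would travel in the Additional Information section of the query.
    public_key = serialization.load_pem_public_key(odns_public_key_pem)
    wrapped_key = public_key.encrypt(
        session_key,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return query_name, wrapped_key
```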

News article.