Tag Archives: announcement

Russia’s Largest Torrent Site Celebrates 13 Years Online in a Chinese Restaurant

Post Syndicated from Andy original https://torrentfreak.com/russias-largest-torrent-site-celebrates-13-years-online-in-a-chinese-restaurant-170923/

For most torrent fans around the world, The Pirate Bay is the big symbol of international defiance. Over the years the site has fought, dodged, and thumbed its nose at dozens of legal battles, yet it still remains online today.

But there is another site, located somewhere in the east, that has been online for nearly as long, has millions more registered members, and has proven just as defiant.

RuTracker, for those who haven’t yet found it, is a Russian-focused treasure trove of both local and international content. For many years the site was frequented only by native speakers but, with the wonders of tools like Google Translate, anyone can use it at the flick of a switch. When people are struggling to find content, it’s likely that RuTracker has it.

This position has attracted the negative attention of a wide range of copyright holders and, thanks to legislation introduced during 2013, the site is now subject to complete blocking in Russia. In fact, RuTracker has proven so resistant to copyright holder demands that it is now permanently blocked in the region by all ISPs.

Surprisingly, especially given the enthusiasm for blockades among copyright holders, this doesn’t seem to have dampened demand for the site’s services. According to SimilarWeb, against all the odds the site is still pulling in around 90 million visitors per month. But the impressive stats don’t stop there.

Impressive stats for a permanently blocked site

This week, RuTracker celebrates its 13th birthday, a relative lifetime for a site that has been front and center in Russia’s most significant copyright battles, trouble that shows no sign of stopping anytime soon.

Back in 2010, for example, RU-Center, Russia’s largest domain name registrar and web-hosting provider, pulled the plug on the site’s former Torrents.ru domain. The Director of Public Relations at RU-Center said that the domain had been blocked on the orders of the Investigative Division of the regional prosecutor’s office in Moscow. The site never got its domain back but, despite the setback, carried on regardless.

Back then the site had around 4,000,000 members but now, seven years on, its ranks have swelled to a reported 15,382,907. According to figures published by the site this week, 778,317 of those members signed up this year during a period the site was supposed to be completely inaccessible. Needless to say, its operators remain defiant.

“Today we celebrate the 13th anniversary of our tracker, which is the largest Russian (and not only) -language media library on this planet. A tracker strangely banished in the country where most of its audience is located – in Russia,” a site announcement reads.

“But, despite the prohibitions, with all these legislative obstacles, with all these technical difficulties, we see that our tracker still exists and is successfully developing. And we still believe that the library should be open and free for all, and not be subject to censorship or a victim of legislative and executive power lobbied by the monopolists of the media industry.”

It’s interesting to note the tone of the RuTracker announcement. On any other day it could’ve been written by the crew of The Pirate Bay who, in their prime, loved to stick a finger or two up at the copyright lobby and then rub their noses in it. For the team at RuTracker, that still appears to be one of the main goals.

Like The Pirate Bay but unlike many of the basic torrent indexers that have sprung up in recent years, RuTracker relies on users to upload its content. They certainly haven’t been sitting back. RuTracker reveals that during the past year and despite all the problems, users uploaded a total of 171,819 torrents – on average, 470 torrents per day.

Interestingly, the content most uploaded to the site also points to the growing internationalization of RuTracker. During the past year, the NBA / NCAA section proved most popular, closely followed by non-Russian rock music and NHL games. Non-Russian movies accounted for almost 2,000 fresh torrents in just 12 months.

“It is thanks to you this tracker lives!” the site’s operators informed the users.

“It is thanks to you that it was, is, and, for sure, will continue to offer the most comprehensive, diverse and, most importantly, quality content in the Russian Internet. You stayed with us when the tracker lost its original name: torrents.ru. You stayed with us when access to a new name was blocked in Russia: rutracker.org. You stayed with us when [the site’s trackers] were blocked. We will stay with you as long as you need us!”

So as RuTracker plans for another year online, all that remains is to celebrate its 13th birthday in style. That will be achieved tonight, when every adult member of RuTracker is invited to enjoy a Chinese meal at the Tian Jin Chinese Restaurant in St. Petersburg.

Turn up early, seating is limited.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Say Hello to the New Atlassian

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/09/hello-new-atlassian/

Who is Atlassian?

Atlassian is an Australian IT company that develops enterprise software, with its best-known products being its issue-tracking app, Jira, and team collaboration and wiki product, Confluence.

In December 2015, Atlassian went public and made their initial public offering (IPO) under the symbol TEAM, valuing the company at $4.37 billion. In summary, they’re big.

What happened?

A facelift

It’s a nice sunny day in Sydney in mid-September 2017, and Atlassian, after 15 years of consistency, has rebranded, swapping its dreary previous look and feel for a brighter, more playful one.

New Atlassian Branding Video

The new identity is a hell of a lot simpler and, as they show in the above video, it’s going to be used with a lot more creativity and flair in mind. It’s flexible in the sense that they can use it in a lot more ways, and with a lot more colours, than before.

Atlassian Logo Comparison

The blues they’re using now work super-well with the logos on a white background, whereas the white logos on their new champion brand colour, blue, can go both ways: some will see it as a bold, daring step that’s quite attractive, while others will find it off-putting and not very user-friendly.

New Atlassian Logo Versions

What’s it all mean?

Symbolism

In his announcement blog post, Atlassian Co-Founder & Co-CEO Mike Cannon-Brookes mentions that the branding change reflects the company’s newly shifted focus on the concept of teamwork. He goes on to explain that their previous logo depicted the sky-holding Greek titan Atlas and symbolised legendary service and support. But while that image has become renowned, the focus is now on teamwork; why keep dwelling on something you’ve already got right, right?

Atlassian Logo Evolution

The new logo contains more symbolism than meets the eye, as it can be interpreted as:

  • Two people high-fiving
  • A mountain to scale
  • The letter “A” (seen as two pillars reinforcing each other)
Product logos

Atlassian has created and acquired many products in their adventure so far, and they all seemed to have a similar art style, but something always felt off about their consistency. Well, needless to say, this was addressed with Atlassian’s very own “identity system”, which is a pretty cool term for a consistent logo-look for 14+ products, to fit them under one brand.

New Atlassian Product Logos

The result is a set of unique marks that “still feel very related to each other”. Then again, I also see a new set of “unknown” Pokémon.

Typeface

New Atlassian Typeface

To add a cherry on top, Atlassian will be using their own custom-made typeface called Charlie Sans, specifically designed to balance legibility with personality; that’s probably the best way to describe it. Otherwise, I’d say, as purely constructive criticism, that there isn’t much difference between it and the other staple fonts, i.e. Arial, Verdana, etc. Then again, I’m not a professional designer.

It doesn’t look as distinct as their previous typeface, but, to be fair, it does look very slick next to the new product logos.

Well…

What do you think about it all?

 

Image credits: Atlassian


Inside the MPAA, Netflix & Amazon Global Anti-Piracy Alliance

Post Syndicated from Andy original https://torrentfreak.com/inside-the-mpaa-netflix-amazon-global-anti-piracy-alliance-170918/

The idea of collaboration in the anti-piracy arena isn’t new but an announcement this summer heralded what is destined to become the largest project the entertainment industry has ever seen.

The Alliance for Creativity and Entertainment (ACE) is a coalition of 30 companies that reads like a who’s who of the global entertainment market. In alphabetical order its members are:

Amazon, AMC Networks, BBC Worldwide, Bell Canada and Bell Media, Canal+ Group, CBS Corporation, Constantin Film, Foxtel, Grupo Globo, HBO, Hulu, Lionsgate, Metro-Goldwyn-Mayer (MGM), Millennium Media, NBCUniversal, Netflix, Paramount Pictures, SF Studios, Sky, Sony Pictures Entertainment, Star India, Studio Babelsberg, STX Entertainment, Telemundo, Televisa, Twentieth Century Fox, Univision Communications Inc., Village Roadshow, The Walt Disney Company, and Warner Bros. Entertainment Inc.

The aim of the project is clear. Instead of each company considering its anti-piracy operations as a distinct island, ACE will bring them all together while presenting a united front to decision and lawmakers. At the core of the Alliance will be the MPAA.

“ACE, with its broad coalition of creators from around the world, is designed, specifically, to leverage the best possible resources to reduce piracy,” outgoing MPAA chief Chris Dodd said in June.

“For decades, the MPAA has been the gold standard for antipiracy enforcement. We are proud to provide the MPAA’s worldwide antipiracy resources and the deep expertise of our antipiracy unit to support ACE and all its initiatives.”

Since then, ACE and its members have been silent on the project. Today, however, TorrentFreak can pull back the curtain, revealing how the agreement between the companies will play out, who will be in control, and how much the scheme will cost.

Power structure: Founding Members & Executive Committee Members

Netflix, Inc., Amazon Studios LLC, Paramount Pictures Corporation, Sony Pictures Entertainment, Inc., Twentieth Century Fox Film Corporation, Universal City Studios LLC, Warner Bros. Entertainment Inc., and Walt Disney Studios Motion Pictures, are the ‘Founding Members’ (Governing Board) of ACE.

These companies are granted full voting rights on ACE business, including the approval of initiatives and public policy, anti-piracy strategy, budget-related matters, plus approval of legal action. Not least, they’ll have the power to admit or expel ACE members.

All actions taken by the Governing Board (never to exceed nine members) need to be approved by consensus, with each Founding Member able to vote for or against decisions. Members are also allowed to abstain but one persistent objection will be enough to stop any matter being approved.

The second tier – ‘Executive Committee Members’ – is comprised of all the other companies in the ACE project (as listed above, minus the Governing Board). These companies will not be allowed to vote on ACE initiatives but can present ideas and strategies. They’ll also be allowed to suggest targets for law enforcement action while utilizing the MPAA’s anti-piracy resources.

Rights of all members

While all members of ACE can utilize the alliance’s resources, none are barred from simultaneously ‘going it alone’ on separate anti-piracy initiatives. None of these strategies and actions need approval from the Founding Members, provided they’re carried out in a company’s own name and at its own expense.

Information obtained by TorrentFreak indicates that the MPAA also reserves the right to carry out anti-piracy actions in its own name or on behalf of its member studios. The pattern here is different, since the MPAA’s global anti-piracy resources are the same resources being made available to the ACE alliance, which members have paid to share.

Expansion of ACE

While ACE membership is already broad, the alliance is prepared to take on additional members, providing certain criteria are met. Crucially, any prospective additions must be owners or producers of movies and/or TV shows. The Governing Board will then vet applicants to ensure that they meet the criteria for acceptance as new Executive Committee Members.

ACE Operations

The nine Governing Board members will meet at least four times a year, with each nominating a senior executive to serve as its representative. The MPAA’s General Counsel will take up the position of non-voting member of the Governing Board and will chair its meetings.

Matters to be discussed include formulating and developing the alliance’s ‘Global Anti-Piracy Action Plan’ and approving and developing the budget. ACE will also form an Anti-Piracy Working Group, which is scheduled to meet at least once a month.

On a daily basis, the MPAA and its staff will attend to the business of the ACE alliance. The MPAA will carry out its own work too but when presenting to outside third parties, it will clearly state which “hat” it is currently wearing.

Much deliberation has taken place over who should be the official spokesperson for ACE. Documents obtained by TF suggest that the MPAA planned to hire a consulting firm to find a person for the role, seeking a professional with international experience who had never previously been connected with the MPAA.

They appear to have settled on Zoe Thorogood, who previously worked for British Prime Minister David Cameron.

Money, money, money

Of course, the ACE program isn’t going to fund itself, so all members are required to contribute to the operation. The MPAA has opened a dedicated bank account under its control specifically for the purpose, with members contributing depending on status.

Founding/Governing Board Members will be required to commit $5m each annually. However, none of the studios that are MPAA members will have to hand over any cash, since they already fund the MPAA, on whose anti-piracy resources ACE is built.

“Each Governing Board Member will contribute annual dues in an amount equal to $5 million USD. Payment of dues shall be made bi-annually in equal shares, payable at the beginning of each six (6) month period,” the ACE agreement reads.

“The contribution of MPAA personnel, assets and resources…will constitute and be considered as full payment of each MPAA Member Studio’s Governing Board dues.”

That leaves just Netflix and Amazon paying the full amount of $5m in cash each.

From each company’s contribution, $1m will be paid into legal trust accounts allocated to each Governing Board member. If ACE-agreed litigation and legal expenses exceed that amount for the year, members will be required to top up their accounts to cover their share of the costs.

For the remaining 21 companies on the Executive Committee, annual dues are $200,000 each, to be paid in one installment at the start of the financial year – $4.2m all in. Of all dues paid by all members from both tiers, half will be used to boost anti-piracy resources, over and above what the MPAA will spend on the same during 2017.

“Fifty percent (50%) of all dues received from Global Alliance Members other than the MPAA Member Studios…shall, as agreed by the Governing Board, be used (a) to increase the resources spent on online antipiracy over and above….the amount of MPAA’s 2017 Content Protection Department budget for online antipiracy initiatives/operations,” an internal ACE document reads.

Intellectual property

As the project moves forward, the Alliance expects to gain certain knowledge and experience. On the back of that, the MPAA hopes to grow its intellectual property portfolio.

“Absent written agreement providing otherwise, any and all data, intellectual property, copyrights, trademarks, or know-how owned and/or contributed to the Global Alliance by MPAA, or developed or created by the MPAA or the Global Alliance during the Term of this Charter, shall remain and/or become the exclusive property of the MPAA,” the ACE agreement reads.

That being said, all Governing Board Members will also be granted “perpetual, irrevocable, non-exclusive licenses” to use the same under certain rules, even in the event they leave the ACE initiative.

Terms and extensions

Any member may withdraw from the Alliance at any point, but there will be no refunds. Additionally, any financial commitment previously made to litigation will have to be honored by the member.

The ACE agreement has an initial term of two years but Governing Board Members will meet not less than three months before it is due to expire to vote on any extension.

To be continued……

With the internal structure of ACE now revealed, all that remains is to discover the contents of the initiative’s ‘Global Anti-Piracy Action Plan’. To date, that document has proven elusive but with an operation of such magnitude, future leaks are a distinct possibility.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Kodi ‘Trademark Troll’ Has Interesting Views on Co-Opting Other People’s Work

Post Syndicated from Andy original https://torrentfreak.com/kodi-trademark-troll-has-interesting-views-on-co-opting-other-peoples-work-170917/

The Kodi team, operating under the XBMC Foundation, announced last week that a third-party had registered the Kodi trademark in Canada and was using it for their own purposes.

That person was Geoff Gavora, who had previously been in communication with the Kodi team, expressing how important the software was to his sales.

“We had hoped, given the positive nature of his past emails, that perhaps he was doing this for the benefit of the Foundation. We learned, unfortunately, that this was not the case,” XBMC Foundation President Nathan Betzen said.

According to the Kodi team, Gavora began delisting Amazon ads placed by companies selling Kodi-enabled products, based on infringement of Gavora’s trademark rights.

“[O]nly Gavora’s hardware can be sold, unless those companies pay him a fee to stay on the store,” Betzen explained.

Predictably, Gavora’s move is being viewed as highly controversial, not least since he’s effectively claiming licensing rights in Canada over what should be a free and open source piece of software. TF obtained one of the notices Amazon sent to a seller of a Kodi-enabled device in Canada, following a complaint from Gavora.

Take down Kodi from Amazon, or pay Gavora

So who is Geoff Gavora and what makes him tick? Thanks to a 2016 interview with Ali Salman of the Rapid Growth Podcast, we have a lot of information from the horse’s mouth.

It all began in 2011, when Gavora began jailbreaking Apple TVs, loading them with XBMC, and selling them to friends.

“I did it as a joke, for beer money from my friends,” Gavora told Salman.

“I’d do it for $25 to $50 and word of mouth spread that I was doing this so we could load on this media center to watch content and online streams from it.”

Intro to the interview with Ali Salman

Soon, however, word of mouth caused the business to grow wings, Gavora claims.

“So they started telling people and I start telling people it’s $50, and then I got so busy so I start telling people it’s $75. I’m getting too busy with my work and with this. And it got to the point where I was making more jailbreaking these Apple TVs than I was at my career, and I wasn’t very happy at my career at that time.”

Jailbreaking was supposed to be a side thing to tide Gavora over until another job came along, but he had a problem – he didn’t come from a technical background. What he did have was a background in marketing and a decent knowledge of how to succeed at customer service, so he majored on that front.

Gavora had come to learn that while people wanted his devices, they weren’t very good at operating XBMC (Kodi’s former name) which he’d loaded onto them. With this in mind, he began offering web support and phone support via a toll-free line.

“I started receiving calls from New York, Dallas, and then Australia, Hong Kong. Everyone around the world was calling me and saying ‘we hear there’s some kid in Calgary, some young child, who’s offering tech support for the Apple TV’,” Gavora said.

But with things apparently going well, a wrench was soon thrown into the works when Apple released the third variant of its Apple TV and Gavora was unable to jailbreak it. This prompted him to market his own Linux-based set-top device and his business, Raw-Media, grew from there.

While it seems likely that so-called ‘Raw Boxes’ were doing reasonably well with consumers, what was the secret of their success? Podcast host Salman asked Gavora for his ‘networking party 10-second pitch’, and the Canadian was happy to oblige.

“I get this all the time actually. I basically tell people that I sell a box that gives them free TV and movies,” he said.

This was met with laughter from the host, to which Gavora added, “That’s sort of the three-second pitch and everyone’s like ‘Oh, tell me more’.”

“Who doesn’t like free TV, come on?” Salman responded. “Yeah exactly,” Gavora said.

The image below, taken from a January 2016 YouTube unboxing video, shows one of the products sold by Gavora’s company.

Raw-Media Kodi Box packaging (note Kodi logo)

Bearing in mind the offer of free movies and TV, the tagline on the box, “Stop paying for things you don’t want to watch, watch more free tv!” initially looks quite provocative. That being said, both the device and Kodi are perfectly capable of playing plenty of legal content from free sources, so there’s no problem there.

What is surprising, however, is that the unboxing video shows the device being booted up, apparently already loaded with infamous third-party Kodi addons including PrimeWire, Genesis, Icefilms, and Navi-X.

The unboxing video showing the Kodi setup

Given that Gavora has registered the Kodi trademark in Canada and prints the official logo on his packaging, this runs counter to the official Kodi team’s aggressive stance towards boxes ready-configured with what they categorize as banned addons. Matters are compounded when one visits the product support site.

As seen in the image below, Raw-Media devices are delivered with a printed card in the packaging informing people where to get the after-sales services Gavora says he built his business upon. The cards advise people to visit No-Issue.ca, a site set up to offer text and video-based support to set-top box buyers.

No-Issue.ca (which is hosted on the same server as raw-media.ca and claimed officially as a sister site here) now redirects to No-Issue.is, as per a 2016 announcement. It has a fairly bland forum but the connected tutorial videos, found on No Issue’s YouTube channel, offer a lot more spice.

Registered under Gavora’s online nickname Gombeek (which is also used on the official Kodi forums), the channel is full of videos detailing how to install and use a wide range of addons.

The No-issue YouTube Channel tutorials

But while supplying tutorial videos is one thing, providing the actual software addons is another. Surprisingly, No-Issue does that too. Filed away under the URL http://solved.no-issue.is/ is a Kodi repository which distributes a wide range of addons, including many that specialize in infringing content, according to the Kodi team.

The No-Issue repository

A source familiar with Raw-Media’s devices informs TF that they’re no longer delivered with addons installed. However, tools hosted on No-Issue.is automate the installation process for the customer, with unlisted YouTube Videos (1,2) providing the instructions.

XBMC Foundation President Nathan Betzen says that situation isn’t ideal.

“If that really is his repo it is disappointing to see that Gavora is charging a fee or outright preventing the sale of boxes with Kodi installed that do not include infringing add-ons, while at the same time he is distributing boxes himself that do include the infringing add-ons like this,” Betzen told TF.

While the legality of this type of service is yet to be properly tested in Canada and may yet emerge as entirely permissible under local law, Gavora himself previously described his business as operating in a gray area.

“If I could go back in time four years, I would’ve been more aggressive in the beginning because there was a lot of uncertainty being in a gray market business about how far I could push it,” he said.

“I really shouldn’t say it’s a gray market because everything I do is completely above board, I just felt it was more gray market so I was a bit scared,” he added.

But, legality aside (which will be determined in due course through various cases 1,2), the situation is still problematic when it comes to the Kodi trademark.

The official Kodi team indicate they don’t want to be associated with any kind of questionable addon or even tutorials for the same. Nevertheless, several of the addons installed by No-Issue (including PrimeWire, cCloud TV, Genesis, Icefilms, MoviesHD, MuchMovies and Navi-X, to name a few), are present on the Kodi team’s official ban list.

The fact remains, however, that Gavora successfully registered the trademark in Canada (one month later it was transferred to a brand new company at the same address), and Kodi now have no control over the situation in the country, short of a settlement or some kind of legal action.

Kodi matters aside, though, we get more insight into Gavora’s attitudes towards intellectual property after learning that he studied gemology and jewelry at school. He’s a long-standing member of jewelry discussion forum Ganoskin.com (his profile links to Gavora.com, a domain Gavora owns, as per information supplied by Amazon).

Things get particularly topical in a 2006 thread titled “When your work gets ripped”. The original poster asked how people feel when their jewelry work gets copied and Gavora made his opinions known.

“I think that what most people forget to remember is that when a piece from Tiffany’s or Cartier is ripped off or copied they don’t usually just copy the work, they will stamp it with their name as well,” Gavora said.

“This is, in fact, fraud and they are deceiving clients into believing they are purchasing genuine Tiffany’s or Cartier pieces. The client is in fact more interested in purchasing from an artist than they are the piece. Laying claim to designs (unless a symbol or name is involved) is outrageous.”

Unless that ‘design’ is called Kodi, of course, then it’s possible to claim it as your own through an administrative process and begin demanding licensing fees from the public. That being said, Gavora does seem to flip back and forth a little, later suggesting that being copied is sometimes ok.

“If someone copies your design and produces it under their own name, I think one should be honored and revel in the fact that your design is successful and has caused others to imitate it and grow from it,” he wrote.

“I look forward to the day I see one of my original designs copied, that is the day I will know my design is a success.”

From their public statements, this opinion isn’t shared by the Kodi team in respect of their product. Despite the Kodi name, software and logo being all their own work, they now find themselves having to claw back rights in Canada, in order to keep the product free in the region. For now, however, that seems like a difficult task.

TorrentFreak wrote to Gavora and asked him why he felt the need to register the Kodi trademark, but we received no response. That means we didn’t get the chance to ask him why he’s taking down Amazon listings for other people’s devices, or about something else that came up in the podcast.

“My biggest weakness, I guess, is that I’m too ethical about how I do my business,” he said, referring to how he deals with customers.

Only time will tell how that philosophy will affect Gavora’s attitudes to trademarks and people’s desire not to be charged for using free, open source software.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MetalKettle Addon Repository Vulnerable After GitHub ‘Takeover’

Post Syndicated from Ernesto original https://torrentfreak.com/metalkettle-after-github-takeover-170915/

A few weeks ago MetalKettle, one of the most famous Kodi addon developers of recent times, decided to call it quits.

Worried about potential legal risks, he saw no other option than to halt all development of third-party Kodi addons.

Soon after this announcement, the developer proceeded to remove the GitHub account which was used to distribute his addons. However, he didn’t realize that this might not have been the best decision.

As it turns out, GitHub allows outsiders to re-register names of deleted accounts. While this might not be a problem in most cases, it can be disastrous when the accounts are connected to Kodi add-ons that are constantly pinging for new updates.

In essence, it means that whoever re-registered the GitHub account can load content onto the boxes of people who still have the MetalKettle repo installed. Quite a dangerous prospect, something MetalKettle realizes as well.
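
To see why that’s dangerous, consider, purely as an illustration, the kind of update check a repository add-on performs. The Python sketch below is not actual Kodi code and the GitHub path is hypothetical, but the principle is the same: the client fetches a fixed URL tied to the account name and trusts whatever comes back, so whoever now controls that name controls the response.

    # Illustrative sketch only -- not actual Kodi code. The GitHub path below
    # is hypothetical; real repository add-ons bake a similar fixed URL into
    # their addon.xml and fetch it on a schedule.
    import urllib.request

    REPO_INDEX = ("https://raw.githubusercontent.com/"
                  "metalkettle/repository.metalkettle/master/addons.xml")

    def check_for_updates() -> str:
        # The client blindly trusts whatever the account currently serves,
        # which is why a re-registered account name is dangerous.
        with urllib.request.urlopen(REPO_INDEX) as resp:
            return resp.read().decode("utf-8", errors="replace")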

“Someone has re-registered metalkettle on github. So in theory could pollute any devices with the repo still installed,” he warned on Twitter.

“Warning : if any users have a metalkettle repo installed on their systems or within a build – please delete ASAP,” he added.

MetalKettle warning

It’s not clear what the intentions of the new MetalKettle user are on GitHub, if he or she has any at all. But, people should be very cautious and probably remove it from their systems.

The real MetalKettle, meanwhile, alerted TVAddons to the situation and they have placed the repository on their Indigo blacklist of banned software. This effectively disables the repository on devices with Indigo installed.

GitHub, for its part, may want to reconsider its removal policy. Perhaps it’s smarter not to make old usernames available for re-registration, at least not for a while, as this is clearly a vulnerability.

This is also shown by another Kodi repo controversy that appeared earlier today. Another GitHub account that was reportedly deleted earlier resurfaced today, pushing a new version of the Exodus addon and other sources.

According to some, the GitHub account is operated by the original Exodus developers and perfectly safe, but others warn that the name was re-registered in bad faith.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Earns Department of Defense Impact Level 5 Provisional Authorization

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-earns-department-of-defense-impact-level-5-provisional-authorization/

AWS GovCloud (US) Region image

The Defense Information Systems Agency (DISA) has granted the AWS GovCloud (US) Region an Impact Level 5 (IL5) Department of Defense (DoD) Cloud Computing Security Requirements Guide (CC SRG) Provisional Authorization (PA) for six core services. This means that AWS’s DoD customers and partners can now deploy workloads for Controlled Unclassified Information (CUI) exceeding IL4 and for unclassified National Security Systems (NSS).

We have supported sensitive Defense community workloads in the cloud for more than four years, and this latest IL5 authorization is complementary to our FedRAMP High Provisional Authorization that covers 18 services in the AWS GovCloud (US) Region. Our customers now have the flexibility to deploy any range of IL 2, 4, or 5 workloads by leveraging AWS’s services, attestations, and certifications. For example, when the US Air Force needed compute scale to support the Next Generation GPS Operational Control System Program, they turned to AWS.

In partnership with a certified Third Party Assessment Organization (3PAO), an independent validation was conducted to assess both our technical and nontechnical security controls to confirm that they meet the DoD’s stringent CC SRG standards for IL5 workloads. Effective immediately, customers can begin leveraging the IL5 authorization for the following six services in the AWS GovCloud (US) Region:

AWS has been a long-standing industry partner with DoD, federal-agency customers, and private-sector customers to enhance cloud security and policy. We continue to collaborate on the DoD CC SRG, the Defense Federal Acquisition Regulation Supplement (DFARS), and other government requirements to ensure that policy makers enact policies to support next-generation security capabilities.

In an effort to reduce the authorization burden on our DoD customers, we’ve worked with DISA to port our assessment results into a format easily ingestible by the Enterprise Mission Assurance Support Service (eMASS) system. Additionally, we undertook a separate effort to empower our industry partners and customers to efficiently solve their compliance, governance, and audit challenges by launching the AWS Customer Compliance Center, a portal providing a breadth of AWS-specific compliance and regulatory information.

We look forward to providing sustained cloud security and compliance support at scale for our DoD customers and adding additional services within the IL5 authorization boundary. See AWS Services in Scope by Compliance Program for updates. To request access to AWS’s DoD security and authorization documentation, contact AWS Sales and Business Development. For a list of frequently asked questions related to AWS DoD SRG compliance, see the AWS DoD SRG page.

To learn more about the announcement in this post, tune in for the AWS Automating DoD SRG Impact Level 5 Compliance in AWS GovCloud (US) webinar on October 11, 2017, at 11:00 A.M. Pacific Time.

– Chris Gile, Senior Manager, AWS Public Sector Risk & Compliance

 

 

Strategies for Backing Up Windows Computers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/strategies-for-backing-up-windows-computers/

Windows 7, Windows 8, Windows 10 logos

There’s a little company called Apple making big announcements this week, but about 45% of you are on Windows machines, so we thought it would be a good idea to devote a blog post today to Windows users and the options they have for backing up Windows computers.

We’ll be talking about the various options for backing up the Windows desktop operating systems 7, 8, and 10, as well as Windows servers. We’ve written previously about this topic in How to Back Up Windows and Computer Backup Options, but we’ll be covering some new topics and ways to combine strategies in this post. So, if you’re a Windows user looking for shelter from all the Apple hoopla, welcome to our Apple Announcement Day Windows Backup Day post.

Windows laptop

First, Let’s Talk About What We Mean by Backup

This might seem to our readers like an unneeded appetizer on the way to the main course of our post, but we at Backblaze know that people often mean very different things when they use backup and related terms. Let’s start by defining what we mean when we say backup, cloud storage, sync, and archive.

Backup
A backup is an active copy of the system or files that you are using. It is distinguished from an archive, which is the storing of data that is no longer in active use. Backups fall into two main categories: file and image. File backup software will back up whichever files you designate by either letting you include files you wish backed up or by excluding files you don’t want backed up, or both. An image backup, sometimes called a disaster recovery backup or a system clone, is useful if you need to recreate your system on a new drive or computer.
The first backup generally will be a full backup of all files. After that, the backup will be incremental, meaning that only files that have been changed since the full backup will be added. Often, the software will keep changed versions of the files for some period of time, so you can maintain a number of previous revisions of your files in case you wish to return to something in an earlier version of your file.
The destination for your backup could be another drive on your computer, an attached drive, network-attached storage (NAS), or the cloud.
Cloud Storage
Cloud storage vendors supply data storage just as a utility company supplies power, gas, or water. Cloud storage can be used for data backups, but it can also be used for data archives, application data, records, or libraries of photos, videos, and other media.
You contract with the service for storing any type of data, and the storage location is available to you via the internet. Cloud storage providers generally charge by some combination of data ingress, egress, and the amount of data stored.
Sync
File sync is useful for files that you wish to have access to from different places or computers, or for files that you wish to share with others. While sync has its uses, it has limitations both for keeping files safe and in what it could cost you to store large amounts of data. As opposed to backup, which keeps revisions of files, sync is designed to keep two or more locations exactly the same. Sync costs are based on how much data you sync and can get expensive for large amounts of data.
Archive
A data archive is for data that is no longer in active use but needs to be saved, and may or may not ever be retrieved again. In old-style storage parlance, it is called cold storage. An archive could be stored with a cloud storage provider, or put on a hard drive or flash drive that you disconnect and put in the closet, or mail to your brother in Idaho.

What’s the Best Strategy for Backing Up?

Now that we’ve got our terminology clear, let’s talk backup strategies for Windows.

At Backblaze, we advocate the 3-2-1 strategy for safeguarding your data, which means that you should maintain three copies of any valuable data — two copies stored locally and one stored remotely. I follow this strategy at home by working on the active data on my Windows 10 desktop computer (copy one), which is backed up to a Drobo RAID device attached via USB (copy two), and backing up the desktop to Backblaze’s Personal Backup in the cloud (copy three). I also keep an image of my primary disk on a separate drive and frequently update it using Windows 10’s image tool.

I use Dropbox for sharing specific files I am working on that I might wish to have access to when I am traveling or on another computer. Once my subscription with Dropbox expires, I’ll use the latest release of Backblaze that has individual file preview with sharing built-in.

Before you decide which backup strategy will work best for your situation, you’ll need to ask yourself a number of questions. These questions include where you wish to store your backups, whether you wish to supply your own storage media, whether the backups will be manual or automatic, and whether limited or unlimited data storage will work best for you.

Strategy 1 — Back Up to a Local or Attached Drive

The first copy of the data you are working on is often on your desktop or laptop. You can create a second copy of your data on another drive or directory on your computer, or copy the data to a drive directly attached to your computer, such as via USB.

external hard drive and RAID NAS devices

Windows has built-in tools for both file and image level backup. Depending on which version of Windows you use, these tools are called Backup and Restore, File History, or Image. These tools enable you to set a schedule for automatic backups, which ensures that it is done regularly. You also have the choice to use Windows Explorer (aka File Explorer) to manually copy files to another location. Some external disk drives and USB Flash Drives come with their own backup software, and other backup utilities are available for free or for purchase.

Windows Explorer File History screenshot

This is a supply-your-own media solution, meaning that you need to have a hard disk or other medium available of sufficient size to hold all your backup data. When a disk becomes full, you’ll need to add a disk or swap out the full disk to continue your backups.
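
If you like to script things yourself rather than rely on the built-in tools, the core idea of a file-level incremental backup is simple: copy only the files that are missing from, or newer than, the copies already on the backup drive. Here’s a minimal Python sketch of that idea (the source and destination paths are placeholders); it’s an illustration of the mechanism, not a substitute for proper backup software.

    # Minimal sketch of a file-level incremental backup: copy a file only if
    # the destination copy is missing or older. Paths are placeholders.
    import shutil
    from pathlib import Path

    SOURCE = Path(r"C:\Users\me\Documents")   # placeholder source folder
    DEST = Path(r"E:\Backups\Documents")      # placeholder attached drive

    def incremental_backup(src: Path, dst: Path) -> None:
        for file in src.rglob("*"):
            if not file.is_file():
                continue
            target = dst / file.relative_to(src)
            # Copy when the backup copy is missing or out of date.
            if not target.exists() or file.stat().st_mtime > target.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(file, target)    # copy2 preserves timestamps

    if __name__ == "__main__":
        incremental_backup(SOURCE, DEST)

Note that this keeps only the latest version of each file; real backup software, including the Windows tools above, also retains older revisions of changed files.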

We’ve written previously on this strategy at Should I use an external drive for backup?

Strategy 2 — Back Up to a Local Area Network (LAN)

Computers, servers, and network-attached storage (NAS) on your local network can all be used for backing up data. Microsoft’s built-in backup tools can be used for this job, as can any utility that supports network protocols such as NFS or SMB/CIFS, which are common protocols that allow shared access to files on a network for Windows and other operating systems. There are many third-party applications available as well that provide extensive options for managing and scheduling backups and restoring data when needed.

NAS cloud

Multiple computers can be backed up to a single network-shared computer, server, or NAS, which in turn can be backed up to the cloud. That rounds out a nice backup strategy, because it covers both local and remote copies of your data. System images of multiple computers on the LAN can be included in these backups if desired.

Again, you are managing the backup media on the local network, so you’ll need to be sure you have sufficient room on the destination drives to store all your backup data.
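
Because Windows exposes SMB shares as ordinary UNC paths, the same scripting approach works over the network. The short Python sketch below (the share path is a placeholder) copies a folder into a timestamped directory on a NAS each time it runs, giving you simple point-in-time copies; Windows Task Scheduler can run it nightly.

    # Sketch: copy a folder to a timestamped directory on an SMB/NAS share.
    # The UNC path is a placeholder; point it at your own share.
    import shutil
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path(r"C:\Users\me\Documents")       # placeholder source folder
    SHARE = Path(r"\\nas\backups\my-desktop")     # placeholder SMB share

    def snapshot_to_nas(src: Path, share: Path) -> Path:
        stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
        target = share / stamp
        # copytree creates a full point-in-time copy for this run.
        shutil.copytree(src, target)
        return target

    if __name__ == "__main__":
        print("Backed up to", snapshot_to_nas(SOURCE, SHARE))

The trade-off is space: every run stores a full copy, so old snapshots need to be pruned as the share fills up.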

Strategy 3 — Back Up to Detached Drive at Another Location

You may have read our recent blog post, Getting Data Archives Out of Your Closet, in which we discuss the practice of filling hard drives and storing them in a closet. Of course, to satisfy the off-site backup guideline, these drives would need to be stored in a closet that’s in a different geographical location than your main computer. If you’re willing to do all the work of copying the data to drives and transporting them to another location, this is a viable option.

stack of hard drives

The only limitation to the amount of backup data is the number of hard drives you are willing to purchase — and maybe the size of your closet.

Strategy 4 — Back Up to the Cloud

Backing up to the cloud has become a popular option for a number of reasons. Internet speeds have made moving large amounts of data possible, and not having to worry about supplying the storage media simplifies choices for users. Additionally, cloud vendors implement features such as data protection, deduplication, and encryption as part of their services that make cloud storage reliable, secure, and efficient. Unlimited cloud storage for data from a single computer is a popular option.

A backup vendor likely will provide a software client that runs on your computer and backs up your data to the cloud in the background while you’re doing other things, such as Backblaze Personal Backup, which has clients for Windows computers, Macintosh computers, and mobile apps for both iOS and Android. For restores, Backblaze users can download one or all of their files for free from anywhere in the world. Optionally, a 128 GB flash drive or 4 TB drive can be overnighted to the customer, with a refund available if the drive is returned.

Storage Pod in the cloud

Backblaze B2 Cloud Storage is an option for those who need capabilities beyond Backblaze’s Personal Backup. B2 provides cloud storage that is priced based on the amount of data the customer uses, and is suitable for long-term data storage. B2 supports integrations with NAS devices, as well as Windows, Macintosh, and Linux computers and servers.

Services such as Backblaze B2 are often called Cloud Object Storage or IaaS (Infrastructure as a Service), because they provide a complete solution for storing all types of data in partnership with vendors who integrate various solutions for working with B2. B2 has its own API (Application Programming Interface) and CLI (Command-Line Interface), but it becomes even more powerful when paired with any one of a number of other solutions for data storage and management provided by third parties who offer both hardware and software solutions.
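
For a flavor of what working with B2 from code looks like, here’s a short Python sketch using Backblaze’s b2sdk library. It assumes the b2sdk.v2 interface and uses placeholder credentials and bucket names, so treat it as a sketch and check the current SDK documentation before relying on it.

    # Sketch: upload a file to a B2 bucket with the b2sdk library
    # (pip install b2sdk). Credentials and names below are placeholders.
    from b2sdk.v2 import B2Api, InMemoryAccountInfo

    APPLICATION_KEY_ID = "your-key-id"        # placeholder
    APPLICATION_KEY = "your-application-key"  # placeholder
    BUCKET_NAME = "my-backup-bucket"          # placeholder

    def upload_to_b2(local_path: str, remote_name: str) -> None:
        info = InMemoryAccountInfo()
        api = B2Api(info)
        api.authorize_account("production", APPLICATION_KEY_ID, APPLICATION_KEY)
        bucket = api.get_bucket_by_name(BUCKET_NAME)
        # Streams the local file to B2 under the given remote name.
        bucket.upload_local_file(local_file=local_path, file_name=remote_name)

    if __name__ == "__main__":
        upload_to_b2(r"C:\Backups\documents-2017-09.zip", "documents-2017-09.zip")

If you’d rather not write code, the B2 CLI exposes the same operations from the command line.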

Backing Up Windows Servers

Windows Servers are popular workstations for some users, and provide needed network services for others. They also can be used to store backups from other computers on the network. They, in turn, can be backed up to attached drives or the cloud. While our Personal Backup client doesn’t support Windows servers, our B2 Cloud Storage has a number of integrations with vendors who supply software or hardware for storing data both locally and on B2. We’ve written a number of blog posts and articles that address these solutions, including How to Back Up your Windows Server with B2 and CloudBerry.

Sometimes the Best Strategy is to Mix and Match

The great thing about computers, software, and networks is that there is an endless number of ways to combine them. Our users and hardware and software partners are ingenious in configuring solutions that save data locally, copy it to an attached or network drive, and then store it to the cloud.

image of cloud backup

Among our B2 partners, Synology, CloudBerry, Archiware, QNAP, Morro Data, and GoodSync have integrations that allow their NAS devices and applications to store and retrieve data to and from B2 Cloud Storage. For a drag-and-drop experience on the desktop, take a look at CyberDuck, MountainDuck, and Dropshare, which provide users with an easy and interactive way to store and use data in B2.

If you’d like to explore more options for combining software, hardware, and cloud solutions, we invite you to browse the integrations for our many B2 partners.

Have Questions?

Windows versions, tools, and backup terminology all can be confusing, and we know how hard it can be to make sense of all of it. If there’s something we haven’t addressed here, or if you have a question or contribution, please let us know in the comments.

And happy Windows Backup Day! (Just don’t tell Apple.)


Perfect 10 Takes Giganews to Supreme Court, Says It’s Worse Than Megaupload

Post Syndicated from Andy original https://torrentfreak.com/perfect-10-takes-giganews-supreme-court-says-worse-megaupload-170906/

Adult publisher Perfect 10 has developed a reputation for being a serial copyright litigant.

Over the years the company targeted a number of high-profile defendants, including Google, Amazon, Mastercard, and Visa. Around two dozen of Perfect 10’s lawsuits ended in cash settlements and defaults, in the publisher’s favor.

Perhaps buoyed by this success, the company went after Usenet provider Giganews but instead of a company willing to roll over, Perfect 10 found a highly defensive and indeed aggressive opponent. The initial copyright case filed by Perfect 10 alleged that Giganews effectively sold access to Perfect 10 content but things went badly for the publisher.

In November 2014, the U.S. District Court for the Central District of California found that Giganews was not liable for the infringing activities of its users. Perfect 10 was ordered to pay Giganews $5.6m in attorney’s fees and costs. Perfect 10 lost again at the Court of Appeals for the Ninth Circuit.

As a result of these failed actions, Giganews is owed millions by Perfect 10 but the publisher has thus far refused to pay up. That resulted in Giganews filing a $20m lawsuit, accusing Perfect 10 and President Dr. Norman Zada of fraud.

With all this litigation boiling around in the background and Perfect 10 already bankrupt as a result, one might think the story would be near to a conclusion. That doesn’t seem to be the case. In a fresh announcement, Perfect 10 says it has now appealed its case to the US Supreme Court.

“This is an extraordinarily important case, because for the first time, an appellate court has allowed defendants to copy and sell movies, songs, images, and other copyrighted works, without permission or payment to copyright holders,” says Zada.

“In this particular case, evidence was presented that defendants were copying and selling access to approximately 25,000 terabytes of unlicensed movies, songs, images, software, and magazines.”

Referencing an Amicus brief previously filed by the RIAA which described Giganews as “blatant copyright pirates,” Perfect 10 accuses the Ninth Circuit of allowing Giganews to copy and sell trillions of dollars of other people’s intellectual property “because their copying and selling was done in an automated fashion using a computer.”

Noting that “everything is done via computer” these days and with an undertone that the ruling encouraged others to infringe, Perfect 10 says there are now 88 companies similar to Giganews which rely on the automation defense to commit infringement – even involving content owned by people in the US Government.

“These exploiters of other people’s property are fearless. They are copying and selling access to pirated versions of pretty much every movie ever made, including films co-produced by treasury secretary Steven Mnuchin,” Zada says.

“You would think the justice department would do something to protect the viability of this nation’s movie and recording studios, as unfettered piracy harms jobs and tax revenues, but they have done nothing.”

But Zada doesn’t stop at blaming Usenet services, the California District Court, the Ninth Circuit, and the United States Department of Justice for his problems – Congress is to blame too.

“Copyright holders have nowhere to turn other than the Federal courts, whose judges are ridiculously overworked. For years, Congress has failed to provide the Federal courts with adequate funding. As a result, judges can make mistakes,” he adds.

For Zada, those mistakes are particularly notable, especially since at least one other super high-profile company was shut down in the most aggressive manner possible for allegedly being involved in less piracy than Giganews.

Pointing to the now-infamous Megaupload case, Perfect 10 notes that the Department of Justice completely shut that operation down, filing charges of criminal copyright infringement against Kim Dotcom and seizing $175 million “for selling access to movies and songs which they did not own.”

“Perfect 10 provided evidence that [Giganews] offered more than 200 times as many full length movies as did megaupload.com. But our evidence fell on deaf ears,” Zada complains.

In contrast, Perfect 10 adds, a California District Court found that Giganews had done nothing wrong, allowed it to continue copying and selling access to Perfect 10’s content, and awarded the Usenet provider $5.63m in attorneys fees.

“Prior to this case, no court had ever awarded fees to an alleged infringer, unless they were found to either own the copyrights at issue, or established a fair use defense. Neither was the case here,” Zada adds.

While Perfect 10 has filed a petition with the Supreme Court, the odds of being granted a review are particularly small. Only time will tell how this case will end, but it seems unlikely that the adult publisher will enjoy a happy ending, one in which it doesn’t have to pay Giganews millions of dollars in attorney’s fees.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The 4.13 kernel is out

Post Syndicated from corbet original https://lwn.net/Articles/732793/rss

Linus has released the 4.13 kernel, right on schedule. Headline features in this release include kernel hardening via structure layout randomization, native TLS protocol support, better huge-page swapping, improved handling of writeback errors, better asynchronous I/O support, better power management via next-interrupt prediction, the elimination of the DocBook toolchain for formatted documentation, and more. There is one other change that is called out explicitly in the announcement: “The change in question is simply changing the default cifs behavior: instead of defaulting to SMB 1.0 (which you really should not use: just google for ‘stop using SMB1’ or similar), the default cifs mount now defaults to a rather more modern SMB 3.0.”

Pirate Sites and the Dying Art of Customer Service

Post Syndicated from Andy original https://torrentfreak.com/pirate-sites-and-the-dying-art-of-customer-service-170803/

Consumers of products and services in the West are now more educated than ever before. They often research before making a purchase and view follow-up assistance as part of the package. Indeed, many companies live and die on the levels of customer support they’re able to offer.

In this ultra-competitive world, we send faulty technology items straight back to the store, cancel our unreliable phone providers, and switch to new suppliers for the sake of a few dollars, pounds or euros per month. But does this demanding environment translate to the ‘pirate’ world?

It’s important to remember that when the first waves of unauthorized platforms appeared after the turn of the century, content on the Internet was firmly established as being ‘free’. When people first fired up KaZaA, LimeWire, or the few fledgling BitTorrent portals, few could believe their luck. Nevertheless, the fact that there was no charge for content was quickly accepted as the standard.

That’s a position that continues today but for reasons that are not entirely clear, some users of pirate sites treat the availability of such platforms as some kind of right, holding them to the same standards of service that they would their ISP, for example.

One only has to trawl the comments section on The Pirate Bay to see hundreds of examples of people criticizing the quality of uploaded movies, the fact that a software crack doesn’t work, or that some anonymous uploader failed to deliver the latest album quickly enough. That’s aside from the continual complaints screamed on various external platforms which bemoan the site’s downtime record.

For people who recall the sheer joy of finding a working Suprnova mirror for a few minutes almost 15 years ago, this attitude is somewhat baffling. Back then, people didn’t go ballistic when a site went down, they savored the moment when enthusiastic volunteers brought it back up. There was a level of gratefulness that appears somewhat absent today, in a new world where free torrent and streaming sites are suddenly held to the same standards as Comcast or McDonalds.

But while a cultural change among users has definitely taken place over the years, the way sites communicate with their users has taken a hit too. Despite the advent of platforms including Twitter and Facebook, the majority of pirate site operators today have a tendency to leave their users completely in the dark when things go wrong, leading to speculation and concern among grateful and entitled users alike.

So why does The Pirate Bay’s blog stay completely unattended these days? Why do countless sites let dust gather on Twitter accounts that last made an announcement in 2012? And why don’t site operators announce scheduled downtime in advance or let people know what’s going on when the unexpected happens?

“Honestly? I don’t have the time anymore. I also care less than I did,” one site operator told TF.

“11 years of doing this shit is enough to grind anybody down. It’s something I need to do but not doing it makes no difference either. People complain in any case. Then if you start [informing people] again they’ll want it always. Not happening.”

Rather less complimentary was the operator of a large public site. He told us that two decades ago relationships between operators and users were good but have been getting worse ever since.

“Users of pirate content 20 years ago were highly technical. 10 years ago they were somewhat technical. Right now they are fucking watermelon head puppets. They are plain stupid,” he said.

“Pirate sites don’t have customers. They have users. The definition of a customer, when related to the web, is a person that actually buys a service. Since pirates sites don’t sell services (I’m talking about public ones) they have no customers.”

Another site operator told us that his motivations for not interacting with users are based on the changing legal environment, which has become steadily and markedly worse, year upon year.

“I’m not enjoying being open like before. I used to chat keenly with the users, on the site and IRC [Internet Relay Chat] but i’m keeping my distance since a long time ago,” he told us.

“There have always been risks but now I lock everything down. I’m not using Facebook in any way personally or for the site and I don’t need the dramas of Twitter. Everytime you engage on there, problems arise with people wanting a piece of you. Some of the staff use it but I advise the contrary where possible.”

Interested in where the boundaries lie, we asked a couple of sites whether they should be doing more to keep users informed and if that should be considered a ‘customer service’ obligation these days.

“This is not Netflix and i’m not the ‘have a nice day’ guy from McDonalds,” one explained.

“If people want Netflix help then go to Netflix. There’s two of us here doing everything and I mean everything. We’re already in a pinch so spending time to answer every retarded question from kids is right out.”

Our large public site operator agreed, noting that users complain about the craziest things, including why they don’t have enough space on a drive to download, why a movie that isn’t out until 2020 hasn’t been uploaded yet, and why they can’t log in – when they haven’t even opened an account yet.

While the responses aren’t really a surprise given the ‘free’ nature of the sites and the volume of visitors, things don’t get any better when moving up (we use the term loosely) to paid ‘pirate’ services.

Last week, one streaming platform in particular had an absolute nightmare with what appeared to be technical issues. Nevertheless, some of its users, despite only paying a few pounds per month, demanded their pound of flesh from the struggling service.

One, who raised the topic on Reddit, was advised to ask for his money back for the trouble caused. It raised a couple of eyebrows.

“Put in a ticket and ask [for a refund], morally they should,” the user said.

The use of the word “morally” didn’t sit well with some observers, one of whom couldn’t understand how the word could possibly be mentioned in the context of a pirate paying another pirate money, for a pirate service that had broken down.

“Wait let me get this straight,” the critic said. “You want a refund for a gray market service. It’s like buying drugs off the corner only to find out it’s parsley. Do you go back to the dealer and demand a refund? You live and you learn bud. [Shaking my head] at people in here talking about it being morally responsible…too funny.”

It’s not clear when pirate sites started being held to the same standards as regular commercial entities but from anecdotal evidence at least, the problem appears to be getting worse. That being said and from what we’ve heard, users can stop holding their breath waiting for deluxe customer service – it’s not coming anytime soon.

“There’s no way to monetize support,” one admin concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Russian Hacking Tools Codenamed WhiteBear Exposed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/russian_hacking.html

Kaspersky Labs exposed a highly sophisticated set of hacking tools from Russia called WhiteBear.

From February to September 2016, WhiteBear activity was narrowly focused on embassies and consular operations around the world. All of these early WhiteBear targets were related to embassies and diplomatic/foreign affair organizations. Continued WhiteBear activity later shifted to include defense-related organizations into June 2017. When compared to WhiteAtlas infections, WhiteBear deployments are relatively rare and represent a departure from the broader Skipper Turla target set. Additionally, a comparison of the WhiteAtlas framework to WhiteBear components indicates that the malware is the product of separate development efforts. WhiteBear infections appear to be preceded by a condensed spearphishing dropper, lack Firefox extension installer payloads, and contain several new components signed with a new code signing digital certificate, unlike WhiteAtlas incidents and modules.

The exact delivery vector for WhiteBear components is unknown to us, although we have very strong suspicion the group spearphished targets with malicious pdf files. The decoy pdf document above was likely stolen from a target or partner. And, although WhiteBear components have been consistently identified on a subset of systems previously targeted with the WhiteAtlas framework, and maintain components within the same filepaths and can maintain identical filenames, we were unable to firmly tie delivery to any specific WhiteAtlas component. WhiteBear focused on various embassies and diplomatic entities around the world in early 2016 — tellingly, attempts were made to drop and display decoy pdf’s with full diplomatic headers and content alongside executable droppers on target systems.

One of the clever things the tool does is use hijacked satellite connections for command and control, helping it evade detection by broad surveillance capabilities like those the NSA uses. We’ve seen Russian attack tools that do this before. More details are in the Kaspersky blog post.

Given all the trouble Kaspersky is having because of its association with Russia, it’s interesting to speculate on this disclosure. Either they are independent, and have burned a valuable Russian hacking toolset. Or the Russians decided that the toolset was already burned — maybe the NSA knows all about it and has neutered it somehow — and allowed Kaspersky to publish. Or maybe it’s something in between. That’s the problem with this kind of speculation: without any facts, your theories just amplify whatever opinion you had previously.

Oddly, there hasn’t been much press about this. I have only found one story.

EDITED TO ADD: A colleague pointed out to me that Kaspersky announcements like this often get ignored by the press. There was very little written about ProjectSauron, for example.

EDITED TO ADD: The text I originally wrote said that Kaspersky released the attack tools, like what Shadow Brokers is doing. They did not. They just exposed the existence of them. Apologies for that error — it was sloppy wording.

HDClub, Russia’s Leading HD-Only Torrent Site, Permanently Shuts Down

Post Syndicated from Andy original https://torrentfreak.com/hdclub-russias-leading-hd-torrent-site-permanently-shuts-down-170830/

While millions of users frequent popular public torrent sites such as The Pirate Bay and RARBG every day, there’s a thriving scene that’s hidden from the wider public eye.

Every week, private torrent trackers cater to tens of millions of BitTorrent users who have taken the time and effort to gain access to these more secretive communities. Often labeled as elitist and running counter to the broad sharing ethos that made file-sharing the beast it is today, private sites pride themselves on quality, order and speed, something public sites typically struggle to match.

In addition to these notable qualities, many private sites choose to focus on a particular niche. There are sites dedicated to obscure electronic music, comedy, and even magic, but HDClub’s focus was given away by its name.

Dubbing itself “The HighDefinition BitTorrent Community”, HDClub specialized in HD productions including Blu-ray and 3D content, covering movies, TV shows, music videos, and animation.

Born in 2007, HDClub celebrated its ninth birthday on March 9 last year, with 2017 heralding a full decade online for the site. Catering mainly to the Russian and Ukrainian markets, the site’s releases often preserved an English audio option, ideal for those looking for high-quality releases from an unorthodox source at decent speeds.

Of course, HDClub releases often leaked out of the site, meaning that thousands are still available on regular public trackers, as a search on any Western torrent engine reveals.

A sample of HDClub releases listed on Torrentz2

Importantly, the site offered thousands of releases completely unavailable in Russia from licensed sources, meaning it filled a niche in which official outlets either wouldn’t or couldn’t compete. This earned the site a place in Russia’s Top 1000 sites list, despite being a closed membership platform.

The site’s attention to detail and focus earned it a considerable following. For the past few years the site capped membership at 190,000 people but in practice, attendance floated around the 170,000 mark. Seeders peaked at approximately 400,000 with considerably fewer leechers, making it as difficult to build ratio as one might expect on a ratio-based tracker.

Now, however, the decade-long run of HDClub has come to an abrupt end. Early this week the tracker went dark, reportedly without advance notice. A Russian language announcement now present on its main page explains the reasons for the site’s demise.

“Recently, we received several dozens of complaints from rightsholders weekly, and our community is subjected to attacks and espionage,” the announcement reads.

While public torrent sites are always bombarded with DMCA-style notices, private sites tend to avoid large numbers of complaints. In this case, however, HDClub were clearly feeling the pressure. The site’s main page was open to the public while featuring popular releases, which probably didn’t help matters.

It’s not clear what is meant by “attacks and espionage” but it’s possibly a reference to DDoS assaults and third-parties attempting to monitor the site. Nevertheless, as HDClub points out, the climate for torrent, streaming, and similar sites has become increasingly hostile in the region recently.

“In parallel, there is a tightening of Internet legislation in Russia, Ukraine and EU countries,” the site says.

Interestingly, the site’s operators also suggest that interest from some quarters had waned, noting that “the time of enthusiasts irretrievably goes away.” It’s unclear whether that’s a reference to site users, the site’s operators, or indeed both. But in any event, any significant decline in any area can prove fatal, particularly when other pressures are at play.

“In the circumstances, we can no longer support the work of the club in the originally conceived format. The project is closed, but we ask you to refrain from long farewells. Thank you all and goodbye!” the message concludes.

Interestingly, the site ends with a little teaser, which may indicate some hope for the future.

“There are talks on preserving the heritage of the club,” it reads, without adding further details.

Possibly stay tuned…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

timeShift(GrafanaBuzz, 1w) Issue 10

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/08/25/timeshiftgrafanabuzz-1w-issue-10/

This week, in addition to the articles we collected from around the web and a number of new Plugins and updates, we have a special announcement. GrafanaCon EU has been announced! Join us in Amsterdam March 1-2, 2018. The call for papers is officially open! We’ll keep you up to date as we fill in the details.


Grafana <3 Prometheus

Last week we mentioned that our colleague Carl Bergquist spoke at PromCon 2017 in Munich. His presentation is now available online; we will post the video once it becomes available.


From the Blogosphere

Grafana-based GUI for mgstat, a system monitoring tool for InterSystems Caché, Ensemble or HealthShare: This is the second article in a series about Making Prometheus Monitoring for InterSystems Caché. Mikhail goes into great detail about setting this up on Docker, configuring the first dashboard, and adding templating.

Installation and Integration of Grafana in Zabbix 3.x: Daniel put together an installation guide to get Grafana to display metrics from Zabbix, which utilizes the Zabbix Plugin developed by Grafana Labs Developer Alex Zobnin.

Visualize with RRDtool x Grafana: Atfujiwara wanted to update his MRTG graphs from RRDtool. This post talks about the components needed and how he connected RRDtool to Grafana.

Huawei OceanStor metrics in Grafana: Dennis is using Grafana to display metrics for his storage devices. In this post he walks you through the setup and provides a comprehensive dashboard for all the metrics.

Grafana on a Raspberry Pi2: Pete discusses how he uses Grafana with his garden sensors, and walks you through how to get it up and running on a Pi2.


Grafana Plugins

This week was pretty active on the plugin front. Today we’re announcing two brand new plugins and updates to three others. Installing plugins in Grafana is easy – if you have Hosted Grafana, simply use the one-click install; if you’re using an on-prem instance, you can use the grafana-cli tool.
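For those new to the CLI route, the install is typically a one-liner of the form "grafana-cli plugins install <plugin-id>", followed by a restart of the Grafana server; the exact plugin ID to use is shown on each plugin’s page on grafana.com.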

NEW PLUGIN

IBM APM Data Source – This plugin collects metrics from the IBM APM (Application Performance Management) products and allows you to visualize them on Grafana dashboards. The plugin supports:

  • IBM Tivoli Monitoring 6.x
  • IBM SmartCloud Application Performance Management 7.x
  • IBM Performance Management 8.x (only on-premises version)

Install Now

NEW PLUGIN

Skydive Data Source – This data source plugin collects metrics from Skydive, an open source real-time network topology and protocols analyzer. Using the Skydive Gremlin query language, you can fetch metrics for flows in your network.

Install now

UPDATED PLUGIN

Datatable Panel – Lots of changes in the latest update to the Datatable Panel. Here are some highlights from the changelog:

  • NEW: Export options for Clipboard/CSV/PDF/Excel/Print
  • NEW: Column Aliasing – modify the name of a column as sent by the datasource
  • NEW: Added option for a cell or row to link to another page
  • NEW: Supports Clickable links inside table
  • BUGFIX: CSS files now load when Grafana has a subpath
  • NEW: Added multi-column sorting – sort by any number of columns ascending/descending
  • NEW: Column width hints – suggest a width for a named column
  • BUGFIX: Columns from datasources other than JSON can now be aliased

Update Now

UPDATED PLUGIN

D3 Gauge Panel – The D3 Gauge Panel has a new feature – Tick Mapping. Ticks on the gauge can now be mapped to text.

Update Now

UPDATED PLUGIN

PNP4Nagios Data Source – The most recent update to the PNP Data Source adds support for template variables in queries, as well as support for querying warning and critical thresholds.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Brian Gann
Brian is the maintainer of two Grafana plugins, and this week he submitted substantial updates to both of them (the Datatable and D3 Gauge panel plugins). He says there’s more to come! Thanks for all your hard work, Brian.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

The Dark Knight popping up in graphs seems to be a recurring theme!
This is the graph Jakub deserves, but not the one he needs right now.



What do you think?

That’s it for the 10th issue of timeShift. Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make this better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Announcing the Winners of the AWS Chatbot Challenge – Conversational, Intelligent Chatbots using Amazon Lex and AWS Lambda

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-the-winners-of-the-aws-chatbot-challenge-conversational-intelligent-chatbots-using-amazon-lex-and-aws-lambda/

A couple of months ago on the blog, I announced the AWS Chatbot Challenge in conjunction with Slack. The AWS Chatbot Challenge was an opportunity to build a unique chatbot that helped to solve a problem or that would add value for its prospective users. The mission was to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend.

I know that you all have been anxiously waiting to hear who the winners of the AWS Chatbot Challenge are, just as much as I was. Well, wait no longer: the winners of the AWS Chatbot Challenge have been decided.

May I have the Envelope Please? (The Trumpets sound)

The winners of the AWS Chatbot Challenge are:

  • First Place: BuildFax Counts by Joe Emison
  • Second Place: Hubsy by Andrew Riess, Andrew Puch, and John Wetzel
  • Third Place: PFMBot by Benny Leong and his team from MoneyLion.
  • Large Organization Winner: ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang

 

Diving into the Winning Chatbot Projects

Let’s take a walk through the details of each of the winning projects to get a view of what made these chatbots distinctive, as well as to learn more about the technologies used to implement each solution.

 

BuildFax Counts by Joe Emison

The BuildFax Counts bot was created as a real solution for the BuildFax company, to decrease the amount of time it takes for the sales and marketing teams to get answers about permits, or about properties whose permits meet certain criteria.

BuildFax, a company co-founded by bot developer Joe Emison, has the only national database of building permits, which updates data from approximately half of the United States on a monthly basis. In order to accommodate the many requests that come in from the sales and marketing team regarding permit information, BuildFax has a technical sales support team that fulfills requests sent to a ticketing system by manually writing SQL queries that run across the shards of the BuildFax databases. Given the large number of requests received by the internal sales support team and the manual nature of setting up the queries, it may take several days for the sales and marketing teams to receive an answer.

The BuildFax Counts chatbot solves this problem by taking the permit inquiry that would normally be sent in as a ticket by the sales and marketing team, and accepting it as input to the chatbot via Slack. Once the inquiry is submitted in Slack, a query executes and the results are returned immediately.

Joe built this solution by first creating a nightly export of the data in their BuildFax MySQL RDS database to CSV files stored in Amazon S3. From the exported CSV files, an Amazon Athena table was created in order to run quick and efficient queries on the data. He then used Amazon Lex to create a bot to handle the common questions and criteria that may be asked by the sales and marketing teams when seeking data from the BuildFax database, modeling the language used in the BuildFax ticketing system. He added several different sample utterances and slot types, both custom and Lex-provided, in order to correctly parse every question and criteria combination that could be received from an inquiry. Using Lambda, Joe created a JavaScript Lambda function that receives information from the Lex intent and uses it to build a SQL statement that runs against the aforementioned Athena table via the AWS SDK for JavaScript in Node.js, returning the inquiry count and the SQL statement used.
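For readers who want to see the shape of that flow, here is a minimal sketch of a Lex fulfillment handler that runs an Athena query and returns a count. The production bot described above was written in JavaScript; this sketch uses Python and boto3 instead, and the slot, database, table, and S3 bucket names are hypothetical.

import time
import boto3

athena = boto3.client("athena")

def handler(event, context):
    # Lex (V1) passes the resolved slot values along with the intent.
    slots = event["currentIntent"]["slots"]
    state = slots.get("State", "")  # hypothetical slot name

    # Build the query from the slot value (real code should whitelist/escape inputs).
    sql = f"SELECT COUNT(*) FROM permits WHERE state = '{state}'"

    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "buildfax"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes (a real bot would also handle FAILED/CANCELLED).
    while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
        time.sleep(1)

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    count = rows[1]["Data"][0]["VarCharValue"]  # row 0 is the header row

    # Hand the answer back to Lex as a fulfilled intent.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"{count} permits match. Query used: {sql}",
            },
        }
    }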

The BuildFax Counts bot is used today by the BuildFax sales and marketing team to get data on inquiries immediately, inquiries that previously took up to a week to return results.

Not only is the BuildFax Counts bot our 1st place winner and a wonderful solution, but its creator, Joe Emison, is a great guy. Joe has opted to donate his prize (the $5,000 cash, the $2,500 in AWS credits, and one re:Invent ticket) to the Black Girls Code organization. I must say, you rock, Joe, for helping these kids get access and exposure to technology.

 

Hubsy by Andrew Riess, Andrew Puch, and John Wetzel

The Hubsy bot was created to redefine and personalize the way users traditionally manage their HubSpot account. HubSpot is a SaaS system providing marketing, sales, and CRM software. Hubsy allows users of HubSpot to create and log engagements with customers, provide sales teams with deal status, and retrieve client contact information quickly. Hubsy uses Amazon Lex’s conversational interface to execute commands from the HubSpot API so that users can gain insights, store and retrieve data, and manage tasks directly from Facebook, Slack, or Alexa.

In order to implement the Hubsy chatbot, Andrew and the team used AWS Lambda to create a Node.js Lambda function that parses the user’s request and calls the HubSpot API, which either fulfills the initial request or returns to the user asking for more information. Terraform was used to automatically set up and update the Lambda function, CloudWatch Logs, and IAM profiles. Amazon Lex was used to build the conversational piece of the bot, defining the utterances that a person on a sales team would likely say when seeking information from HubSpot. To integrate with Alexa, the Amazon Alexa skill builder was used to create an Alexa skill, which was tested on an Echo Dot. CloudWatch Logs are used to capture the Lambda function’s output in order to debug different parts of the Lex intents. To validate the code before the Terraform deployment, ESLint was also used to ensure the code was linted and proper development standards were followed.

 

PFMBot by Benny Leong and his team from MoneyLion

PFMBot, the Personal Finance Management Bot, is a bot to be used with the MoneyLion finance group, which offers customers online financial products: loans, credit monitoring, and a free credit score service to improve the financial health of their customers. Once a user signs up for an account on the MoneyLion app or website, they have the option to link their bank accounts with the MoneyLion APIs. Once a bank account is linked to the APIs, the user can log in to their MoneyLion account and start having a conversation with PFMBot based on their bank account information.

The PFMBot UI is a web interface built with JavaScript. The chatbot was created using Amazon Lex to build utterances based on the possible inquiries about the user’s MoneyLion bank account. PFMBot uses the Lex built-in AMAZON slot types, parsing and converting the slot values before passing them to AWS Lambda. The AWS Lambda functions interacting with Amazon Lex are Java-based Lambda functions which call MoneyLion’s Java-based internal APIs running on Spring Boot. These APIs obtain account data and related bank account information from the MoneyLion MySQL database.

 

ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang

ADP PI (Payroll Innovation) bot is designed to help employees of ADP customers easily review their own payroll details and compare different payroll data by just asking the bot for results. The ADP PI Bot additionally offers issue reporting functionality for employees to report payroll issues and aids HR managers in quickly receiving and organizing any reported payroll issues.

The ADP Payroll Innovation bot is an ecosystem consisting of two chatbots: the ADP PI Bot for external clients (employees and HR managers), and the ADP PI DevOps Bot for the internal ADP DevOps team.


The architecture of the ADP PI DevOps Bot is different from that of the ADP PI Bot shown above, as it is deployed internally to ADP. The ADP PI DevOps Bot accepts input from both Slack and Alexa. When input comes in via Slack, Slack sends the request to Lex to process the utterance. Lex then calls the Lambda backend, which obtains ADP data from the ADP VPC running on AWS. When input comes in from Alexa, a Lambda function is called that also obtains data from the ADP VPC running on AWS.

The architecture of the ADP PI Bot consists of users entering requests and/or reporting issues via Slack. When requests or issues are entered via Slack, the Slack APIs communicate via Amazon API Gateway with AWS Lambda. The Lambda function either writes data into one of the Amazon DynamoDB tables used for recording and escalating issues, or it sends the request to Lex. For escalated issues, DynamoDB integrates with Trello to keep HR managers abreast of them. Once the request data is sent from Lambda to Lex, Lex processes the utterance and calls another Lambda function that integrates with the ADP API, retrieving ADP data from within the ADP VPC, which runs on Amazon Virtual Private Cloud (VPC).
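As a rough illustration of the issue-recording path, here is a minimal Python sketch of an API Gateway-backed Lambda function writing a reported issue to DynamoDB. The table name and field names are hypothetical, not ADP’s actual schema, and the Trello hand-off is omitted.

import datetime
import json
import uuid

import boto3

table = boto3.resource("dynamodb").Table("payroll-issues")  # hypothetical table name

def handler(event, context):
    # With API Gateway's Lambda proxy integration, the Slack payload arrives in event["body"].
    body = json.loads(event["body"])

    item = {
        "issue_id": str(uuid.uuid4()),
        "employee": body.get("user_name", "unknown"),
        "description": body.get("text", ""),
        "reported_at": datetime.datetime.utcnow().isoformat(),
        "status": "OPEN",
    }
    table.put_item(Item=item)

    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Issue " + item["issue_id"] + " recorded."}),
    }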

Python and Node.js were the chosen languages for the development of the bots.

The ADP PI bot ecosystem has the following functional groupings:

Employee Functionality

  • Summarize Payrolls
  • Compare Payrolls
  • Escalate Issues
  • Evolve PI Bot

HR Manager Functionality

  • Bot Management
  • Audit and Feedback

DevOps Functionality

  • Reduce call volume in service centers (ADP PI Bot)
  • Track issues and generate reports (ADP PI Bot)
  • Monitor jobs for various environments (ADP PI DevOps Bot)
  • View job dashboards (ADP PI DevOps Bot)
  • Query job details (ADP PI DevOps Bot)

 

Summary

Let’s all wish the winners of the AWS Chatbot Challenge hearty congratulations on their excellent projects.

You can review more details on the winning projects, as well as all of the submissions to the AWS Chatbot Challenge, at: https://awschatbot2017.devpost.com/submissions. If you are curious about the details of the Chatbot Challenge contest, including resources, rules, prizes, and judges, you can review the original challenge website here: https://awschatbot2017.devpost.com/.

Hopefully, you are just as inspired as I am to build your own chatbot using Lex and Lambda. For more information, take a look at the Amazon Lex developer guide or the AWS AI blog post on Building Better Bots Using Amazon Lex (Part 1).

Chat with you soon!

Tara

Announcement: IPS code

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/announcement-ips-code.html

So after 20 years, IBM is killing off my BlackICE code created in April 1998. So it’s time that I rewrite it.

BlackICE was the first “inline” intrusion-detection system, aka. an “intrusion prevention system” or IPS. ISS purchased my company in 2001 and replaced their RealSecure engine with it, and later renamed it Proventia. Then IBM purchased ISS in 2006. Now, they are formally canceling the project and moving customers onto Cisco’s products, which are based on Snort.

So now is a good time to write a replacement. The reason is that BlackICE worked fundamentally differently than Snort, using protocol analysis rather than pattern-matching. In this way, it worked more like Bro than Snort. The biggest benefit of protocol-analysis is speed, making it many times faster than Snort. The second benefit is better detection ability, as I describe in this post on Heartbleed.

So my plan is to create a new project. I’ll be checking the starter bits into GitHub starting a couple of weeks from now. I need to figure out a new name for the project, so I don’t have to rip off a name from William Gibson like I did last time :).

Some notes:

  • Yes, it’ll be GNU open source. I’m a capitalist, so I’ll earn money like snort/nmap by dual-licensing it, charging companies who don’t want to open-source their addons. All capitalists GNU license their code.
  • C, not Rust. Sorry, I’m going for extreme scalability. We’ll re-visit this decision later when looking at building protocol parsers.
  • It’ll be 95% compatible with Snort signatures. Their language definition leaves so much ambiguous it’ll be hard to be 100% compatible.
  • It’ll support Snort output as well, though really, Snort’s events suck.
  • Protocol parsers in Lua, so you can use it as a replacement for Bro, writing parsers to extract data you are interested in.
  • Protocol state machine parsers in C, like you see in my Masscan project for X.509.
  • First version IDS only. These days, “inline” means also being able to MitM the SSL stack, so I’m going to have to think harder on that.
  • Multi-core worker threads off PF_RING/DPDK/netmap receive queues. Should handle 10gbps, tracking 10 million concurrent connections, with a quad-core CPU.
So if you want to contribute to the project, here’s what I need:
  • Requirements from people who work daily with IDS/IPS today. I need you to write up what your products do well that you really like. I need you to write up what they suck at that needs to be fixed. These need to be in some detail.
  • Testing environment to play with. This means having a small server plugged into a real-world link running at a minimum of several gigabits-per-second available for the next year. I’ll sign NDAs related to the data I might see on the network.
  • Coders. I’ll be doing the basic architecture, but protocol parsers, output plugins, etc. will need work. Code will be in C and Lua for the near term. Unfortunately, since I’m going to dual-license, I’ll need waivers before accepting pull requests.
Anyway, follow me on Twitter @erratarob if you want to contribute.

AWS Summit New York – Summary of Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-summit-new-york-summary-of-announcements/

Whew – what a week! Tara, Randall, Ana, and I have been working around the clock to create blog posts for the announcements that we made at the AWS Summit in New York. Here’s a summary to help you to get started:

Amazon Macie – This new service helps you to discover, classify, and secure content at scale. Powered by machine learning and making use of Natural Language Processing (NLP), Macie looks for patterns and alerts you to suspicious behavior, and can help you with governance, compliance, and auditing. You can read Tara’s post to see how to put Macie to work; you select the buckets of interest, customize the classification settings, and review the results in the Macie Dashboard.

AWS Glue – Randall’s post (with deluxe animated GIFs) introduces you to this new extract, transform, and load (ETL) service. Glue is serverless and fully managed. As you can see from the post, Glue crawls your data, infers schemas, and generates ETL scripts in Python. You define jobs that move data from place to place, with a wide selection of transforms, each expressed as code and stored in human-readable form. Glue uses Development Endpoints and notebooks to provide you with a testing environment for the scripts you build. We also announced that Amazon Athena now integrates with AWS Glue, as do Apache Spark and Hive on Amazon EMR.

AWS Migration Hub – This new service will help you to migrate your application portfolio to AWS. My post outlines the major steps and shows you how the Migration Hub accelerates, tracks, and simplifies your migration effort. You can begin with a discovery step, or you can jump right in and migrate directly. Migration Hub integrates with tools from our migration partners and builds upon the Server Migration Service and the Database Migration Service.

CloudHSM Update – We made a major upgrade to AWS CloudHSM, making the benefits of hardware-based key management available to a wider audience. The service is offered on a pay-as-you-go basis, and is fully managed. It is open and standards compliant, with support for multiple APIs, programming languages, and cryptography extensions. CloudHSM is an integral part of AWS and can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), and through API calls. Read my post to learn more and to see how to set up a CloudHSM cluster.

Managed Rules to Secure S3 Buckets – We added two new rules to AWS Config that will help you to secure your S3 buckets. The s3-bucket-public-write-prohibited rule identifies buckets that have public write access and the s3-bucket-public-read-prohibited rule identifies buckets that have global read access. As I noted in my post, you can run these rules in response to configuration changes or on a schedule. The rules make use of some leading-edge constraint solving techniques, as part of a larger effort to use automated formal reasoning about AWS.
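For reference, turning these rules on programmatically takes only a few lines of boto3. A minimal sketch follows; the rule names below are arbitrary, while the SourceIdentifier values are the AWS-managed rule identifiers.

import boto3

config = boto3.client("config")

for name, identifier in [
    ("s3-bucket-public-write-prohibited", "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"),
    ("s3-bucket-public-read-prohibited", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
]:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
            # Evaluate whenever an S3 bucket's configuration changes.
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        }
    )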

CloudTrail for All Customers – Tara’s post revealed that AWS CloudTrail is now available and enabled by default for all AWS customers. As a bonus, Tara reviewed the principal benefits of CloudTrail and showed you how to review your event history and to deep-dive on a single event. She also showed you how to create a second trail, for use with CloudWatch Events.

Encryption of Data at Rest for EFS – When you create a new file system, you now have the option to select a key that will be used to encrypt the contents of the files on the file system. The encryption is done using an industry-standard AES-256 algorithm. My post shows you how to select a key and to verify that it is being used.
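As a small sketch of what that looks like with boto3 (the creation token and KMS key alias below are hypothetical):

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="my-encrypted-fs",  # any unique string of your choosing
    Encrypted=True,
    KmsKeyId="alias/my-efs-key",      # hypothetical key alias; omit to use the default EFS key
)
print(fs["FileSystemId"], fs["Encrypted"])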

Watch the Keynote
My colleagues Adrian Cockcroft and Matt Wood talked about these services and others on the stage, and also invited some AWS customers to share their stories. Here’s the video:

Jeff;

 

MPAA Revenue Stabilizes, Chris Dodd Earns $3.5 Million

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-revenue-stabilizes-chris-dodd-earns-3-5-million170813/

Protecting the interests of Hollywood, the MPAA has been heavily involved in numerous anti-piracy efforts around the world in recent years.

Through its involvement in the shutdowns of Popcorn Time, YIFY, isoHunt, Hotfile, Megaupload and several other platforms, the MPAA has worked hard to target piracy around the globe.

Perhaps just as importantly, the group lobbies lawmakers globally while managing anti-piracy campaigns both in and outside the US, including the Creative Content UK program.

All this work doesn’t come for free, obviously, so the MPAA relies on six major movie studios for financial support. After plummeting a few years ago, its revenues have steadily recovered and, according to its latest tax filing, the MPAA’s total income is now over $72 million.

The IRS filing, covering the fiscal year 2015, reveals that the movie studios contributed $65 million, the same as a year earlier. Overall revenue has stabilized as well, after a few years of modest growth.

Going over the numbers, we see that salaries make up a large chunk of the expenses. Former Senator Chris Dodd, the MPAA’s Chairman and CEO, is the highest paid employee with a total income of more than $3.5 million, including a $250,000 bonus.

It was recently announced that Dodd will leave the MPAA next month. He will be replaced by Charles Rivkin, another political heavyweight. Rivkin previously served as Assistant Secretary of State for Economic and Business Affairs in the Obama administration.

In addition to Dodd, there are two other employees who made over a million in 2015, Global General Counsel Steve Fabrizio and Diane Strahan, the MPAA’s Chief Operating Officer.

Looking at some of the other expenses we see that the MPAA’s lobbying budget remained stable at $4.2 million. Another $4.4 million went to various grants, while legal costs totaled $7.2 million that year.

More than two million dollars worth of legal expenses were paid to the US law firm Jenner & Block, which represented the movie studios in various court cases. In addition, the MPAA paid more than $800,000 to the UK law firm Wiggin, which assisted the group in local site-blocking efforts.

Finally, it’s worth looking at the various gifts and grants the MPAA hands out. As reported last year, the group handsomely contributes to various research projects. This includes a recurring million dollar grant for Carnegie Mellon’s ‘Initiative for Digital Entertainment Analytics’ (IDEA), which researches various piracy related topics.

IDEA co-director Rahul Telang previously informed us that the gift is used to hire researchers and pay for research materials. It is not tied to a particular project.

We also see $70,000+ in donations for both the Democratic and Republican Attorneys General associations. The purpose of the grants is listed as “general support.” Interestingly, just recently over a dozen Attorneys General released a public service announcement warning the public to stay away from pirate sites.

These types of donations and grants are nothing new and are a regular part of business across many industries. Still, they are worth keeping in mind.

It will be interesting to see which direction the MPAA takes in the years to come. Under Chris Dodd it has booked a few notable successes, but there is still a long way to go before the piracy situation is somewhat under control.



The MPAA’s full Form 990 was published on Guidestar recently and a copy is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Top 10 Most Obvious Hacks of All Time (v0.9)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/top-10-most-obvious-hacks-of-all-time.html

For teaching hacking/cybersecurity, I thought I’d create a list of the most obvious hacks of all time. Not the best hacks, the most sophisticated hacks, or the hacks with the biggest impact, but the most obvious hacks — ones that even the least knowledgeable among us should be able to understand. Below I propose some hacks that fit this bill, though in no particular order.

The reason I’m writing this is that my niece wants me to teach her some hacking. I thought I’d start with the obvious stuff first.

Shared Passwords

If you use the same password for every website, and one of those websites gets hacked, then the hacker has your password for all your websites. The reason your Facebook account got hacked wasn’t because of anything Facebook did, but because you used the same email-address and password when creating an account on “beagleforums.com”, which got hacked last year.

I’ve heard people say “I’m sure, because I choose a complex password and use it everywhere”. No, this is the very worst thing you can do. Sure, you can use the same password on all the sites you don’t care much about, but for Facebook, your email account, and your bank, you should have a unique password, so that when other sites get hacked, your important sites are secure.

And yes, it’s okay to write down your passwords on paper.

Tools: HaveIBeenPwned.com

PIN encrypted PDFs

My accountant emails PDF statements encrypted with the last 4 digits of my Social Security Number. This is not encryption — a 4 digit number has only 10,000 combinations, and a hacker can guess all of them in seconds.
PIN numbers for ATM cards work because ATM machines are online, and the machine can reject your card after four guesses. PIN numbers don’t work for documents, because they are offline — the hacker has a copy of the document on their own machine, disconnected from the Internet, and can continue making bad guesses with no restrictions.
Passwords protecting documents must be long enough that even trillions upon trillions of guesses are insufficient to find them.
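The arithmetic is easy to check; the guessing rate below is only an illustrative order of magnitude for offline cracking.

guesses_per_second = 1_000_000_000        # illustrative offline guessing rate

pin_keyspace = 10 ** 4                    # a 4-digit PIN: 0000-9999
print(pin_keyspace / guesses_per_second)  # gone in a tiny fraction of a second

# A random 12-character password drawn from ~95 printable ASCII characters:
password_keyspace = 95 ** 12
years = password_keyspace / guesses_per_second / (3600 * 24 * 365)
print(f"{years:,.0f} years")              # roughly 17 million years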

Tools: Hashcat, John the Ripper

SQL and other injection

The lazy way of combining websites with databases is to combine user input with an SQL statement. This combines code with data, so the obvious consequence is that hackers can craft data to mess with the code.
No, this isn’t obvious to the general public, but it should be obvious to programmers. The moment you write code that adds unfiltered user-input to an SQL statement, the consequence should be obvious. Yet, “SQL injection” has remained one of the most effective hacks for the last 15 years because somehow programmers don’t understand the consequence.
CGI shell injection is a similar issue. Back in early days, when “CGI scripts” were a thing, it was really important, but these days, not so much, so I just included it with SQL. The consequence of executing shell code should’ve been obvious, but weirdly, it wasn’t. The IT guy at the company I worked for back in the late 1990s came to me and asked “this guy says we have a vulnerability, is he full of shit?”, and I had to answer “no, he’s right — obviously so”.
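To make the SQL injection point above concrete, here is a tiny self-contained example using Python’s sqlite3 module; the table and input are made up, but the pattern is the real issue.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"   # attacker-controlled string

# The lazy way: user input concatenated straight into the SQL statement.
# The injected quote turns the WHERE clause into a tautology and returns every row.
rows = db.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)   # [('alice', 0), ('bob', 1)]

# The fix: parameterized queries keep code and data separate.
rows = db.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- nobody is literally named "alice' OR '1'='1"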

XSS (“Cross Site Scripting”) [*] is another injection issue, but this time at somebody’s web browser rather than a server. It works because websites will echo back what is sent to them. For example, if you search for Cross Site Scripting with the URL https://www.google.com/search?q=cross+site+scripting, then you’ll get a page back from the server that contains that string. If the string is JavaScript code rather than text, then some servers (though not Google) send back the code in the page in a way that it’ll be executed. This is most often used to hack somebody’s account: you send them an email or tweet a link, and when they click on it, the JavaScript gives control of the account to the hacker.

Cross site injection issues like this should probably be their own category, but I’m including it here for now.

More: Wikipedia on SQL injection, Wikipedia on cross site scripting.
Tools: Burpsuite, SQLmap

Buffer overflows

In the C programming language, programmers first create a buffer, then read input into it. If the input is longer than the buffer, it overflows. The extra bytes overwrite other parts of the program, letting the hacker run code.
Again, it’s not a thing the general public is expected to know about, but is instead something C programmers should be expected to understand. They should know that it’s up to them to check the length and stop reading input before it overflows the buffer, that there’s no language feature that takes care of this for them.
We are three decades after the first major buffer overflow exploits, so there is no excuse for C programmers not to understand this issue.

What makes them particularly obvious is the way they are wrapped up in exploits, like in Metasploit. While the bug itself is obviously a bug, actually exploiting it can take some very non-obvious skill. However, once that exploit is written, any trained monkey can press a button and run it. That’s where we get the insult “script kiddie” from — referring to wannabe-hackers who never learn enough to write their own exploits, but who spend a lot of time running the exploit scripts written by better hackers than they are.

More: Wikipedia on buffer overflow, Wikipedia on script kiddie,  “Smashing The Stack For Fun And Profit” — Phrack (1996)
Tools: bash, Metasploit

SendMail DEBUG command (historical)

The first popular email server in the 1980s was called “SendMail”. It had a feature whereby if you send a “DEBUG” command to it, it would execute any code following the command. The consequence of this was obvious — hackers could (and did) upload code to take control of the server. This was used in the Morris Worm of 1988. Most Internet machines of the day ran SendMail, so the worm spread fast infecting most machines.
This bug was mostly ignored at the time. It was thought of as a theoretical problem that might only rarely be used to hack a system. Part of the motivation of the Morris Worm was to demonstrate the consequences of such problems — consequences that should’ve been obvious but somehow were rejected by everyone.

More: Wikipedia on Morris Worm

Email Attachments/Links

I’m conflicted whether I should add this or not, because here’s the deal: you are supposed to click on attachments and links within emails. That’s what they are there for. The difference between good and bad attachments/links is not obvious. Indeed, easy-to-use email systems make detecting the difference harder.
On the other hand, the consequences of bad attachments/links are obvious. That worms like ILOVEYOU spread so easily is because people trusted attachments coming from their friends, and ran them.
We have no solution to the problem of bad email attachments and links. Viruses and phishing are pervasive problems. Yet, we know why they exist.

Default and backdoor passwords

The Mirai botnet was caused by surveillance-cameras having default and backdoor passwords, and being exposed to the Internet without a firewall. The consequence should be obvious: people will discover the passwords and use them to take control of the bots.
Surveillance-cameras have the problem that they are usually exposed to the public, and can’t be reached without a ladder — often a really tall ladder. Therefore, you don’t want a button consumers can press to reset to factory defaults. You want a remote way to reset them. Therefore, they put backdoor passwords to do the reset. Such passwords are easy for hackers to reverse-engineer, and hence, take control of millions of cameras across the Internet.
The same reasoning applies to “default” passwords. Many users will not change the defaults, leaving a ton of devices hackers can hack.

Masscan and background radiation of the Internet

I’ve written a tool that can easily scan the entire Internet in a short period of time. It surprises people that this is possible, but it’s obvious from the numbers. Internet addresses are only 32-bits long, or roughly 4 billion combinations. A fast Internet link can easily handle 1 million packets-per-second, so the entire Internet can be scanned in about 4,000 seconds, little more than an hour. It’s basic math.
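Spelled out as code, the arithmetic looks like this:

addresses = 2 ** 32                # the IPv4 address space, about 4.3 billion
packets_per_second = 1_000_000     # a fast but ordinary scanning rate

seconds = addresses / packets_per_second
print(seconds, seconds / 3600)     # ~4,295 seconds, a little over an hour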
Because it’s so easy, many people do it. If you monitor your Internet link, you’ll see a steady trickle of packets coming in from all over the Internet, especially Russia and China, from hackers scanning the Internet for things they can hack.
People’s reaction to this scanning is weirdly emotional, taking it personally, with responses such as:
  1. Why are they hacking me? What did I do to them?
  2. Great! They are hacking me! That must mean I’m important!
  3. Grrr! How dare they?! How can I hack them back for some retribution!?

I find this odd, because obviously such scanning isn’t personal, the hackers have no idea who you are.

Tools: masscan, firewalls

Packet-sniffing, sidejacking

If you connect to the Starbucks WiFi, a hacker nearby can easily eavesdrop on your network traffic, because it’s not encrypted. Windows even warns you about this, in case you weren’t sure.

At DefCon, they have a “Wall of Sheep”, where they show passwords from people who logged onto stuff using the insecure “DefCon-Open” network. They call them “sheep” for not grasping the basic fact that unencrypted traffic is unencrypted.

To be fair, it’s actually non-obvious to many people. Even if the WiFi itself is not encrypted, SSL traffic is. They expect their services to be encrypted, without them having to worry about it. And in fact, most are, especially Google, Facebook, Twitter, Apple, and other major services that won’t allow you to log in anymore without encryption.

But many services (especially old ones) may not be encrypted. Unless users check and verify them carefully, they’ll happily expose passwords.

What’s interesting is that 10 years ago, most services used SSL only to encrypt the passwords, but then used unencrypted connections after that, relying on “cookies”. This allowed the cookies to be sniffed and stolen, allowing other people to share the login session. I used this on stage at BlackHat to connect to somebody’s GMail session. Google, and other major websites, fixed this soon after. But it should never have been a problem — because the sidejacking of cookies should have been obvious.

Tools: Wireshark, dsniff

Stuxnet LNK vulnerability

Again, this issue isn’t obvious to the public, but it should’ve been obvious to anybody who knew how Windows works.
When Windows loads a .dll, it first calls the function DllMain(). A Windows link file (.lnk) can load icons/graphics from the resources in a .dll file. It does this by loading the .dll file, thus calling DllMain(). A hacker could therefore put a .lnk file on a USB drive pointing to a .dll file, causing arbitrary code execution as soon as a user inserted the drive.
I say this is obvious because I did this, created .lnks that pointed to .dlls, but without hostile DllMain code. The consequence should’ve been obvious to me, but I totally missed the connection. We all missed the connection, for decades.

Social Engineering and Tech Support [* * *]

After posting this, many people have pointed out “social engineering”, especially of “tech support”. This probably should be up near #1 in terms of obviousness.

The classic example of social engineering is when you call tech support and tell them you’ve lost your password, and they reset it for you with minimum of questions proving who you are. For example, you set the volume on your computer really loud and play the sound of a crying baby in the background and appear to be a bit frazzled and incoherent, which explains why you aren’t answering the questions they are asking. They, understanding your predicament as a new parent, will go the extra mile in helping you, resetting “your” password.

One of the interesting consequences is how it affects domain names (DNS). It’s quite easy in many cases to call up the registrar and convince them to transfer a domain name. This has been used in lots of hacks. It’s really hard to defend against. If a registrar charges only $9/year for a domain name, then it really can’t afford to provide very good tech support — or very secure tech support — to prevent this sort of hack.

Social engineering is such a huge problem, and obvious problem, that it’s outside the scope of this document. Just google it to find example after example.

A related issue that perhaps deserves its own section is OSINT [*], or “open-source intelligence”, where you gather public information about a target. For example, on the day the bank manager is out on vacation (which you got from their Facebook post) you show up and claim to be a bank auditor, and are shown into their office where you grab their backup tapes. (We’ve actually done this).

More: Wikipedia on Social Engineering, Wikipedia on OSINT, “How I Won the Defcon Social Engineering CTF” — blogpost (2011), “Questioning 42: Where’s the Engineering in Social Engineering of Namespace Compromises” — BSidesLV talk (2016)

Blue-boxes (historical) [*]

Telephones historically used what we call “in-band signaling”. That’s why when you dial on an old phone, it makes sounds — those sounds are sent no differently than the way your voice is sent. Thus, it was possible to make tone generators to do things other than simply dial calls. Early hackers (in the 1970s) would make tone-generators called “blue-boxes” and “black-boxes” to make free long distance calls, for example.

These days, “signaling” and “voice” are digitized, then sent as separate channels or “bands”. This is called “out-of-band signaling”. You can’t trick the phone system by generating tones. When your iPhone makes sounds when you dial, it’s entirely for your benefit and has nothing to do with how it signals the cell tower to make a call.

Early hackers, like the founders of Apple, are famous for having started their careers making such “boxes” to trick the phone system. The problem was obvious back in the day, which is why, as the phone system moved from analog to digital, the problem was fixed.

More: Wikipedia on blue box, Wikipedia article on Steve Wozniak.

Thumb drives in parking lots [*]

A simple trick is to put a virus on a USB flash drive, and drop it in a parking lot. Somebody is bound to notice it, stick it in their computer, and open the file.

This can be extended with tricks. For example, you can put a file labeled “third-quarter-salaries.xlsx” on the drive that requires macros to be run in order to open. It’s irresistible to other employees who want to know what their peers are being paid, so they’ll bypass any warning prompts in order to see the data.

Another example is to go online and get custom USB sticks made printed with the logo of the target company, making them seem more trustworthy.

We also did a trick of taking an Adobe Flash game, “Punch the Monkey”, and replacing the monkey with the logo of a competitor of our target. They not only played the game (infecting themselves with our virus), but gave it to others inside the company to play, infecting others, including the CEO.

Thumb drives like this have been used in many incidents, such as Russians hacking military headquarters in Afghanistan. It’s really hard to defend against.

More: “Computer Virus Hits U.S. Military Base in Afghanistan” — USNews (2008), “The Return of the Worm That Ate The Pentagon” — Wired (2011), DoD Bans Flash Drives — Stripes (2008)

Googling [*]

Search engines like Google will index your website — your entire website. Frequently companies put things on their website without much protection because they are nearly impossible for users to find. But Google finds them, then indexes them, causing them to pop up with innocent searches.
There are books written on “Google hacking” explaining what search terms to look for, like “not for public release”, in order to find such documents.

More: Wikipedia entry on Google Hacking, “Google Hacking” book.

URL editing [*]

At the top of every browser is what’s called the “URL”. You can change it. Thus, if you see a URL that looks like this:

http://www.example.com/documents?id=138493

Then you can edit it to see the next document on the server:

http://www.example.com/documents?id=138494

The owner of the website may think they are secure, because nothing points to this document, so the Google search won’t find it. But that doesn’t stop a user from manually editing the URL.
An example of this is a big Fortune 500 company that posts the quarterly results to the website an hour before the official announcement. Simply editing the URL from previous financial announcements allows hackers to find the document, then buy/sell the stock as appropriate in order to make a lot of money.
Another example is the classic case of Andrew “Weev” Auernheimer who did this trick in order to download the account email addresses of early owners of the iPad, including movie stars and members of the Obama administration. It’s an interesting legal case because on one hand, techies consider this so obvious as to not be “hacking”. On the other hand, non-techies, especially judges and prosecutors, believe this to be obviously “hacking”.

DDoS, spoofing, and amplification [*]

For decades now, online gamers have figured out an easy way to win: just flood the opponent with Internet traffic, slowing their network connection. This is called a DoS, which stands for “Denial of Service”. DoSing game competitors is often a teenager’s first foray into hacking.
A variant of this is when you hack a bunch of other machines on the Internet, then command them to flood your target. (The hacked machines are often called a “botnet”, a network of robot computers). This is called DDoS, or “Distributed DoS”. At this point, it gets quite serious: instead of targeting competitive gamers, hackers can take down entire businesses. Extortion scams, DDoSing websites and then demanding payment to stop, are a common way hackers earn money.
Another form of DDoS is “amplification”. Sometimes when you send a packet to a machine on the Internet it’ll respond with a much larger response, either a very large packet or many packets. The hacker can then send a packet to many of these sites, “spoofing” or forging the IP address of the victim. This causes all those sites to then flood the victim with traffic. Thus, with a small amount of outbound traffic, the hacker can flood the inbound traffic of the victim.
This is one of those things that has worked for 20 years, because it’s so obvious teenagers can do it, yet there is no obvious solution. President Trump’s executive order on cybersecurity specifically demanded that his government come up with a report on how to address this, but it’s unlikely that they’ll come up with any useful strategy.

More: Wikipedia on DDoS, Wikipedia on Spoofing

Conclusion

Tweet me (@ErrataRob) your obvious hacks, so I can add them to the list.

Announcing the Raspberry Jam Big Birthday Weekend 2018

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/raspberry-jam-big-birthday-weekend-2018/

For the last few years, we have held a big Raspberry Pi community event in Cambridge around Raspberry Pi’s birthday, where people have come together for a huge party with talks, workshops, and more. We want more people to have the chance to join in with our birthday celebrations next year, so we’re going to be coordinating Raspberry Jams all over the world to take place over the Raspberry Jam Big Birthday Weekend, 3–4 March 2018.

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

Big Birthday fun!

Whether you’ve run a Raspberry Jam before, or you’d like to start a new Jam in your area, we invite you to join us for our Big Birthday Weekend, wherever you are in the world. This event will be a community-led, synchronised, global mega-Jam in celebration of our sixth birthday and the digital making community! Members of the Raspberry Pi Foundation team will be attending Jams far and wide to celebrate with you during the weekend.

Jams across the world will receive a special digital pack – be sure to register your interest so we can get your pack to you! We’ll also be sending out party kits to registered Jams – more info on this below.

Need help getting started?

First of all, check out the Raspberry Jam page to read all about Jams, and take a look at our recent blog post explaining the support for Jams that we offer.

If there’s no Jam near you yet, the Raspberry Jam Big Birthday Weekend is the perfect opportunity to start one yourself! If you’d like some help getting your Jam off the ground, there are a few places you can get support:

  • The Raspberry Jam Guidebook is full of advice gathered from the amazing people who run Jams in the UK.
  • The Raspberry Jam Slack team is available for Jam organisers to chat, share ideas, and get help from each other. Just email jam [at] raspberrypi.org and ask to be invited.
  • Attend a Jam! Find an upcoming Jam near you, and go along to get an idea of what it’s like.
  • Email us – if you have more queries, you can email jam [at] raspberrypi.org and we’ll do what we can to help.


Get involved

If you’re keen to start a new Jam, there’s no need to wait until March – why not get up and running over the summer? Then you’ll be an expert by the time the Raspberry Jam Big Birthday Weekend comes around. Check out the guidebook, join the Jam Slack, and submit your event to the map when you’re ready.

Like the idea of running a Jam, but don’t want to do it by yourself? Then feel free to email us, and we’ll try and help you find someone to co-organise it.

If you don’t fancy organising a Jam for our Big Birthday Weekend, but would like to celebrate with us, keep an eye on our website for an update early next year. We’ll publish a full list of Jams participating in the festivities so you can find one near you. And if you’ve never attended a Jam before, there’s no need to wait: find one to join on the map here.


Register your interest

If you think you’d like to run a Jam as part of the Big Birthday Weekend, register your interest now, and you’ll be the first to receive updates. Don’t worry if you don’t have the venue or logistics in place yet – this is just to let us know you’re keen, and to give us an idea about how big our party is going to be.

We will contact you in autumn to give you more information, as well as some useful resources. On top of our regular Raspberry Jam branding pack, we’ll provide a special digital Big Birthday Weekend pack to help you celebrate and tell everyone about your Jam!

Then, once you have confirmed you’re taking part, you’ll be able to register your Jam on our website. This will make sure that other people interested in joining the party can find your event. If your Jam is among the first 150 to be registered for a Big Birthday Weekend event, we will send you a free pack of goodies to use on the big day!

Go fill in the form, and we’ll be in touch!


PS: We’ll be running a big Cambridge event in the summer on the weekend of 30 June–1 July 2018. Put it in your diary – we’ll say more about it as we get closer to the date.

The post Announcing the Raspberry Jam Big Birthday Weekend 2018 appeared first on Raspberry Pi.