Tag Archives: Tools

Canadian Pirate Site Blocks Could Spread to VPNs, Professor Warns

Post Syndicated from Ernesto original https://torrentfreak.com/canadian-pirate-site-blocks-could-spread-to-vpns-professor-warns-180219/

ISP blocking has become a prime measure for the entertainment industry to target pirate sites on the Internet.

In recent years sites have been blocked throughout Europe, in Asia, and even Down Under.

Last month, a coalition of Canadian companies called on the local telecom regulator CRTC to establish a local pirate site blocking program, which would be the first of its kind in North America.

The Canadian deal is backed by both copyright holders and major players in the Telco industry, such as Bell and Rogers, which also have media companies of their own. Instead of court-ordered blockades, they call for a mutually agreed deal where ISPs will block pirate sites.

The plan has triggered a fair amount of opposition. Tens of thousands of people have protested against the proposal and several experts are warning against the negative consequences it may have.

One of the most vocal opponents is University of Ottawa law professor Michael Geist. In a series of articles, professor Geist highlighted several problems, including potential overblocking.

The Fairplay Canada coalition downplays overblocking, according to Geist. They say the measures will only affect sites that are blatantly, overwhelmingly or structurally engaged in piracy, which appears to be a high standard.

However, the same coalition uses a report from MUSO as its primary evidence. This report draws on a list of 23,000 pirate sites, which may not all be blatant enough to meet the blocking standard.

For example, professor Geist notes that it includes a site dedicated to user-generated subtitles as well as sites that offer stream ripping tools which can be used for legal purposes.

“Stream ripping is a concern for the music industry, but these technologies (which are also found in readily available software programs from a local BestBuy) also have considerable non-infringing uses, such as for downloading Creative Commons licensed videos also found on video sites,” Geist writes.

If the coalition tried to have all these sites blocked, the scope would be much larger than currently portrayed. Conversely, if only a few of the sites were blocked, then the evidence used to justify the blocks would have been exaggerated.

“In other words, either the scope of block list coverage is far broader than the coalition admits or its piracy evidence is inflated by including sites that do not meet its piracy standard,” Geist notes.

Perhaps most concerning is the slippery slope that the blocking efforts could create. Professor Geist fears that after the standard piracy sites are dealt with, related targets may be next.

This includes VPN services. While this may sound far-fetched to some, several members of the coalition, such as Bell and Rogers, have already criticized VPNs in the past since these allow people to watch geo-blocked content.

“Once the list of piracy sites (whatever the standard) is addressed, it is very likely that the Bell coalition will turn its attention to other sites and services such as virtual private networks (VPNs).

“This is not mere speculation. Rather, it is taking Bell and its allies at their word on how they believe certain services and sites constitute theft,” Geist adds.

The issue may even be more relevant in this case, since the same VPNs can also be used to circumvent pirate site blockades.

“Further, since the response to site blocking from some Internet users will surely involve increased use of VPNs to evade the blocks, the attempt to characterize VPNs as services engaged in piracy will only increase,” Geist adds.

Potential overblocking is just one of the many issues with the current proposal, according to the law professor. Geist previously highlighted that current copyright law already provides sufficient remedies to deal with piracy and that piracy isn’t that much of a problem in Canada in the first place.

The CRTC has yet to issue its review of the proposal but now that the cat is out of the bag, rightsholders and ISPs are likely to keep pushing for blockades, one way or the other.


Flight Sim Company Embeds Malware to Steal Pirates’ Passwords

Post Syndicated from Andy original https://torrentfreak.com/flight-sim-company-embeds-malware-to-steal-pirates-passwords-180219/

Anti-piracy systems and DRM come in all shapes and sizes, none of them particularly popular, but one deployed by flight sim company FlightSimLabs is likely to go down in history as one of the most outrageous.

It all started yesterday on Reddit when Flight Sim user ‘crankyrecursion’ reported a little extra something in his download of FlightSimLabs’ A320X module.

“Using file ‘FSLabs_A320X_P3D_v2.0.1.231.exe’ there seems to be a file called ‘test.exe’ included,” crankyrecursion wrote.

“This .exe file is from http://securityxploded.com and is touted as a ‘Chrome Password Dump’ tool, which seems to work – particularly as the installer would typically run with Administrative rights (UAC prompts) on Windows Vista and above. Can anyone shed light on why this tool is included in a supposedly trusted installer?”

The existence of a Chrome password dumping tool is certainly cause for alarm, especially if the software had been obtained from a less-than-official source, such as a torrent or similar site, given the potential for third-party pollution.

However, while the possibility remained that a nefarious third party had slipped something nasty into a pirate release, things took an unexpected turn. FlightSimLabs chief Lefteris Kalamaras made a statement basically admitting that his company was behind the malware installation.

“We were made aware there is a Reddit thread started tonight regarding our latest installer and how a tool is included in it, that indiscriminately dumps Chrome passwords. That is not correct information – in fact, the Reddit thread was posted by a person who is not our customer and has somehow obtained our installer without purchasing,” Kalamaras wrote.

“[T]here are no tools used to reveal any sensitive information of any customer who has legitimately purchased our products. We all realize that you put a lot of trust in our products and this would be contrary to what we believe.

“There is a specific method used against specific serial numbers that have been identified as pirate copies and have been making the rounds on ThePirateBay, RuTracker and other such malicious sites,” he added.

In a nutshell, FlightSimLabs installed a password dumper onto ALL users’ machines, whether they were pirates or not, but then only activated the password-stealing module when it determined that specific ‘pirate’ serial numbers had been used which matched those on FlightSimLabs’ servers.

“Test.exe is part of the DRM and is only targeted against specific pirate copies of copyrighted software obtained illegally. That program is only extracted temporarily and is never under any circumstances used in legitimate copies of the product,” Kalamaras added.

That didn’t impress Luke Gorman, who published an analysis slamming the flight sim company for knowingly installing password-stealing malware on users’ machines, even those belonging to people who purchased the title legitimately.

Password stealer in action (credit: Luke Gorman)

Making matters even worse, the FlightSimLabs chief went on to say that information being obtained from pirates’ machines in this manner is likely to be used in court or other legal processes.

“This method has already successfully provided information that we’re going to use in our ongoing legal battles against such criminals,” Kalamaras revealed.

While it remains to be seen how the extracted passwords and usernames will be used elsewhere, it appears that FlightSimLabs has had a change of heart. With immediate effect, the company is pointing customers to a new installer that doesn’t include code for stealing their most sensitive data.

“I want to reiterate and reaffirm that we as a company and as flight simmers would never do anything to knowingly violate the trust that you have placed in us by not only buying our products but supporting them and FlightSimLabs,” Kalamaras said in an update.

“While the majority of our customers understand that the fight against piracy is a difficult and ongoing battle that sometimes requires drastic measures, we realize that a few of you were uncomfortable with this particular method which might be considered to be a bit heavy handed on our part. It is for this reason we have uploaded an updated installer that does not include the DRM check file in question.”

To be continued…


Google on Collision Course With Movie Biz Over Piracy & Safe Harbor

Post Syndicated from Andy original https://torrentfreak.com/google-on-collision-course-with-movie-biz-over-piracy-safe-harbor-180219/

Wherever Google has a presence, rightsholders are around to accuse the search giant of not doing enough to deal with piracy.

Over the past several years, the company has been attacked by both the music and movie industries but despite overtures from Google, criticism still floods in.

In Australia, things are definitely heating up. Village Roadshow, one of the nation’s foremost movie companies, has been an extremely vocal Google critic since 2015 but now its co-chief, the outspoken Graham Burke, seems to want to take things to the next level.

As part of yet another broadside against Google, Burke has for the second time in a month accused Google of playing a large part in online digital crime.

“My view is they are complicit and they are facilitating crime,” Burke said, adding that if Google wants to sue him over his comments, they’re very welcome to do so.

It’s highly unlikely that Google will take the bait. Burke’s attempt at pushing the issue further into the spotlight will have been spotted a mile off but in any event, legal battles with Google aren’t really something that Burke wants to get involved in.

Australia is currently in the midst of a consultation process for the Copyright Amendment (Service Providers) Bill 2017 which would extend the country’s safe harbor provisions to a broader range of service providers including educational institutions, libraries, archives, key cultural institutions and organizations assisting people with disabilities.

For its part, Village Roadshow is extremely concerned that these provisions may be extended to other providers – specifically Google – who might then use expanded safe harbor to deflect more liability in respect of piracy.

“Village Roadshow….urges that there be no further amendments to safe harbor and in particular there is no advantage to Australia in extending safe harbor to Google,” Burke wrote in his company’s recent submission to the government.

“It is very unlikely given their size and power that as content owners we would ever sue them but if we don’t have that right then we stand naked. Most importantly if Google do the right thing by Australia on the question of piracy then there will be no issues. However, they are very far from this position and demonstrably are facilitating crime.”

Accusations of crime facilitation are nothing new for Google, with rightsholders in the US and Europe having accused the company of the same a number of times over the years. In response, Google always insists that it abides by relevant laws and actually goes much further in tackling piracy than legislation currently requires.

On the safe harbor front, Google begins by saying that not expanding provisions to service providers will have a seriously detrimental effect on business development in the region.

“[Excluding] online service providers falls far short of a balanced, pro-innovation environment for Australia. Further, it takes Australia out of step with other digital economies by creating regulatory uncertainty for [venture capital] investment and startup/entrepreneurial success,” Google’s submission reads.

“[T]he Draft Bill’s narrow safe harbor scheme places Australian-based startups and online service providers — including individual bloggers, websites, small startups, video-hosting services, enterprise cloud companies, auction sites, online marketplaces, hosting providers for real-estate listings, photo hosting services, search engines, review sites, and online platforms —in a disadvantaged position compared with global startups in countries that have strong safe harbor frameworks, such as the United States, Canada, United Kingdom, Singapore, South Korea, Japan, and other EU countries.

“Under the new scheme, Australian-based startups and service providers, unlike their international counterparts, will not receive clear and consistent legal protection when they respond to complaints from rightsholders about alleged instances of online infringement by third-party users on their services,” Google notes.

Interestingly, Google then delivers what appears to be a thinly veiled threat.

One of the key anti-piracy strategies touted by the mainstream entertainment companies is collaboration between rightsholders and service providers, including the latter providing voluntary tools to police infringement online. Google says that if service providers are given a raw deal on safe harbor, the extent of future cooperation may be at risk.

“If Australian-based service providers are carved out of the new safe harbor regime post-reform, they will operate from a lower incentive to build and test new voluntary tools to combat online piracy, potentially reducing their contributions to innovation in best practices in both Australia and international markets,” the company warns.

But while Village Roadshow argues against safe harbors and warns that piracy could kill the movie industry, it is quietly optimistic that the tide is turning.

In a presentation to investors last week, the company said that reducing piracy would have “only an upside” for its business but also added that new research indicates that “piracy growth [is] getting arrested.” As a result, the company says that it will build on the notion that “74% of people see piracy as ‘wrong/theft’” and will call on Australians to do the right thing.

In the meantime, the pressure on Google will continue but lawsuits – in either direction – won’t provide an answer.

Village Roadshow’s submission can be found here, Google’s here (pdf).


Game Companies Oppose DMCA Exemption for ‘Abandoned’ Online Games

Post Syndicated from Ernesto original https://torrentfreak.com/game-companies-oppose-dmca-exemption-for-abandoned-online-games-180217/

There are a lot of things people are not allowed to do under US copyright law, but perhaps just as importantly there are exemptions.

The U.S. Copyright Office is currently considering whether or not to loosen the DMCA’s anti-circumvention provisions, which prevent the public from ‘tinkering’ with DRM-protected content and devices.

These exemptions are renewed every three years, after the Office hears various arguments from the public. One of the major topics on the agenda this year is the preservation of abandoned games.

The Copyright Office previously included game preservation exemptions to keep these games accessible. This means that libraries, archives, and museums can use emulators and other circumvention tools to make old classics playable.

Late last year several gaming fans including the Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California, argued for an expansion of this exemption to also cover online games. This includes games in the widely popular multiplayer genre, which require a connection to an online server.

“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE wrote in its comment to the Copyright Office.

“Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

This week, the Entertainment Software Association (ESA), which acts on behalf of prominent members including Electronic Arts, Nintendo and Ubisoft, opposed the request.

While they are fine with the current game-preservation exemption, expanding it to online games goes too far, they say. This would allow outsiders to recreate online game environments using server code that was never published in public.

It would also allow a broad category of “affiliates” to help with this, which, according to the ESA, could include members of the public.

“The proponents characterize these as ‘slight modifications’ to the existing exemption. However they are nothing of the sort. The proponents request permission to engage in forms of circumvention that will enable the complete recreation of a hosted video game-service environment and make the video game available for play by a public audience.”

“Worse yet, proponents seek permission to deputize a legion of ‘affiliates’ to assist in their activities,” ESA adds.

The proposed changes would enable and facilitate infringing use, the game companies warn. They fear that outsiders such as MADE will replicate the game servers and allow the public to play these abandoned games, something games companies would generally charge for. This could be seen as direct competition.

MADE, for example, already charges the public to access its museum so they can play games. This can be seen as commercial use under the DMCA, ESA points out.

“Public performance and display of online games within a museum likewise is a commercial use within the meaning of Section 107. MADE charges an admission fee – ‘$10 to play games all day’.

“Under the authority summarized above, public performance and display of copyrighted works to generate entrance fee revenue is a commercial use, even if undertaken by a nonprofit museum,” the ESA adds.

The ESA also stresses that their members already make efforts to revive older games themselves. There is a vibrant and growing market for “retro” games, which games companies are motivated to serve, they say.

The games companies, therefore, urge the Copyright Office to keep the status quo and reject any exemptions for online games.

“In sum, expansion of the video game preservation exemption as contemplated by Class 8 is not a ‘modest’ proposal. Eliminating the important limitations that the Register provided when adopting the current exemption risks the possibility of wide-scale infringement and substantial market harm,” they write.

The Copyright Office will take all arguments into consideration before it makes a final decision. It’s clear that the wishes of game preservation advocates, such as MADE, are hard to unite with the interests of the game companies, so one side will clearly be disappointed with the outcome.

A copy of ESA’s submission is available here (pdf).


Backblaze and GDPR

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/gdpr-compliance/


Over the next few months the noise over GDPR will finally reach a crescendo. For the uninitiated, “GDPR” stands for “General Data Protection Regulation” and it goes into effect on May 25th of this year. GDPR is designed to protect how personal information of EU (European Union) citizens is collected, stored, and shared. The regulation should also improve transparency as to how personal information is managed by a business or organization.

Backblaze fully expects to be GDPR compliant when May 25th rolls around and we thought we’d share our experience along the way. We’ll start with this post as an introduction to GDPR. In future posts, we’ll dive into some of the details of the process we went through in meeting the GDPR objectives.

GDPR: A Two Way Street

To ensure we are GDPR compliant, Backblaze has assembled a dedicated internal team, engaged outside counsel in the United Kingdom, and consulted with other tech companies on best practices. While it is a sizable effort on our part, we view this as a waypoint in our ongoing effort to secure and protect our customers’ data and to be transparent in how we work as a company.

In addition to the effort we are putting into complying with the regulation, we think it is important to underscore and promote the idea that data privacy and security is a two-way street. We can spend millions of dollars on protecting the security of our systems, but we can’t stop a bad actor from finding and using your account credentials left on a note stuck to your monitor. We can give our customers tools like two-factor authentication and private encryption keys, but it is the partnership with our customers that is the most powerful protection. The same thing goes for your digital privacy — we’ll do our best to protect your information, but we will need your help to do so.

Why GDPR is Important

At the center of GDPR is the protection of Personally Identifiable Information, or “PII.” PII is information that can be used on its own, or in concert with other information, to identify a specific person. This includes obvious data like name, address, and phone number; less obvious data like email address and IP address; and other data such as a credit card number or unique identifiers that can be decoded back to the person.

How Will GDPR Affect You as an Individual

If you are a citizen in the EU, GDPR is designed to protect your private information from being used or shared without your permission. Technically, this only applies when your data is collected, processed, stored or shared outside of the EU, but it’s a good practice to hold all of your service providers to the same standard. For example, when you are deciding to sign up with a service, you should be able to quickly access and understand what personal information is being collected, why it is being collected, and what the business can do with that information. These terms are typically found in “Terms and Conditions” and “Privacy Policy” documents, or perhaps in a written contract you signed before starting to use a given service or product.

Even if you are not a citizen of the EU, GDPR will still affect you. Why? Because nearly every company you deal with, especially online, will have customers that live in the EU. It makes little sense for Backblaze, or any other service provider or vendor, to create a separate set of rules for just EU citizens. In practice, protection of private information should be more accountable and transparent with GDPR.

How Will GDPR Affect You as a Backblaze Customer

Over the coming months Backblaze customers will see changes to our current “Terms and Conditions” and “Privacy Policy,” and to our Backblaze services. While the changes to the Backblaze services are expected to be minimal, the “terms and privacy” documents will change significantly. The changes will include, among other things, the addition of a group of model clauses and related materials. These clauses will be generally consistent across all GDPR-compliant vendors and are meant to be easily understood so that a customer can easily determine how their PII is being collected and used.

Common GDPR Questions:

Here are a few of the more common questions we have heard regarding GDPR.

  1. GDPR will only affect citizens in the EU.
    Answer: The changes that are being made by companies such as Backblaze to comply with GDPR will almost certainly apply to customers from all countries. And that’s a good thing. The protections afforded to EU citizens by GDPR are something all users of our service should benefit from.
  2. After May 25, 2018, a citizen of the EU will not be allowed to use any applications or services that store data outside of the EU.
    Answer: False, no one will stop you as an EU citizen from using the internet-based service you choose. But, you should make sure you know where your data is being collected, processed, and stored. If any of those activities occur outside the EU, make sure the company is following the GDPR guidelines.
  3. My business only has a few EU citizens as customers, so I don’t need to care about GDPR?
    Answer: False, even if you have just one EU citizen as a customer, and you capture, process, or store their PII outside of the EU, you need to comply with GDPR.
  4. Companies can be fined millions of dollars for not complying with GDPR.
    Answer: True, but: the regulation allows for companies to be fined up to €20 million or 4% of annual global revenue (whichever is greater) if they don’t comply with GDPR. In practice, the feeling is that such fines will be reserved (at least initially) for egregious violators that ignore or merely give “lip-service” to GDPR.
  5. You’ll be able to tell a company is GDPR compliant because they have a “GDPR Certified” badge on their website.
    Answer: There is no official GDPR certification or certification program. Companies that comply with GDPR are expected to follow the articles in the regulation, and it should be clear from the outside looking in that they have done so. For example, their “Terms and Conditions” and “Privacy Policy” should clearly spell out how and why they collect, use, and share your information. At some point a real GDPR certification program may be adopted, but not yet.

For all the hoopla about GDPR, the regulation is reasonably well thought out and addresses a very important issue — people’s privacy online. Creating a best practices document, or in this case a regulation, that companies such as Backblaze can follow is a good idea. The document isn’t perfect, and over the coming years we expect there to be changes. One thing we hope for is that the countries within the EU continue to stand behind one regulation and not fragment the document into multiple versions, each applying to themselves. We believe that having multiple different GDPR versions for different EU countries would lead to less protection overall of EU citizens.

In summary, GDPR changes are coming over the next few months. Backblaze has our internal staff and our EU-based legal counsel working diligently to ensure that we will be GDPR compliant by May 25th. We believe that GDPR will have a positive effect in enhancing the protection of personally identifiable information for not only EU citizens, but all of our Backblaze customers.


[$] Authentication and authorization in Samba 4

Post Syndicated from jake original https://lwn.net/Articles/747122/rss

Volker Lendecke is one of the first contributors to Samba,
having submitted his first patches in 1994. In addition to developing
other important file-sharing tools, he’s heavily involved in development of
the winbind service, which is implemented in winbindd. Although the core Active Directory (AD) domain controller
(DC) code was written by his colleague Stefan Metzmacher, winbind is a
crucial component of Samba’s AD functionality.
In his information-packed talk at FOSDEM 2018, Lendecke said he aimed to give a high-level
overview of what AD and Samba authentication is, and in particular the
communication pathways and trust relationships between the parts of
Samba that authenticate a Samba user in an AD environment.

Tickbox Must Remove Pirate Streaming Addons From Sold Devices

Post Syndicated from Ernesto original https://torrentfreak.com/tickbox-remove-pirate-streaming-addons-180214/

Online streaming piracy is on the rise and many people now use dedicated media players to watch content through their regular TVs.

This is a thorn in the side of various movie companies, who have launched a broad range of initiatives to curb this trend.

One of these initiatives is the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

Last year, ACE filed a lawsuit against the Georgia-based company Tickbox TV, which sells Kodi-powered set-top boxes that stream a variety of popular media.

ACE sees these devices as nothing more than pirate tools so the coalition asked the court for an injunction to prevent Tickbox from facilitating copyright infringement, demanding that it removes all pirate add-ons from previously sold devices.

Last month, a California federal court issued an initial injunction, ordering Tickbox to keep pirate addons out of its box and halt all piracy-inducing advertisements going forward. In addition, the court directed both parties to come up with a proper solution for devices that were already sold.

The movie companies wanted Tickbox to remove infringing addons from previously sold devices, but the device seller refused this initially, equating it to hacking.

This week, both parties were able to reach an ‘agreement’ on the issue. They drafted an updated preliminary injunction which replaces the previous order and will be in effect for the remainder of the lawsuit.

The new injunction prevents Tickbox from linking to any “build,” “theme,” “app,” or “addon” that can be indirectly used to transmit copyright-infringing material. Web browsers such as Internet Explorer, Google Chrome, Safari, and Firefox are specifically excluded.

In addition, Tickbox must also release a new software updater that will remove any infringing software from previously sold devices.

“TickBox shall issue an update to the TickBox launcher software to be automatically downloaded and installed onto any previously distributed TickBox TV device and to be launched when such device connects to the internet,” the injunction reads.

“Upon being launched, the update will delete the Subject [infringing] Software downloaded onto the device prior to the update, or otherwise cause the TickBox TV device to be unable to access any Subject Software downloaded onto or accessed via that device prior to the update.”

All tiles that link to copyright-infringing software from the box’s home screen also have to be stripped. Going forward, only tiles to the Google Play Store or to Kodi within the Google Play Store are allowed.

In addition, the agreement also allows ACE to report newly discovered infringing apps or addons to Tickbox, which the company will then have to remove within 24 hours, weekends excluded.

“This ruling sets an important precedent and reduces the threat from piracy devices to the legal market for creative content and a vibrant creative economy that supports millions of workers around the world,” ACE spokesperson Zoe Thorogood says, commenting on the news.

The new injunction is good news for the movie companies, but many Tickbox customers will not appreciate the forced changes. That said, the legal battle is far from over. The main question, whether Tickbox contributed to the alleged copyright infringements, has yet to be answered.

Ultimately, this case is likely to result in a landmark decision, determining what sellers of streaming boxes can and cannot do in the United States.

A copy of the new Tickbox injunction is available here (pdf).


[$] Two FOSDEM talks on Samba 4

Post Syndicated from jake original https://lwn.net/Articles/747098/rss

Much as some of us would love never to have to deal with Windows,
it exists. It wants to authenticate its users and share
resources like files and printers over the network. Although many
enterprises use Microsoft tools to do this, there is a free alternative,
in the form of Samba. While Samba 3 has been happily providing
authentication along with file and print sharing to Windows clients for
many years,
the Microsoft world has been slowly moving toward Active Directory (AD).
Meanwhile, Samba 4, which adds a free reimplementation of AD on Linux, has
been increasingly ready for deployment. Three short talks at FOSDEM 2018
provided three different views of Samba 4, also known as Samba-AD,
and left behind a pretty clear picture that Samba 4 is truly
ready for use.

Subscribers can read on for a report from guest author Tom Yates on the first two of those talks; stay tuned for another on the third soon.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million series E last November, bringing the company’s total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or have totally gone away. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real-time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. My system then routes it through Amazon Kinesis Data Firehose to be stored where I need it on Amazon S3. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.

Setting up a Kinesis stream can be done with a few clicks, and the same goes for Kinesis Firehose. Firehose can be configured to automatically consume data from a Kinesis Data Stream, and then write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console; a rough command-line equivalent is sketched below.
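As a minimal CLI sketch of that wiring (the stream name, function name, account ID, and region here are illustrative placeholders, not values from this project):

# Create the data stream (shard count sized to expected throughput)
aws kinesis create-stream --stream-name game-events --shard-count 2

# Invoke an existing Lambda function on each batch of records from the stream
aws lambda create-event-source-mapping \
  --function-name cleanse-and-aggregate \
  --event-source-arn arn:aws:kinesis:eu-west-1:123456789012:stream/game-events \
  --starting-position LATEST \
  --batch-size 100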

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON data using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received, but compressed, encrypted, and optimized for reading.

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.
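As a rough illustration of that step, a crawler over the Firehose output prefix can be created and started from the CLI; the crawler, role, database, and bucket names here are hypothetical:

# Crawl the Firehose output prefix into the Glue Data Catalog
aws glue create-crawler \
  --name events-crawler \
  --role AWSGlueServiceRole-events \
  --database-name events_db \
  --targets '{"S3Targets":[{"Path":"s3://my-firehose-output-bucket/events/"}]}'

aws glue start-crawler --name events-crawler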

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.
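To sketch what Redshift Spectrum usage looks like in practice, assuming the Glue database from the crawler example above (the schema, table, and IAM role names are illustrative):

-- Expose a Glue Data Catalog database to Redshift Spectrum
CREATE EXTERNAL SCHEMA spectrum_events
FROM DATA CATALOG
DATABASE 'events_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';

-- Query S3-resident data without loading it into the cluster
SELECT event_type, COUNT(*) AS events
FROM spectrum_events.raw_clickstream
GROUP BY event_type;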

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service, or a matter of a simple API call, and crucially doesn’t require me to change one line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products. He is now the Head of Big Data and BI at Cerberus Technologies.


Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.

In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples. You will then have the desired permissions in a multi-tenant environment. We also demo Amazon S3 access from HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, defined in Active Directory: an admin user (admin1), a data engineering group (grp_data_engineering), and a data science group (grp_data_science).

To enable all these groups and users to share the EMR cluster, you need to define a corresponding IAM role for each: emrfs_auth_user_role_admin_user for the admin user, emrfs_auth_group_role_data_eng and emrfs_auth_group_role_data_sci for the two groups, and a default prefix role, emrfs_auth_prefix_role_default_s3_prefix.

In this case, you create a separate Amazon EC2 role that doesn’t give any permission to Amazon S3. Let’s call the role the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. Then, you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback role when the user doesn’t belong to any user/group, nor does the user try to access any listed Amazon S3 prefixes defined on the list.

Important: For all other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as the trusted entity so that it can assume other roles. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you use an EMR security configuration. The role mappings we use in this post are sketched below.
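A minimal sketch of the role-mappings portion of such a security configuration, using the user, group, role, and bucket names from this post (the account ID reuses the one from the trust policy example above):

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
          "IdentifierType": "User",
          "Identifiers": [ "admin1" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_engineering" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_science" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
          "IdentifierType": "Prefix",
          "Identifiers": [ "s3://emrfs-auth-default-bucket-demo/" ]
        }
      ]
    }
  }
}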

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role emrfs_auth_user_role_admin_user on the mapping list, which is a user role, is mapped and picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineer group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the grp_data_engineering group, the group role emrfs_auth_group_role_data_eng is assumed, and the user has proper access to Amazon S3 that is defined in the emrfs_auth_group_role_data_eng role.

Next, the third user comes, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, then the prefix mapping is in effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the EMR_EC2_RestrictedRole. This role is assumed by the EC2 instances in the cluster.

In this process, all the mappings defined are evaluated in the defined order, and the first role that is mapped is assumed, and the rest of the list is skipped.

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it speaks LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to appropriate groups. In this example, we created users like admin1, dataeng1, datascientist1, grp_data_engineering, and grp_data_science, and then add the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The following is an example of the EMR security configuration with a trusted Active Directory domain EMRKRB.TEST.COM and the EMRFS role mappings as we discussed earlier:

The EMRFS role mapping configuration is shown in this example:

We will also provide an example AWS CLI command that you can run.

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }
]

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step 1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except in the Security Options section. In Step 4: Security, under Permissions, choose Custom, and then choose the RestrictedRole that you created before.

Choose the appropriate subnets (these should meet the base requirement in order for a successful Active Directory join—see the Amazon EMR Management Guide for more details), and choose the appropriate security groups to make sure it talks to the Active Directory. Choose a key so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
--release-label emr-5.10.0 \
--instance-type m3.xlarge \
--instance-count 3 \
--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
--service-role EMR_DefaultRole \
--security-configuration MyKerberosConfig \
--configurations file://hue-config.json \
--applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
--kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using the AWS CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the machine where you run the command.

After the cluster reaches the Waiting state, connect to it over SSH using an Active Directory user name and password.

ssh -l <ADUserName>@EMRKRB.TEST.COM <EMR IP or DNS name>

Run two quick commands to verify that the Active Directory join succeeded:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.
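For example, for the dataeng1 user, the output might look roughly like the following. This is illustrative only; the numeric IDs and the exact group list depend on your Active Directory setup:

$ id dataeng1
uid=544710123(dataeng1) gid=544700513(domain users) groups=544700513(domain users),544710456(grp_data_engineering)

$ hdfs groups dataeng1
dataeng1 : domain users grp_data_engineering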

Now, test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command against the data science bucket:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineer group and retry the previous command. Because the data engineer group has no access to the data science bucket, it should throw an Amazon S3 Access Denied exception.

Listing an Amazon S3 bucket that the data engineer group does have access to triggers the group mapping:

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next, we test Apache Hive and then Apache Spark.

To run jobs successfully, you need to create a home directory in HDFS under /user/<username> for every user for staging data. You can configure a step that creates a home directory at cluster launch time for every user who has access to the cluster; see the sketch after this paragraph. In this example, you use Hue, since Hue creates the home directory in HDFS for a user at first login. Here Hue also needs to be integrated with the same Active Directory, as shown in the example configuration described earlier.
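If you prefer to create the home directories up front instead, the following is a minimal sketch of a script you could run as a step, assuming the example users from this walkthrough:

#!/bin/bash
# Create an HDFS home directory for each expected user and hand over ownership.
for u in admin1 dataeng1 datascientist1; do
  sudo -u hdfs hdfs dfs -mkdir -p /user/$u
  sudo -u hdfs hdfs dfs -chown $u:$u /user/$u
done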

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.
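The following is a sketch of such a query; the table name and the column definitions are illustrative assumptions, and only the Amazon S3 location comes from this walkthrough:

CREATE EXTERNAL TABLE test1_dataeng_tb (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/';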

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.
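For example, using the hypothetical table from the sketch above (the values are arbitrary):

INSERT INTO TABLE test1_dataeng_tb VALUES (1, 'engineer1');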

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.
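A sketch mirroring the earlier one; the column definitions and the exact prefix under the data science bucket are assumptions:

CREATE EXTERNAL TABLE test2_datasci_tb (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://emrfs-auth-data-science-bucket-demo/table2_data_sci/';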

The creation is successful.

The last task is to test Spark (it requires a user home directory in HDFS, but Hue created one at first login in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node as the datascientist1 user.

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try a select on both tables. You will find that you can only select from the table defined in the location that your group has access to.
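An illustrative spark-sql session, using the hypothetical table names from the Hive sketches above:

$ spark-sql
spark-sql> SHOW TABLES;
spark-sql> SELECT * FROM test2_datasci_tb LIMIT 10;  -- succeeds for a data science group user
spark-sql> SELECT * FROM test1_dataeng_tb LIMIT 10;  -- fails with an Amazon S3 Access Denied error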

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing the flexibility to configure a shared cluster for different teams and achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provide better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them with big data and machine learning solutions and best practices on the AWS Cloud.

Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.

2018 Picademy dates in the United States

Post Syndicated from Andrew Collins original https://www.raspberrypi.org/blog/new-picademy-2018-dates-in-united-states/

Cue the lights! Cue the music! Picademy is back for another year stateside. We’re excited to bring our free computer science and digital making professional development program for educators to four new cities this summer — you can apply right now.


We’re thrilled to kick off our 2018 season! Before we get started, let’s take a look back at our community’s accomplishments in the 2017 Picademy North America season.

Picademy 2017 highlights

Last year, we partnered with four awesome venues to host eight Picademy events in the United States. At every event across the country, we met incredibly talented educators passionate about bringing digital making to their learners. Whether it was at Ann Arbor District Library’s makerspace, UC Irvine’s College of Engineering, or a creative community center in Boise, Idaho, we were truly inspired by all our Picademy attendees and were thrilled to welcome them to the Raspberry Pi Certified Educator community.

Video: JWU Hosts Picademy. JWU Providence’s College of Engineering & Design recently partnered with the Raspberry Pi Foundation to host Picademy, a free training session designed to give educators the tools to teach computer skills with confidence and creativity. | http://www.jwu.edu

The 2017 Picademy cohorts were a diverse bunch with a lot of experience in their field. We welcomed more than 300 educators from 32 U.S. states and 10 countries. They were a mix of high school, middle school, and elementary classroom teachers, librarians, museum staff, university lecturers, and teacher trainers. More than half of our attendees were teaching computer science or technology already, and over 90% were specifically interested in incorporating physical computing into their work.

Picademy has a strong and lasting impact on educators. Over 80% of graduates said they felt confident using Raspberry Pi after attending, and 88% said they were now interested in leading a digital making event in their community. To showcase two wonderful examples of this success: Chantel Mason led a Raspberry Pi workshop for families and educators in her community in St. Louis, Missouri this fall, and Dean Palmer led a digital making station at the Computer Science for Rhode Island Summit in December.

Picademy 2018 dates

This year, we’re partnering with four new venues to host our Picademy season.


We’ll be at mindSpark Learning in Denver the first week in June, at Liberty Science Center in Jersey City later that month, at Georgia Tech in Atlanta in mid-July, and finally at the Living Computer Museum in Seattle the first week in August.


A big thank you to each of these venues for hosting us and supporting our free educator professional development program!

Ready to join us for Picademy 2018? Learn more and apply now: rpf.io/picademy2018.


Court Orders Tickbox to Keep Pirate Streaming Addons Out

Post Syndicated from Ernesto original https://torrentfreak.com/court-orders-tickbox-to-keep-pirate-streaming-addons-out-180131/

Kodi-powered set-top boxes are a great way to stream video content to a TV, but sellers who ship these devices with unauthorized add-ons give them a bad reputation.

According to the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership comprised of Hollywood studios, Netflix, Amazon, and more than two dozen other companies, Tickbox TV is one of these bad actors.

Last year, ACE filed a lawsuit against the Georgia-based company, which sells Kodi-powered set-top boxes that stream a variety of popular media.

According to ACE, these devices are nothing more than pirate tools, allowing buyers to stream copyright-infringing content and being advertised as such. The coalition, therefore, asked the court for an injunction to prevent Tickbox from facilitating copyright infringement by removing all pirate add-ons from previously sold devices.

This week US District Court Judge Michael Fitzgerald issued a preliminary injunction, which largely sides with the movie companies. According to the Judge, there is sufficient reason to believe that Tickbox can be held liable for inducing copyright infringement.

One of the claims is that Tickbox promoted its service for piracy purposes, and according to the Judge the movie companies provided enough evidence to make this likely. This includes various advertising messages the box seller used.

“There is ample evidence that, at least prior to Plaintiffs’ commencement of this action, TickBox explicitly advertised the Device as a means to accessing unauthorized versions of copyrighted audiovisual content,” Judge Fitzgerald writes.

In its defense, Tickbox argued that it merely offered a computer which users can then configure to their liking. However, the Judge points out that the company went further, as it actively directed its users to install certain themes (builds) to watch movies, TV and sports.

“Thus, the fact that the Device is just a ‘computer’ that can be used for infringing and noninfringing purposes does not insulate TickBox from liability if [..] the Device is actually used for infringing purposes and TickBox encourages such use.”

Taking these and several other factors into account, the Court ruled that a preliminary injunction is warranted at this stage. After the lawsuit was filed, Tickbox had already voluntarily removed many of the inducing advertisements and add-ons, and this will remain so.

The preliminary injunction compels TickBox to keep the current version of the user interface, without easy access to pirate add-ons. The devices should no longer contain links to any of the themes and add-ons that the movie companies have flagged as copyright infringing.

Tickbox had argued that a broad injunction could shut down its business, but the Court counters this: customers will still be able to use the box for legitimate purposes, and if they are no longer interested, that suggests piracy was the main draw.

“[A]n injunction of this scope will not ‘shut down Defendant’s business’ as TickBox contends. In the event that such an injunction does shut TickBox down, that will be indicative not of an unjustifiably burdensome injunction, but of a nonviable business model,” Judge Fitzgerald writes.

The preliminary injunction is not yet final, as several questions remain unanswered.

It’s unclear, for example, if and how Tickbox should remove addons from previously sold devices. The Court, therefore, instructs both parties to attempt to reach agreement on these outstanding issues, to include them in an updated injunction.

The above findings are preliminary and apply specifically to the injunction request and the case itself will continue. However, the Court’s early opinion suggests that Tickbox has plenty of work ahead to prove its innocence.

A copy of the preliminary injunction is available here (pdf), and Judge Fitzgerald’s findings can be found here (pdf).


Security updates for Wednesday

Post Syndicated from jake original https://lwn.net/Articles/745912/rss

Security updates have been issued by Arch Linux (dnsmasq, libmupdf, mupdf, mupdf-gl, mupdf-tools, and zathura-pdf-mupdf), CentOS (kernel), Debian (smarty3, thunderbird, and unbound), Fedora (bind, bind-dyndb-ldap, coreutils, curl, dnsmasq, dnsperf, gcab, java-1.8.0-openjdk, libxml2, mongodb, poco, rubygem-rack-protection, transmission, unbound, and wireshark), Red Hat (collectd, erlang, and openstack-nova), SUSE (bind), and Ubuntu (clamav and webkit2gtk).