Tag Archives: Github

AMD Uses DMCA to Mitigate Massive GPU Source Code Leak

Post Syndicated from Andy original https://torrentfreak.com/amd-uses-dmca-to-mitigate-massive-gpu-source-code-leak-200325/

Graphics cards are big business and AMD is one of the leading brands with an estimated 32% share of the discrete desktop market.

In July 2019, to celebrate its 50th anniversary, AMD released its Radeon RX 5000 series powered by ‘Navi’ GPUs (Graphics Processing Unit). The source code for these devices is extremely sensitive and considered secret but perhaps not for much longer.

This week rumors began to circulate that an unnamed individual had somehow obtained the source for Navi 10, Navi 21 and Arden devices, the latter representing the rumored GPU for the yet-to-be-released Xbox Series X. Confirming whether such leaks are genuine is difficult but yesterday AMD took action which tends to support the theory.

In a DMCA notice sent to development platform Github, AMD identified the recently-created ‘xxXsoullessXxx’ repository and a project titled “AMD-navi-GPU-HARDWARE-SOURCE” as the location of its “stolen” intellectual property.

“This repository contains intellectual property owned by and stolen from AMD,” the semiconductor company wrote. “The original IP is held privately and was stolen from AMD.”

Github responded by immediately taking the repository down, as per AMD’s request. That prompted us to try and find the person behind the repo and to ask some questions about what AMD was trying to suppress. The individual informed TorrentFreak that AMD’s GPU source code was the content in question. (Responses edited for clarity)

“In November 2019, I found AMD Navi GPU hardware source codes in a hacked computer,” the person explained. “The user didn’t take any effective action against the leak of the codes.”

Questioned further on the route of extraction, we were told that a combination of factors led to the leak.

“The source code was unexpectedly achieved from an unprotected computer//server through some exploits. I later found out about the files inside it. They weren’t even protected properly or even encrypted with anything which is just sad.”

The individual, who claims to be female, told us that the package included code for Navi 10 and Navi 21 devices. She also confirmed that the source for the Xbox Series X GPU ‘Arden’ was part of the haul.

When asked whether the person had spoken to AMD about the leak, the answer was negative.

“I haven’t spoken to AMD about it because I am pretty sure that instead of accepting their mistake and moving on, they will try to sue me. So why not just leak it to everyone?” we were told.

The alleged leaker further told us that one “source code packet” had already been released. Whether that is limited to the material made available via Github remains unclear but TF was able to find links to a file-hosting site where an archive claiming to be the content was stored. Given the potentially criminal route via which the content was obtained, we did not download the package.

That AMD is concerned about the leak was underlined once again late yesterday. Having indicated in its initial complaint to Github that the source couldn’t be found anywhere else, the company later backtracked, identifying at least four other locations on Github where the project had been forked. All of those repos have been taken down.

While taking down the repositories is a logical first step for AMD, the gravity of this leak is hard to overstate. The claimed hacker told TF that she valued the source at $100m but how that calculation was arrived at is unknown. While AMD considers its next steps, an even bigger storm may be heading the company’s way.

“If I get no buyer I will just leak everything,” the leaker concluded, adding that the files would be secured with passwords that will only be handed out to select individuals.

From: TF, for the latest news on copyright battles, torrent sites and more. We also have an annual VPN review.

Spotify Hits Windows Software That Downloads Tracks & Removes DRM

Post Syndicated from Andy original https://torrentfreak.com/spotify-hits-windows-software-that-downloads-tracks-removes-drm-200313/

With more than 271 million users across 79 markets, Spotify is the most popular music streaming service in the world.

Its 50 million song library is accessed by 124 million paying subscribers, who gain additional features such as an ad-free experience and the ability to download tracks to their own devices for offline listening. These tracks are encrypted so they can’t be used outside the Spotify software, at least by conventional means.

One tool that turns this business model on its head is Windows-based application XSpotify. The tool has gained popularity for a number of reasons, not least its ability to remove DRM from the tracks stored in Spotify’s extensive library and permanently download them for keeping on users’ machines.

XSpotify has been quietly growing its userbase, offering track downloads from both free Spotify accounts (in 160 kb/s, 32-bit, 44100 Hz .ogg) and premium accounts (in 320 kb/s, 32-bit, 44100 Hz .ogg) while pulling down metadata such as artist, title, and album covers. Considering the above and its ability to block ads, it’s no surprise that Spotify eventually took legal action to tackle the spread of the tool.

This week, Washington-based law firm Perkins Coie LLP sent a broad takedown notice to Github, where XSpotify was available for download, citing breaches of the DMCA by the app and its developer.

“Copyrighted files on Spotify’s services are protected by encryption. Spotify uses a key to decrypt the copyrighted files so legitimate users can listen to the copyrighted files through the Spotify services. Spotify’s encryption system prevents users from listening to copyrighted works without Spotify’s decryption key,” the notice reads.

“XSpotify states that it is a ‘DRM bypass’ that allows users to ‘Download all songs directly from Spotify servers.’ XSpotify’s technology circumvents Spotify’s encryption by stealing the Spotify key and using it in a way Spotify prohibits, namely, enabling users to access encrypted copyrighted content without authorization.

“By providing technology that circumvents Spotify’s access controls, XSpotify violates 17 U.S.C. §§ 1201(a)(2),” the law firm writes.

The section of US law cited by Spotify’s attorneys is clear. Among other things, it states that no person shall offer any technology to the public that is “primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under this title.”

In addition to removing the main XSpotify repository, Github was also ordered to delete almost 130 others that carried forks of the popular tool. At the time of writing, every repository reported by Spotify as infringing has been removed. Of course, XSpotify is still available for download from other locations but whether its developer will continue his work after this warning shot is yet to be seen.

From: TF, for the latest news on copyright battles, torrent sites and more. We also have an annual VPN review.

DMCA Notices Took Down 14,320 Github Projects in 2019

Post Syndicated from Andy original https://torrentfreak.com/dmca-notices-took-down-14320-github-projects-in-2019-200226/

Code development platform Github is home to a staggering 40 million developers worldwide, most of whom use its world-class features without much outside interference.

Occasionally, however, code published to the site can attract the negative attention of outside parties, some of whom believe that it somehow infringes on their rights.

The reasons for these claims are varied but most commonly on TF we cover copyright infringement issues. Recent examples can be found in a notice filed by the MPA which targeted the repository of ‘pirate’ app TeaTV or when Instagram requested code to be removed, ostensibly to protect its users’ copyrights.

In common with platforms like Google and Reddit, Github publishes an annual transparency report, which offers additional information on how the company responds to requests for user information and removal of content. Its latest, the sixth since 2014, reveals that in response to copyright complaints, Github permanently took down 14,320 projects.

“14,320 may sound like a lot of projects,” says Abby Vollmer, senior manager of policy at GitHub, “but it’s only about one one-hundredth of a percent of the repositories on GitHub at the end of 2019.”

Since multiple projects can be targeted in a single DMCA notice, the actual volume of notices processed by Github is much lower, 1,762 valid notices to be precise. The company handled just 37 counter-notices, which are sent to Github by users requesting that content is reinstated due to the belief it is non-infringing or has been incorrectly targeted.

The counter-notice figure provided by Github also includes DMCA notice retractions, where the submitter withdrew their original complaint. The company does not track incomplete or insufficient notices, so these are not detailed in its report.

Github acknowledges that it has seen considerable growth in the volume of DMCA notices submitted to the platform over the past several years. However, the company highlights that the increase in repositories affected is somewhat in line with its surging userbase.

“Based on DMCA data we’ve compiled over the last few years, we’ve seen an increase in DMCA notices received and processed, trending with growth in registered users over the same period of time, until this year,” Vollmer says.

“However, if we compare the number of repositories affected by DMCA notices to the approximate number of registered users over the same period of time, then we see an increase this year that correlates with that of GitHub’s community.”

While most copyright complaints are filed by companies, their agents, or individuals, Github said that this year it needed to add a new category titled ‘court-ordered takedowns’. Github says it received a single order in 2019 that was about copyright but wasn’t a regular complaint. Unfortunately, however, it’s not allowed to provide many details.

“We received one this year and interestingly, it was about copyright but not under the DMCA. Since it was a gagged court order, we weren’t able to provide our usual transparency to the user of sharing and posting the notice, but we are able to report on the fact that we processed a takedown on this basis,” Vollmer concludes.

Overall, the number of copyright complaints received by Github is relatively small considering its size. It’s unlikely that leading rightsholders see the platform as a major problem but over the years many have had projects taken down, including Nintendo, WinRAR, the body in charge of HDMI administration, and Grindr.

A pirate site even got in on the act, but that was a clear anomaly.

Github’s 2019 Transparency Report can be viewed in full here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

MPA Targets Pirate App TeaTV, Asks Github to Consider Repeat Infringer Policy

Post Syndicated from Andy original https://torrentfreak.com/mpa-targets-pirate-app-teatv-asks-github-to-consider-repeat-infringer-policy-200222/

Accessing regular websites in order to stream copies of the latest movies and TV shows is still popular among Internet users but the rise of set-top boxes and portable devices has fueled the uptake of app-based piracy tools.

It’s a cramped marketplace but last year TeaTV gained notable traction and was installed by hundreds of thousands, maybe even millions, of pirates looking to access video at zero cost. This momentum earned TeaTV a place in an October 2019 CNBC feature, something which triggered even more interest in the tool, as well as its subsequent disappearance from the web.

In the wake of that piece, a source close to TeaTV informed TF that the software (which is available for Android, Windows and macOS) would be back, a promise that was later fulfilled. However, it now transpires that Hollywood is attempting to disrupt access to the tool via complaints filed with code development platform Github.

A notice filed by the Motion Picture Association (MPA) this week begins by referencing the CNBC article, noting that TeaTV “is an app notoriously devoted to copyright infringement.” It reveals previous correspondence with Github during October and November 2019, and January 2020, and thanks the Microsoft-owned platform for “additional guidance” offered in late December 2019.

“We previously provided you links to the Github repositories that TeaTV is using and are now providing you with the attached file titled ‘GitHub-Code’ which shows code hosted on Github that provides links to pirate sites with infringing copies of motion pictures and television shows that are scraped by the TeaTV app to provide access to the infringing content users are looking for,” the complaint reads.

Four repositories listed by the MPA in previous notices have already been taken down but the MPA has now taken further action by demanding the deletion of repos carrying the three executable files for the Android, Windows, and macOS variants of TeaTV.

“Also attached is a file titled ‘GitHub-Executables’ which shows that the final version of the app is available for download from the GitHub platform. These executable files are pre-configured to infringe copyright-protected motion pictures and television shows that are owned or controlled by our Members,” the MPA writes.

Additionally, the Hollywood group says it carried out a network traffic analysis on the TeaTV app and found that its API connected to accounts on Github, located at three URLs, all of which should be removed.

After the MPA reminded Github of the 2005 MGM v. Grokster decision, noting that “the distribution of a product can itself give rise to liability where evidence shows that the distributor intended and encouraged the product to be used to infringe”, Github removed all of the URLs listed in the complaint, leaving the familiar “unavailable” notice behind.

While the MPA will be satisfied with the suspension of the pages, its takedown notice also asks Github to consider 17 U.S.C. § 512(i)(1)(A), which grants an exemption from liability for service providers when they take action against repeat infringers.

“The limitations on liability established by this section shall apply to a service provider only if the service provider…has adopted and reasonably implemented, and informs subscribers and account holders of the service provider’s system or network of, a policy that provides for the termination in appropriate circumstances of subscribers and account holders of the service provider’s system or network who are repeat infringers,” the code reads.

The main TeaTV account and repository are currently active but with no content available. TeaTV.net, however, is still online, as is the .XYZ domain from where the clients can be downloaded and movies and TV shows streamed, albeit in a cumbersome fashion when compared to the app.

TorrentFreak requested comment from the operators of TeaTV as to whether the MPA had been in touch directly. At the time of publishing, we were yet to receive a response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Instagram Uses DMCA Complaint to Protect Users’ “Copyrighted Works”

Post Syndicated from Andy original https://torrentfreak.com/instagram-uses-dmca-complaint-to-protect-users-copyrighted-works-200130/

DMCA notices are sent in their millions every single week, mainly to restrict access to copyright-infringing content. These notices usually target the infringing content itself or links to the same, but there are other options too.

The anti-circumvention provisions of the DMCA allow companies that own or provide access to copyrighted works to target tools and systems that facilitate access to that content in an unauthorized manner. Recent examples can be found in the war currently being waged by the RIAA against various YouTube-ripping sites, which provide illicit access to copyright works, according to the industry group.

This week Facebook-owned Instagram entered the arena when it filed a DMCA notice against code repository Github. It targeted Instagram-API, an independent Instagram API created by a Spain-based developer known as ‘mgp25’. Instagram claims that at least in part, the notice was filed to prevent unauthorized access to its users’ posts, which can contain copyrighted works.

“The Company maintains technological measures to control access to and protect Instagram users’ posts, which are copyrighted works. This notice relates to GitHub users offering, providing, and/or trafficking in technologies, products, and/or services primarily designed to circumvent the Company’s technological measures,” the complaint begins.

According to Instagram, Instagram-API is code that was designed to emulate the official Instagram mobile app, allowing users to send and receive data, including copyrighted content, through Instagram’s private API. It’s a description that is broadly confirmed by the tool’s creator.

“The API is more or less like a replica of the mobile app. Basically, the API mimics the requests Instagram does, so if you want to check someone’s profile, the mobile app uses a certain request, so through basic analysis we can emulate that request and be able to get the profile info too. The same happens with other functionalities,” mgp25 informs TorrentFreak.

While Instagram clearly views the tool as a problem, mgp25 says that it was originally created to solve one.

“Back in the day I wasn’t able to use Instagram on my phone, and I wanted something to upload photos and communicate with my friends. That’s why I made the API in the first place,” he explains.

There are no claims from Instagram that Instagram-API was developed using any of its copyrighted code. Indeed, the tool’s developer says that it was the product of reverse-engineering, something he believes should be protected in today’s online privacy minefield.

“I think reverse engineering should be exempt from the DMCA and should be legal. By reverse engineering we can verify whether apps are violating user privacy, stealing data, backdooring your device or doing even worse things,” he says.

“Without reverse engineering we wouldn’t know whether the software was a government spy tool. Reverse engineering should be a right every user should have, not only to provide interoperability functionalities but to assure their privacy rights are not being violated.”

While many would consider that to be a reasonable statement, Instagram isn’t happy with the broad abilities of Instagram-API. In addition to the above-mentioned features, it also enables access to “Instagram users’ copyrighted works in manners that exceed the scope of access and functionality that would be permitted by a user with a legitimate, authorized Instagram account,” the company adds.

After the filing of the complaint, it took a couple of days for Github to delete the project but it is now well and truly down. The same is true for more than 1,500 forks of Instagram-API that were all wiped out after their URLs were detailed in the same complaint.

Regardless of how mgp25 feels about the takedown, the matter will now come to a close. The developer says he has no idea how far Instagram and Facebook are prepared to go in order to neutralize his software so he won’t be filing a counter-notice to find out.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

WinRAR Nukes Pirate Keygen But is a “Good Guy” Towards Regular Users

Post Syndicated from Andy original https://torrentfreak.com/winrar-locks-pirate-keygen-but-the-good-guy-towards-regular-users-191207/

There’s a high probability that most people reading this article will be familiar with the image on the right.

That’s because in computing terms, data compression tool WinRAR has been around for what seems like forever.

Indeed, with its 25th birthday coming up next April, WinRAR launched before many of its users were even born. Nevertheless, it has stood the test of time and according to the latest estimates, now has around 500 million users.

Indeed, the company told us this week that WinRAR is the third most installed software in the world behind Chrome and Acrobat Reader. The reason for that, at least in part, is the company’s liberal business model.

Perhaps the most curious thing about this ubiquitous tool is that while WinRAR gives the impression of being free, technically it is paid software. Users get a 40-day period to trial the tool and then, if they like it, they can part with cash in order to obtain a license.

However, WinRAR never times out and relies completely on users’ inclination to pay for something that doesn’t need to be paid for to retain functionality. As a result, WinRAR has huge numbers of pirate users yet the company does pretty much nothing to stop them.

Those who do pay for a license get rid of a ‘nag’ screen and gain a couple of features that most people don’t need. But for pirates (and the tool is massively popular with pirates), an unlicensed WinRAR still does what it’s supposed to, i.e. unpacking all those pesky compressed pirate releases.

Of course, there are people out there who would still rather not pay a penny to use a piece of software that is essentially free to use. So, in order to obtain a ‘license’ and get rid of the nag screen, they use a piece of software called a ‘keygen’ that generates one for them.

The company behind WinRAR doesn’t seem to care too much about casual piracy but it is bothered about keygens. This week we spotted a lawyer for the company Win.rar GmbH filing a complaint with code repository Github targeting such a tool.

“We have put in a licensing generation system that is impossible to decrypt (until now that is). This system works by our employees generating a unique .key file and the end user putting it in their WinRAR installation directory so in that way the product activates,” the notice states.

“It violates our technological measures by the repo holding the source code and the compiled application to a custom-created keygen which is built to bypass our licensing generation system and allows end users to create their own unique .key files for no charge which therefore bypasses our technological measures.”

The format of the DMCA notice is part of a growing trend. It doesn’t claim that the keygen copies WinRAR’s code but instead states that it violates the company’s rights by breaching the anti-circumvention provisions of the DMCA. As such, the notice cannot be easily countered.

“This GitHub repository violates a section of 17 U.S.C. § 1201 which is a part of the Digital Millennium Copyright Act,” the notice adds.

“Since 17 U.S.C. § 1201 doesn’t have a counter-notification process if GitHub does not provide one then appealing of this notice is improbable. GitHub is legally not required to provide an appeals system for anti-circumvention cases.”

Github didn’t waste any time taking the repository down but before it disappeared, this is what it looked like. Notice the Chinese text at the top, which is of special interest.

The author of the tool identifies as Double Sine or DoubleLabyrinth, hailing from Tianjin University in China. He or she seems to have created the keygen as a technical challenge but there is some irony to be found in the coder’s location.

Since 2015, WinRAR has provided a completely free version of WinRAR for regular users in China. This, the company said, was to thank people for sticking with WinRAR over the years.

“We are proud to announce that after years of hard work, we now finally provide a completely free Simplified Chinese version of WinRAR to individual users in China,” a note on the local website reads.

“You can now officially download and use WinRAR completely free of charge from winrar.com.cn, without searching or downloading cracked products, or looking for illegal versions, or downloading from unsafe websites at risk of security.”

Speaking with TorrentFreak, a representative from WinRAR’s marketing team couldn’t immediately elaborate on the specifics of the DMCA notice but noted that people shouldn’t really have a need to pirate its product.

“Indeed this is an interesting case, as we also don’t see the necessity of using a pirated version of WinRAR instead of our trial version. We know that our licensing policy for end customers is not as strict as with other software publishers, but for us it is still important that WinRAR is being used, even if the trial period might be over,” the representative said.

“From a legal perspective, everybody should buy at the end of the trial, but we still think that at least uncompressing content should be still possible as unrar.exe is open source anyway.”

The company also highlighted the existence of cartoons and memes on the Internet which relate to WinRAR’s indefinite trial, noting that “we like all of them and it meets our sense of humor.”

Perhaps more importantly, however, the company understands the importance of maintaining the positive image it’s earned by not persecuting users who use the product beyond its trial period. Going after them isn’t on the agenda but they would prefer people not to go down the piracy route.

“[I]n the field of private users we have always been the ‘good guys’ by not starting legal actions against every private user using it beyond the trial period, thus we also don’t understand the need of pirated license keys for WinRAR,” the company concludes.

Rival open source tools such as 7-Zip offer similar functionality for free, no keygens needed or nag screens in sight. But, for the majority of users, WinRAR remains the tool of choice, even after a quarter of a century. It’s a remarkable achievement backed up by an intriguing business model.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

truffleHog – Search Git for High Entropy Strings with Commit History

Post Syndicated from Darknet original https://www.darknet.org.uk/2019/12/trufflehog-search-git-for-high-entropy-strings-with-commit-history/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


truffleHog is a Python-based tool to search Git for high entropy strings, digging deep into commit history and branches. This is effective at finding secrets accidentally committed.

truffleHog previously functioned by running entropy checks on git diffs. This functionality still exists, but high signal regex checks have been added, and the ability to suppress entropy checking has also been added.

truffleHog --regex --entropy=False https://github.com/dxa4481/truffleHog.git

or

truffleHog file:///user/dxa4481/codeprojects/truffleHog/

truffleHog will go through the entire commit history of each branch, checking each diff from each commit for secrets.
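
The “high entropy” heuristic boils down to Shannon entropy over the characters of each candidate string: keys and tokens look close to random, while ordinary code and prose do not. Here is a minimal sketch of that idea (not truffleHog’s own code, and the 4.5 threshold is purely illustrative):

    import math
    from collections import Counter

    def shannon_entropy(data):
        # Bits of entropy per character, based on character frequencies
        if not data:
            return 0.0
        length = len(data)
        return -sum((count / length) * math.log2(count / length)
                    for count in Counter(data).values())

    # An ordinary line of code scores low; a base64-looking token scores high
    samples = [
        "def main():",
        "ZWVTjPQSdhwRgl204Hc51YCsritMIzn8B=/p9UyeX7xu6KkAGqfm3FJ+oObLDNEva",
    ]
    for candidate in samples:
        entropy = shannon_entropy(candidate)
        print(f"{entropy:.2f}  flagged={entropy > 4.5}  {candidate[:30]}")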

Read the rest of truffleHog – Search Git for High Entropy Strings with Commit History now! Only available at Darknet.

Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy

Post Syndicated from Noha Ghazal original https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/

In this post, I explain how to use the Jenkins open-source automation server to deploy AWS CodeBuild artifacts with AWS CodeDeploy, creating a functioning CI/CD pipeline. When properly implemented, the CI/CD pipeline is triggered by code changes pushed to your GitHub repo, automatically fed into CodeBuild, then the output is deployed on CodeDeploy.

Solution overview

The functioning pipeline creates a fully managed build service that compiles your source code. It then produces code artifacts that can be used by CodeDeploy to deploy to your production environment automatically.

The deployment workflow starts by placing the application code on the GitHub repository. To automate this scenario, I added source code management to the Jenkins project under the Source Code section. I chose the GitHub option, which by design clones a copy from the GitHub repo content in the Jenkins local workspace directory.

In the second step of my automation procedure, I enabled a trigger for the Jenkins server using a “Poll SCM” option. This option makes Jenkins check the configured repository for any new commits/code changes at a specified frequency. In this testing scenario, I configured the trigger to poll every two minutes. The automated Jenkins deployment process works as follows:

  1. Jenkins checks for any new changes on GitHub every two minutes.
  2. Change determination:
    1. If Jenkins finds no changes, Jenkins exits the procedure.
    2. If it does find changes, Jenkins clones all the files from the GitHub repository to the Jenkins server workspace directory.
  3. The AWS CodeBuild plugin zips the files and sends them to a predefined Amazon S3 bucket location, then initiates the CodeBuild project, which obtains the code from the S3 bucket. The project then creates the output artifact zip file, and stores that file again on the S3 bucket.
  4. The File Operation plugin deletes all the files cloned from GitHub. This keeps the Jenkins workspace directory clean.
  5. The HTTP Request plugin downloads the CodeBuild output artifacts from the S3 bucket.
    I edited the S3 bucket policy to allow access from the Jenkins server IP address. See the following example policy:

    {
      "Version": "2012-10-17",
      "Id": "S3PolicyId1",
      "Statement": [
        {
          "Sid": "IPAllow",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*",
          "Condition": {
            "IpAddress": {"aws:SourceIp": "x.x.x.x/x"}
          }
        }
      ]
    }

    Replace x.x.x.x/x with the public IP address (or CIDR range) of the Jenkins server.

    This policy enables the HTTP request plugin to access the S3 bucket. This plugin doesn’t use the IAM instance profile or the AWS access keys (access key ID and secret access key).

  6. The output artifact is a compressed ZIP file. The CodeDeploy plugin expects unzipped files in the workspace, which it then re-zips and sends over to the S3 bucket for the CodeDeploy deployment. For that, I used the File Operation plugin to perform the following (a rough Python sketch of the download-and-unzip logic appears after this list):
    1. Unzip the CodeBuild zipped artifact output in the Jenkins root workspace directory. At this point, the workspace directory should include the original zip file downloaded from the S3 bucket from Step 5 and the files extracted from this archive.
    2. Delete the original .zip file, and leave only the source bundle contents for the deployment.
  7. The CodeDeploy plugin selects and zips all workspace directory files. This plugin uses the CodeDeploy application name, deployment group name, and deployment configurations that you configured to initiate a new CodeDeploy deployment. The CodeDeploy plugin then uploads the newly zipped file according to the S3 bucket location provided to CodeDeploy as a source code for its new deployment operation.
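
Inside Jenkins, the HTTP Request and File Operations plugins handle steps 5 and 6, but written out as a rough standalone Python sketch the same download-and-unzip logic looks something like this (the bucket URL and workspace path below are placeholders based on the examples later in this post):

    import os
    import zipfile

    import requests

    # Placeholders -- substitute your own bucket name, Region endpoint and workspace path
    ARTIFACT_URL = "http://s3-eu-central-1.amazonaws.com/mybucketname/codebuild-artifact.zip"
    WORKSPACE = "/var/lib/jenkins/workspace/CodeDeployApp"

    # Step 5: download the CodeBuild output artifact (the bucket policy above allows this IP)
    archive_path = os.path.join(WORKSPACE, "codebuild-artifact.zip")
    response = requests.get(ARTIFACT_URL, timeout=60)
    response.raise_for_status()
    with open(archive_path, "wb") as f:
        f.write(response.content)

    # Step 6: unzip into the workspace, then delete the original archive
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(WORKSPACE)
    os.remove(archive_path)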

Walkthrough

In this post, I walk you through the following steps:

  • Creating resources to build the infrastructure, including the Jenkins server, CodeBuild project, and CodeDeploy application.
  • Accessing and unlocking the Jenkins server.
  • Creating a project and configuring the CodeDeploy Jenkins plugin.
  • Testing the whole CI/CD pipeline.

Create the resources

In this section, I show you how to launch an AWS CloudFormation template, a tool that creates the following resources:

  • Amazon S3 bucket—Stores the GitHub repository files and the CodeBuild artifact application file that CodeDeploy uses.
  • IAM S3 bucket policy—Allows the Jenkins server access to the S3 bucket.
  • JenkinsRole—An IAM role and instance profile for the Amazon EC2 instance for use as a Jenkins server. This role allows Jenkins on the EC2 instance to access the S3 bucket to write files and to create CodeDeploy deployments.
  • CodeDeploy application and CodeDeploy deployment group.
  • CodeDeploy service role—An IAM role to enable CodeDeploy to read the tags applied to the instances or the EC2 Auto Scaling group names associated with the instances.
  • CodeDeployRole—An IAM role and instance profile for the EC2 instances of CodeDeploy. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • CodeBuildRole—An IAM role to be used by CodeBuild to access the S3 bucket and create the build projects.
  • Jenkins server—An EC2 instance running Jenkins.
  • CodeBuild project—This is configured with the S3 bucket and S3 artifact.
  • Auto Scaling group—Contains EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancer.
  • Auto Scaling launch configurations—For use by the Auto Scaling group.
  • Security groups—For the Jenkins server, the load balancer, and the CodeDeploy EC2 instances.

 

  1. Create the CloudFormation stack (for example, in the AWS Frankfurt Region) by launching the provided CloudFormation template.
  2. Choose Next and provide the following values on the Specify Details page:
    • For Stack name, name your stack as you prefer.
    • For CodedeployInstanceType, choose the default of t2.medium.
      To check the supported instance types by AWS Region, see Supported Regions.
    • For InstanceCount, keep the default of 3, to launch three EC2 instances for CodeDeploy.
    • For JenkinsInstanceType, keep the default of t2.medium.
    • For KeyName, choose an existing EC2 key pair in your AWS account. Use this to connect by using SSH to the Jenkins server and the CodeDeploy EC2 instances. Make sure that you have access to the private key of this key pair.
    • For PublicSubnet1, choose a public subnet from which the load balancer, Jenkins server, and CodeDeploy web servers launch.
    • For PublicSubnet2, choose a public subnet from which the load balancers and CodeDeploy web servers launch.
    • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
    • For YourIPRange, enter the CIDR block of the network from which you connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, go to https://www.whatismyip.com/ to find your IP address, and then enter your IP address followed by /32. If you don’t have a static IP address (or aren’t sure if you have one), enter 0.0.0.0/0. Then, any address can reach your Jenkins server.
  3. Choose Next.
  4. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box.
  5. Choose Create and wait for the CloudFormation stack status to change to CREATE_COMPLETE. This takes approximately 6–10 minutes.
  6. Check the resulting values on the Outputs tab. You need them later.
  7. Browse to the ELBDNSName value from the Outputs tab, verifying that you can see the Sample page. You should see a congratulatory message.
  8. Your Jenkins server should be ready to deploy.

Access and unlock your Jenkins server

In this section, I discuss how to access, unlock, and customize your Jenkins server.

  1. Copy the JenkinsServerDNSName value from the Outputs tab of the CloudFormation stack, and paste it into your browser.
  2. To unlock the Jenkins server, SSH to the server using the IP address and key pair, following the instructions from Unlocking Jenkins.
  3. Use the root user to cat the log file (/var/log/jenkins/jenkins.log) and copy the automatically generated alphanumeric password (between the two sets of asterisks). Then, use the password to unlock your Jenkins server, as shown in the following screenshots.
  4. On the Customize Jenkins page, choose Install suggested plugins.

  5. Wait until Jenkins installs all the suggested plugins. When the process completes, you should see the check marks alongside all of the installed plugins.
  6. On the Create First Admin User page, enter a user name, password, full name, and email address of the Jenkins user.
  7. Choose Save and continue, Save and finish, and Start using Jenkins.
    After you install all the needed Jenkins plugins along with their required dependencies, the Jenkins server restarts. This step should take about two minutes. After Jenkins restarts, refresh the page. Your Jenkins server should be ready to use.

Create a project and configure the CodeDeploy Jenkins plugin

Now, to create our project in Jenkins we need to configure the required Jenkins plugin.

  1. Sign in to Jenkins with the user name and password that you created earlier and click on Manage Jenkins then Manage Plugins.
  2. From the Available tab, search for and select the plugins below, then choose Install without restart:
    AWS CodeDeploy
    AWS CodeBuild
    Http Request
    File Operations
  3. Select the Restart Jenkins when installation is complete and no jobs are running check box.
    Jenkins will take a couple of minutes to download the plugins along with their dependencies, and will then restart.
  4. Log in, then choose New Item, Freestyle project.
  5. Enter a name for the project (for example, CodeDeployApp), and choose OK.
  6. On the project configuration page, under Source Code Management, choose Git. For Repository URL, enter the URL of your GitHub repository.
  7. For Build Triggers, select the Poll SCM check box. In the Schedule, for testing enter H/2 * * * *. This entry tells Jenkins to poll GitHub every two minutes for updates.
  8. Under Build Environment, select the Delete workspace before build starts check box. Each Jenkins project has a dedicated workspace directory. This option allows you to wipe out your workspace directory with each new Jenkins build, to keep it clean.
  9. Under Build, add a build step and choose AWS CodeBuild. For AWS Configuration, choose Manually specify access and secret keys and provide the keys.
  10. From the CloudFormation stack Outputs tab, copy the AWS CodeBuild project name (myProjectName) and paste it in the Project Name field. Also, set the Region that you are using and choose Use Jenkins source.
    It is a best practice to store AWS credentials for CodeBuild in the native Jenkins credential store. For more information, see the Jenkins AWS CodeBuild Plugin wiki.
  11. To make sure that all files cloned from the GitHub repository are deleted, choose Add build step and select the File Operation plugin, then click Add and select File Delete. Under the File Delete operation, in the Include File Pattern field, type an asterisk.
  12. Under Build, configure the following:
    1. Choose Add a Build step.
    2. Choose HTTP Request.
    3. Copy the S3 bucket name from the CloudFormation stack Outputs tab and paste it after (http://s3-eu-central-1.amazonaws.com/) along with the name of the zip file codebuild-artifact.zip as the value for HTTP Plugin URL.
      Example: (http://s3-eu-central-1.amazonaws.com/mybucketname/codebuild-artifact.zip)
    4. For Ignore SSL errors?, choose Yes.
  13. Under HTTP Request, choose Advanced and leave the default values for Authorization, Headers, and Body. Under Response, for Output response to file, enter the codebuild-artifact.zip file name.
  14. Add the two build steps for the File Operations plugin, in the following order:
    1. Unzip action: This build step unzips the codebuild-artifact.zip file and places the contents in the root workspace directory.
    2. File Delete action: This build step deletes the codebuild-artifact.zip file, leaving only the source bundle contents for deployment.
  15. Under Post-build Actions, choose Add post-build actions and select the Deploy an application to AWS CodeDeploy check box.
  16. Enter the following values from the Outputs tab of your CloudFormation stack and leave the other settings at their default (blank):
    • For AWS CodeDeploy Application Name, enter the value of CodeDeployApplicationName.
    • For AWS CodeDeploy Deployment Group, enter the value of CodeDeployDeploymentGroup.
    • For AWS CodeDeploy Deployment Config, enter CodeDeployDefault.OneAtATime.
    • For AWS Region, choose the Region where you created the CodeDeploy environment.
    • For S3 Bucket, enter the value of S3BucketName.
      The CodeDeploy plugin uses the Include Files option to filter the files based on specific file names existing in your current Jenkins deployment workspace directory. The plugin zips specified files into one file. It then sends them to the location specified in the S3 Bucket parameter for CodeDeploy to download and use in the new deployment.
      As shown below, in the optional Include Files field, I used (**) so all files in the workspace directory get zipped.
  17. Choose Deploy Revision. This option registers the newly created revision to your CodeDeploy application and gets it ready for deployment.
  18. Select the Wait for deployment to finish? check box. This option allows you to view the CodeDeploy deployments logs and events on your Jenkins server console output.
    Now that you have created a project, you are ready to test deployment.

Testing the whole CI/CD pipeline

To test the whole solution, put an application on your GitHub repository. You can download the sample from here.

The following screenshot shows an application tree containing the application source files, including text and binary files, executables, and packages:

In this example, the application files are the templates directory, test_app.py file, and web.py file.

The appspec.yml file is the main application specification file telling CodeDeploy how to deploy your application. CodeDeploy uses the AppSpec file to manage each deployment as a series of lifecycle event “hooks”, as defined in the file. For information about how to create a well-formed AppSpec file, see AWS CodeDeploy AppSpec File Reference.

The buildspec.yml file is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. You can include a build spec as part of the source code, or you can define a build spec when you create a build project. For more information, see How AWS CodeBuild Works.

The scripts folder contains the scripts that you would like to run during the CodeDeploy LifecycleHooks execution with respect to your application requirements. For more information, see Plan a Revision for AWS CodeDeploy.

To test this solution, perform the following steps:

  1. Unzip the application files and push them to your GitHub repository by running the following git commands from the path where you placed your sample application:
    $ git add -A
    
    $ git commit -m 'Your first application'
    
    $ git push
  2. On the Jenkins server dashboard, wait for two minutes until the previously set project trigger starts working. After the trigger starts working, you should see a new build taking place.
  3. In the Jenkins server Console Output page, check the build events and review the steps performed by each Jenkins plugin. You can also review the CodeDeploy deployment in detail, as shown in the following screenshot:

On completion, Jenkins should report that you have successfully deployed a web application. You can also use your ELBDNSName value to confirm that the deployed application is running successfully.

Conclusion

In this post, I outlined how you can use a Jenkins open-source automation server to deploy CodeBuild artifacts with CodeDeploy. I showed you how to construct a functioning CI/CD pipeline with these tools. I walked you through how to build the deployment infrastructure and automatically deploy application version changes from GitHub to your production environment.

Hopefully, you have found this post informative and the proposed solution useful. As always, AWS welcomes all feedback and comments.

About the Author


 

Noha Ghazal is a Cloud Support Engineer at Amazon Web Services. She is a subject matter expert for AWS CodeDeploy. In her role, she enjoys supporting customers with their CodeDeploy and other DevOps configurations. Outside of work she enjoys drawing portraits, fishing and playing video games.

 

 

Nintendo Takes Down Facebook-Tooled Donkey Kong Remake

Post Syndicated from Andy original https://torrentfreak.com/nintendo-takes-down-facebook-tooled-donkey-kong-remake-190930/

If one took a broad overview of the entire history of video gaming, few would dare to argue Nintendo’s legend status over the past several decades.

The Japanese company’s games, both old and new, are renowned for their brilliance and enduring characters. Arguably the most iconic is Mario, who first made his appearance as the hero in the timeless 1981 release Donkey Kong.

Even today, dangerously close to 40 years on, countless players still enjoy this and other classics on emulators and similar tools but Nintendo’s tolerance is becoming increasingly fragile. Over the past couple of years, as players toil in the shadows to defeat Kong, Nintendo has become a litigation machine throwing takedown notices and even lawsuits (1,2,3) at sites and alleged infringers.

The company’s latest effort came on Friday when it sent a copyright complaint to development platform Github. The target was a remake of Donkey Kong built with React Native, the open-source mobile application framework created by Facebook.

Created by developer ‘bberak’, this React Native version of Donkey Kong isn’t an emulation, it was created from the ground up for iOS and Android and documented in a detailed post on Hackernoon in April 2018.

The jumps and gameplay quirks reveal this is no emu

Perhaps a little unusually, given the risks associated with stepping on Nintendo’s toes lately, the original repo — which has now been taken down — basically acknowledges that parts of the project may infringe copyright. The game’s code may have been created independently but the visual and audio assets are undoubtedly Nintendo’s. And the repo happily pointed to the company behind the project too.

“Copyright Notice: All content, artwork, sounds, characters and graphics are the property of Nintendo of America Inc, its affiliates and/or subsidiaries,” the repo read.

“Get in Touch: We are Neap — a development and design team in Sydney. We love building stuff and meeting new people, so get in touch with us at https://neap.co.”

The Neap website reveals that ‘bberak’ is Boris Berak, co-founder and Technical Director of the Australia-based company. TF contacted them for comment but at the time of publication, we hadn’t received a response.

In hindsight, it was probably a mistake to use Donkey Kong as a technical demo since Nintendo has already shown an aversion to such projects in the past. Back in June 2017, the company targeted a Donkey Kong remake for Roku, also hosted on Github. Interestingly, the complaint filed Friday appears to have an artifact from that two-year-old notice.

Describing the content being targeted most recently, Nintendo states: “Nintendo’s Donkey Kong video game, covered by U.S. Copyright Reg. No. PA0000115040 (supplemented by PA0000547470). The reported repository contains a recreation of Nintendo’s Donkey Kong video game for Roku, which was created and published without Nintendo’s authorization.”

The text is an exact match with that in the earlier complaint, even going as far as referencing Roku, which appears to be an error. Nevertheless, those details are irrelevant to the claim and won’t be good grounds for a counter-notice.

As Nintendo’s notice points out, at least another 30 developers forked this Donkey Kong variant on Github, so all those repositories have been taken down too. They could probably be restored if Berak removed all the original Donkey Kong references, graphics, and sound, but that seems unlikely.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Pulling Raspberry Pi translation data from GitHub

Post Syndicated from Nina Szymor original https://www.raspberrypi.org/blog/pulling-translation-data-from-github/

What happens when you give two linguists jobs at Raspberry Pi? They start thinking they can do digital making, even though they have zero coding skills! Because if you don’t feel inspired to step out of your comfort zone here — surrounded by all the creativity, making, and technology — then there is no hope you’ll be motivated to do it anywhere else.

two smiling women standing in front of a colourful wall

Maja and Nina, our translation team, and coding beginners

Maja and I support the community of Raspberry Pi translation volunteers, and we wanted to build something to celebrate them and the amazing work they do! Our educational content is already available in 26 languages, with more than 400 translations on our projects website. But our volunteer community is always translating more content, and so off we went, on an ambitious (by our standards!) mission to create a Raspberry Pi–powered translation notification system. This is a Raspberry Pi that pulls GitHub data to display a message on a Sense HAT and play a tune whenever we add fresh translated content to the Raspberry Pi projects website!

Breaking it down

There were three parts to the project: two of them were pretty easy (displaying a message on a Sense HAT and playing a tune), and one more challenging (pulling information about new translated content added to our repositories on GitHub). We worked on each part separately and then put all of the code together.

Two computers and two pastries

Mandatory for coding: baked goods and tea

Displaying a message on Sense HAT and playing a sound

We used the Raspberry Pi projects Getting started with the Sense HAT and GPIO music box to help us with this part of our build.

At first we wanted the Sense HAT to display fireworks, but we soon realised how bad we both are at designing animations, so we moved on to displaying a less creative but still satisfying smiley face, followed by a message saying “Hooray! Another translation!” and another smiley face.

LED screen displaying the message ‘Another translation!’

We used the sense_hat and time modules, and wrote a function that can be easily used in the main body of the program. You can look at the comments in the code below to see what each line does:

Python code snippet for displaying a message on a Sense HAT
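
A minimal sketch of such a function, based on the description above and the standard sense_hat and time modules, might look like this (sparkles is the name this post uses for the function later on; the smiley pattern, colour, and scroll speed are just placeholder choices):

    from sense_hat import SenseHat
    from time import sleep

    sense = SenseHat()

    Y = [255, 255, 0]  # yellow pixel
    O = [0, 0, 0]      # pixel off

    # A rough 8x8 smiley face for the LED matrix
    smiley = [
        O, O, O, O, O, O, O, O,
        O, Y, Y, O, O, Y, Y, O,
        O, Y, Y, O, O, Y, Y, O,
        O, O, O, O, O, O, O, O,
        Y, O, O, O, O, O, O, Y,
        O, Y, O, O, O, O, Y, O,
        O, O, Y, Y, Y, Y, O, O,
        O, O, O, O, O, O, O, O,
    ]

    def sparkles():
        # Smiley, scrolling message, another smiley, then clear the display
        sense.set_pixels(smiley)
        sleep(1)
        sense.show_message("Hooray! Another translation!", scroll_speed=0.05, text_colour=Y)
        sense.set_pixels(smiley)
        sleep(1)
        sense.clear()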

So we could add the fun tune, we learned how to use the Pygame library to play sounds. Using Pygame it’s really simple to create a function that plays a sound: once you have the .wav file in your chosen location, you simply import and initialise the pygame module, create a Sound object, and provide it with the path to your .wav file. You can then play your sound:

Python code snippet for playing a sound
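
Following that recipe, a minimal sketch could look like this (the .wav path is a placeholder, and meow is the function name used later in the post):

    import pygame
    from time import sleep

    pygame.mixer.init()

    # Placeholder path -- point this at wherever your .wav file lives
    meow_sound = pygame.mixer.Sound("/home/pi/sounds/meow.wav")

    def meow():
        # Play the sound once and give it roughly a second to finish
        meow_sound.play()
        sleep(1)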

We’ve programmed our translation notification system to play the meow sound three times, using the sleep function to create a one-second break between each sound. Because why would you want one meow if you can have three?

Pulling repository information from GitHub

This was the more challenging part for Maja and me, so we asked for help from experienced programmers, including our colleague Ben Nuttall. We explained what we wanted to do: pull information from our GitHub repositories where all the projects available on the Raspberry Pi projects website are kept, and every time a new language directory is found, to execute the sparkles and meow functions to let us and EVERYONE in the office know that we have new translations! Ben did a bit of research and quickly found the PyGithub library, which enables you to manage your GitHub resources using Python scripts.

Python code snippet for pulling data from GitHub

Check out the comments to see what the code does
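
A rough sketch of that script, assuming a personal access token in the GITHUB_TOKEN environment variable and reusing the sparkles() and meow() functions from above, might look like this:

    import os
    import re
    import time

    from github import Github, GithubException

    LANGUAGE_DIR = re.compile(r"^[a-z]{2}-[A-Z]{2}$")  # e.g. fr-CA
    seen = set()

    gh = Github(os.environ["GITHUB_TOKEN"])  # personal access token, assumed to be set
    org = gh.get_organization("raspberrypilearning")

    while True:
        for repo in org.get_repos():
            try:
                contents = repo.get_contents("")  # top-level files and directories
            except GithubException:
                continue  # empty repositories raise an exception, so skip them
            for item in contents:
                if item.type == "dir" and LANGUAGE_DIR.match(item.name):
                    key = (repo.name, item.name)
                    if key not in seen:  # only celebrate translations we haven't seen yet
                        seen.add(key)
                        print(f"New translation: {repo.name} ({item.name})")
                        sparkles()
                        meow()
        time.sleep(60 * 60)  # wait 60 minutes before checking again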

The script runs in an infinite loop, checking all repositories in the ‘raspberrypilearning’ organisation for new translations (directories with names in the form xx-XX, e.g. fr-CA) every 60 minutes. Any new translation is then printed and preserved in memory. We had some initial issues with the usage of the PyGithub library: calling .get_commits() on an empty repository throws an exception, but the library doesn’t provide any functions to check whether a repo is empty or not. Fortunately, wrapping this logic in a try...except statement solved the problem.

And there we have it: success!

Demo of our Translation Notification System build


Our ideas for further development

We’re pretty proud that the whole Raspberry Pi office now hears a meowing cat whenever new translated content is added to our projects website, but we’ve got plans for further development of our translation notification system. Our existing translated educational resources have already been viewed by over 1 million users around the world, and we want anyone interested in the translations our volunteers make possible to be able to track new translated projects as they go live!

One way to do that is to modify the code to tweet or send an email with the name of the newly added translation together with a link to the project and information on the language in which it was added. Alternatively, we could adapt the system to only execute the sparkles and meow functions when a translation in a particular language is added. Then our more than 1000 volunteers, or any learner using our translations, could set up their own Raspberry Pi and Sense HAT to receive notifications of content in the language that interests them, rather than in all languages.
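
For the email route, a minimal sketch using Python’s standard smtplib could be called from the polling loop whenever a new translation is found (every address, server name, and credential below is a placeholder):

    import smtplib
    from email.message import EmailMessage

    def notify_by_email(project, language, url):
        # All addresses, server details, and credentials here are placeholders
        msg = EmailMessage()
        msg["Subject"] = f"New translation: {project} ({language})"
        msg["From"] = "translations@example.org"
        msg["To"] = "volunteers@example.org"
        msg.set_content(f"A new {language} translation of {project} is live: {url}")

        with smtplib.SMTP("smtp.example.org", 587) as server:
            server.starttls()
            server.login("translations@example.org", "app-password")
            server.send_message(msg)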

We need your help

Both ideas pose a pretty big challenge for the inexperienced new coders of the Raspberry Pi translation team, so we’d really appreciate any tips you have for helping us get started or for improving our existing system! Please share your thoughts in the comments below.

The post Pulling Raspberry Pi translation data from GitHub appeared first on Raspberry Pi.

“Confidential” HDMI Specifications Docs Hit With DMCA Takedown

Post Syndicated from Andy original https://torrentfreak.com/confidential-hdmi-standards-docs-hit-with-dmca-takedown-190511/


HDMI (High Definition Multimedia Interface) is today’s standard for transferring digital video and audio between compatible devices.

The standard variant comes as a male connector (plug) or female connector (socket). Chances are that most people will have many of these scattered around their homes, with TVs, monitors, set-top boxes, video games consoles, and dozens of other video-capable devices utilizing the interface.

It’s no surprise then that the list of companies that have adopted the HDMI standard for their products is huge, with founders including Maxell, Panasonic, Sanyo, Philips, and Sony leading the way.

Since its inception back in 2002, many versions of HDMI have been developed, each utilizing the same basic connector but with added features. While new functions aren’t available to users of pre-update hardware, the entire system is backward compatible.

These updates (which are given version numbers such as HDMI 1.0 (2002) right up to the latest HDMI 2.1 (2017)) are described in technical specifications documents. However, according to the HDMI Licensing Administrator, Inc., the licensing agent for the HDMI product, these documents are not only copyrighted but also contain secret information.

Github user ‘Glenwing’ has been archiving these documents for the last few years in his personal “Display Industry Standards Archive” but was recently hit with a DMCA takedown notice after HDMI Licensing Administrator filed a complaint against him.

GitHub itself published details of the DMCA complaint which claims copyright over the documents and further states that they aren’t for public consumption.

“HDMI Licensing Administrator, Inc. is the licensing Agent to the founders of the HDMI® Digital Interface. It has been brought to our attention that user Glenwing is publicly making confidential copyrighted content available on your hub without authorization,” the notice reads.

Since we’ve seen these documents available freely online before, we contacted Glenwing to find out what the problem was.

He told us that HDMI specification version 1.3a is available for public download from the HDMI website, and since copies of the other specifications can be found elsewhere online, he didn’t think there would be an issue putting them in one place.

“I just assumed it was something considered unimportant to them, considering there have been other hosted copies of ‘confidential’ HDMI versions that were widely linked, easily locatable by simply Googling ‘HDMI 1.4 pdf’ etc,” he explains.

“These documents have even been linked as a source on the HDMI Wikipedia page. You can’t get any more visible than that, and those copies remained online for years. But now that I’ve been revisiting my original sources I downloaded from, they’re mostly dead links. It seems HDMI Licensing may have started to clean house all over the web, not just targeting my page specifically.”

Glenwing confirmed that all copies of the specifications he uploaded to Github were just obtained from various sources on the Internet, such as Wikipedia citations or simple Google searches.

He’s clearly just a tech enthusiast with a great interest in the topic, who would like to share his knowledge with others. There’s certainly no malicious intent.

“I never really intended these documents for distribution anyway, and if I could hide the Github page from Google results with a robots.txt file or something, I would,” he says.

“I upload them primarily for my own reference, to have every version in one place, so that when I write guides trying to educate people about the capabilities of HDMI, DisplayPort, how to correctly calculate video bandwidth, how these standards have changed over time, etc., I can link these documents as sources.”

Interestingly, this takedown wasn’t the first received by Glenwing. He initially received a notice just a few days earlier from the Consumer Technology Association (of which HDMI Licensing Administrator is a member) which targeted half a dozen CTA standards documents.

“Six copyrighted CTA standards are posted in their entirety here:
https://glenwing.github.io/docs/,” the notice from CTA reads. “[T]he works are not licensed under an open source license…the best solution is removal,” it adds.

So are these documents sensitive too? Glenwing believes not.

“This notice I actually received first, and it was a bit puzzling at the time; I had six CTA documents, which are all different revisions of the same (public) standard, CTA-861 [A DTV Profile for Uncompressed High Speed Digital Interfaces]. The three latest revisions (G, F, and E) are available for free download from the CTA website, the older revisions are not, likely because they are simply outdated, not because anyone considers them secret information,” he says.

“It’s fairly common for standards organizations to only host the latest versions, and whenever a new revision is released, older versions often become difficult to find. That was sort of the point of my page, to preserve every version I could find for historical purposes.”

In the absence of his own archive on Github, Glenwing then began to link directly to pages on the Consumer Technology Association site that host the documents and offer them for download. Functionally, access to the documents should have been the same. Or at least that was the plan.

As this piece was being put together, CTA removed the copies of its own standards from its own website, leaving dead links in their place. It now appears that they can only be accessed via the CTA Store, albeit for the knockdown price of $0.00, following a registration process.

Bizarrely, there are other sources for the documents, such as this site which offers to sell one of the publicly available documents for a mere $278. People shouldn’t have to pay a penny of course, as per a May 2018 press release from the CTA which declared free document access to all…

Eating Dogfood at Scale: How We Build Serverless Apps with Workers

Post Syndicated from Jonathan Spies original https://blog.cloudflare.com/building-serverless-apps-with-workers/

You’ve had a chance to build a Cloudflare Worker. You’ve tried KV Storage and have a great use case for your Worker. You’ve even demonstrated the usefulness to your product or organization. Now you need to go from writing a single file in the Cloudflare Dashboard UI Editor to source controlled code with multiple environments deployed using your favorite CI tool.

Fortunately, we have a powerful and flexible API for managing your workers. You can customize your deployment to your heart’s content. Our blog has already featured many things made possible by that API:

These tools make deployments easier to configure, but it still takes time to manage. The Serverless Framework Cloudflare Workers plugin removes that deployment overhead so you can spend more time working on your application and less on your deployment.

Focus on your application

Here at Cloudflare, we’ve been working to rebuild our Access product to run entirely on Workers. The move will allow Access to take advantage of the resiliency, performance, and flexibility of Workers. We’ll publish a more detailed post about that migration once complete, but the experience required that we retool some of our process to match our existing development experience as much as possible.

To us this meant:

  • Git
  • Easily deploy
  • Different environments
  • Unit Testing
  • CI Integration
  • Typescript/Multiple Files
  • Everything Must Be Automated

The Cloudflare Access team looked at three options for automating all of these tools in our pipeline. All of the options will work and could be right for you, but custom scripting can be a chore to maintain and Terraform lacked some extensibility.

  1. Custom Scripting
  2. Terraform
  3. Serverless Framework

We decided on the Serverless Framework. Serverless Framework provided a tool to mirror our existing process as closely as possible without too much DevOps overhead. Serverless is extremely simple and doesn’t interfere with the application code. You can get a project set up and deployed in seconds. It’s obviously less work than writing your own custom management scripts. But it also requires less boilerplate than Terraform because the Serverless Framework is designed for the “serverless” niche. However, if you are already using Terraform to manage other Cloudflare products, Terraform might be the best fit.

Walkthrough

Everything for the project happens in a YAML file called serverless.yml. Let’s go through the features of the configuration file.

To get started, we need to install serverless from npm and generate a new project.

npm install serverless -g
serverless create --template cloudflare-workers --path myproject
cd myproject
npm install

If you are an enterprise client, you want to use the cloudflare-workers-enterprise template as it will set up more than one worker (but don’t worry, you can add more to any template). Also, I’ll touch on this later, but if you want to write your workers in Rust, use the cloudflare-workers-rust template.

You should now have a project that feels familiar, ready to be added to your favorite source control. In the project should be a serverless.yml file like the following.

service:
  name: hello-world

provider:
  name: cloudflare
  config:
    accountId: CLOUDFLARE_ACCOUNT_ID
    zoneId: CLOUDFLARE_ZONE_ID

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: hello
    script: helloWorld  # there must be a file called helloWorld.js
    events:
      - http:
          url: example.com/hello/*
          method: GET
          headers:
            foo: bar
            x-client-data: value

The service block simply contains the name of your service. This will be used in your Worker script names if you do not overwrite them.

Under provider, name must be ‘cloudflare’ and you need to add your account and zone IDs. You can find them in the Cloudflare Dashboard.

The plugins section adds the Cloudflare specific code.

Now for the good part: functions. Each block under functions is a Worker script.

name: (optional) If left blank it will be STAGE-service.name-script.identifier. If I removed name from this file and deployed in production stage, the script would be named production-hello-world-hello.

script: the relative path to the JavaScript file with the worker script. I like to organize mine in a folder called handlers.

events: Currently Workers only support http events. We call these routes. The example provided says that GET https://example.com/hello/<anything here> will cause this worker to execute. The headers block is for testing invocations.

At this point you can deploy your worker!

CLOUDFLARE_AUTH_EMAIL=you@example.com CLOUDFLARE_AUTH_KEY=XXXXXXXX serverless deploy

This is very easy to deploy, but it doesn’t address our requirements. Luckily, there are just a few simple modifications to make.

Maturing our YAML File

Here’s a more complex YAML file.

service:
  name: hello-world

package:
  exclude:
    - node_modules/**
  excludeDevDependencies: false

custom:
  defaultStage: development
  deployVars: ${file(./config/deploy.${self:provider.stage}.yml)}

kv: &kv
  - variable: MYUSERS
    namespace: users

provider:
  name: cloudflare
  stage: ${opt:stage, self:custom.defaultStage}
  config:
    accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
    zoneId: ${env:CLOUDFLARE_ZONE_ID}

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: ${self:provider.stage}-hello
    script: handlers/hello
    webpack: true
    environment:
      MY_ENV_VAR: ${self:custom.deployVars.env_var_value}
      SENTRY_KEY: ${self:custom.deployVars.sentry_key}
    resources: 
      kv: *kv
    events:
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/hello"
          method: GET
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/alsohello*"
          method: GET

We can add a custom section where we can put custom variables to use later in the file.

defaultStage: We set this to development so that forgetting to pass a stage doesn’t trigger a production deploy. Combined with the stage option under provider we can set the stage for deployment.

deployVars: We use this custom variable to load another YAML file dependent on the stage. This lets us have different values for different stages. In development, this line loads the file ./config/deploy.development.yml. Here’s an example file:

env_var_value: true
sentry_key: XXXXX
SUBDOMAIN: dev

kv: Here we are showing off a feature of YAML. If you attach an anchor to a block with ‘&’, you can reference it later as an alias. This is very handy in a multi-script account. We could have named this anchor anything, but we are naming it kv since it holds our Workers Key Value storage settings to be used in our function below.

Inside of the kv block we’re creating a namespace and binding it to a variable available in your Worker. It will ensure that the namespace “users” exists and is bound to MYUSERS.

kv: &kv
  - variable: MYUSERS
    namespace: users

provider: The only new part of the provider block is stage.

stage: ${opt:stage, self:custom.defaultStage}

This line sets stage to either the command line option or custom.defaultStage if opt:stage is blank. When we deploy, we pass --stage=production to serverless deploy.

Now under our function we have added webpack, resources, and environment.

webpack: If set to true, will simply bundle each handler into a single file for deployment. It will also take a string representing a path to a webpack config file so you can customize it. This is how we add Typescript support to our projects.

resources: This block is used to automate resource creation. In resources we’re linking back to the kv block we created earlier.

Side note: If you would like to include WASM bindings in your project, it can be done in a very similar way to how we included Workers KV. For more information on WASM, see the documentation.

environment: This is the butter for the bread that is managing configuration for different stages. Here we can specify values to bind to variables to use in worker scripts. Combined with YAML magic, we can store our values in the aforementioned config files so that we deploy different values in different stages. With environments, we can easily tie into our CI tool. The CI tool has our deploy.production.yml. We simply run the following command from within our CI.

sls deploy --stage=production

Finally, I added a route to demonstrate that a script can be executed on multiple routes.

At this point I’ve covered (or hinted) at everything on our original list except Unit Testing. There are a few ways to do this.

We have a previous blog post about Unit Testing that covers using cloudworker, a great tool built by Dollar Shave Club.

My team opted to use the classic node frameworks mocha and sinon. Because we are using Typescript, we can build for node or build for v8. You can also make mocha work for non-typescript projects if you use an experimental feature that adds import/export support to node.

--experimental-modules

We’re excited about moving more and more of our services to Cloudflare Workers, and the Serverless Framework makes that easier to do. If you’d like to learn even more or get involved with the project, see us on github.com. For additional information on using Serverless Framework with Cloudflare Workers, check out our documentation on the Serverless Framework.

Microsoft acquires GitHub

Post Syndicated from corbet original https://lwn.net/Articles/756443/rss

Here’s the press release announcing Microsoft’s agreement to acquire GitHub for a mere $7.5 billion. “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries. Developers will continue to be able to use the programming languages, tools and operating systems of their choice for their projects — and will still be able to deploy their code to any operating system, any cloud and any device.”

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn’t seem to be an agreed-upon solution.

I’ve previously argued with the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store that script somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials with a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do that is with git-crypt. Its encryption is “transparent”: it supports diff, and encryption and decryption happen on the fly, so once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS Path), which generates a key. Then you specify your .gitattributes, e.g. like that:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and backup the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to backup your secret key (that’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.

git-crypt’s documentation says it shines when it comes to encrypting just a few files in an otherwise public repo, and recommends looking at git-remote-gcrypt for encrypting an entire repository. But as there are often non-sensitive parts of environment-specific configurations, you may not want to encrypt everything. And I think it’s perfectly fine to use git-crypt even in a separate-repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to have the environment configurations in the same repo, especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is: how do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that can vary across environments but have sensible default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.
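To make the first scenario a bit more concrete, here is a minimal, hypothetical Python sketch of the override pattern: defaults are bundled with the code, an environment-specific key=value file (the kind kept in the separate, encrypted repo) overrides only what it needs to, and required values with no default fail loudly. The property names are made up for illustration.

# Hypothetical illustration of "defaults in the code repo, overrides per environment".
DEFAULTS = {
    "job.cleanup.period.minutes": "60",   # safe default, may be overridden
    "db.url": None,                        # must be supplied per environment
    "db.password": None,                   # must be supplied per environment
}

def load_properties(path):
    """Parse a key=value properties file into a dict, ignoring blanks and comments."""
    props = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def effective_config(env_file):
    """Merge the bundled defaults with the environment-specific overrides."""
    config = dict(DEFAULTS)
    config.update(load_properties(env_file))
    missing = [k for k, v in config.items() if v is None]
    if missing:
        raise ValueError(f"Missing required properties: {missing}")
    return config

# e.g. effective_config("production/application.properties")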

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native python constructs and tools.

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little bit more portable, as you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will get passed to the script as command-line arguments, and the environment variables above will be auto-populated. When we call fit, the input channels we pass will be populated in the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on GitHub. One of my favorite features of the new container is built-in chainermn for easy multi-node distribution of your Chainer training jobs.

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS GreenGrass ML with Chainer

AWS GreenGrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So now GreenGrass ML provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any GreenGrass-enabled device using GreenGrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

Measuring the throughput for Amazon MQ using the JMS Benchmark

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/measuring-the-throughput-for-amazon-mq-using-the-jms-benchmark/

This post is courtesy of Alan Protasio, Software Development Engineer, Amazon Web Services

Just like compute and storage, messaging is a fundamental building block of enterprise applications. Message brokers (aka “message-oriented middleware”) enable different software systems, often written in different languages, on different platforms, running in different locations, to communicate and exchange information. Mission-critical applications, such as CRM and ERP, rely on message brokers to work.

A common performance consideration for customers deploying a message broker in a production environment is the throughput of the system, measured as messages per second. This is important to know so that application environments (hosts, threads, memory, etc.) can be configured correctly.

In this post, we demonstrate how to measure the throughput for Amazon MQ, a new managed message broker service for ActiveMQ, using JMS Benchmark. It should take between 15–20 minutes to set up the environment and an hour to run the benchmark. We also provide some tips on how to configure Amazon MQ for optimal throughput.

Benchmarking throughput for Amazon MQ

ActiveMQ can be used for a number of use cases. These use cases range from simple fire-and-forget tasks (that is, asynchronous processing) and low-latency request-reply patterns to buffering requests before they are persisted to a database.

The throughput of Amazon MQ is largely dependent on the use case. For example, if you have non-critical workloads such as gathering click events for a non-business-critical portal, you can use ActiveMQ in a non-persistent mode and get extremely high throughput with Amazon MQ.

On the flip side, if you have a critical workload where durability is extremely important (meaning that you can’t lose a message), then you are bound by the I/O capacity of your underlying persistence store. We recommend using mq.m4.large for the best results. The mq.t2.micro instance type is intended for product evaluation. Performance is limited, due to the lower memory and burstable CPU performance.

Tip: To improve your throughput with Amazon MQ, make sure that you have consumers processing messages as fast as (or faster than) your producers are pushing messages.

Because it’s impossible to talk about how the broker (ActiveMQ) behaves for each and every use case, we walk through how to set up your own benchmark for Amazon MQ using our favorite open-source benchmarking tool: JMS Benchmark. We are fans of the JMS Benchmark suite because it’s easy to set up and deploy, and comes with a built-in visualizer of the results.

Non-Persistent Scenarios – Queue latency as you scale producer throughput

Getting started

At the time of publication, you can create an mq.m4.large single-instance broker for testing for $0.30 per hour (US pricing).

This walkthrough covers the following tasks:

  1. Create and configure the broker.
  2. Create an EC2 instance to run your benchmark.
  3. Configure the security groups.
  4. Run the benchmark.

Step 1 – Create and configure the broker
Create and configure the broker using Tutorial: Creating and Configuring an Amazon MQ Broker.

Step 2 – Create an EC2 instance to run your benchmark
Launch the EC2 instance using Step 1: Launch an Instance. We recommend choosing the m5.large instance type.

Step 3 – Configure the security groups
Make sure that all the security groups are correctly configured to let the traffic flow between the EC2 instance and your broker.

  1. Sign in to the Amazon MQ console.
  2. From the broker list, choose the name of your broker (for example, MyBroker).
  3. In the Details section, under Security and network, choose the name of your security group or choose the expand icon.
  4. From the security group list, choose your security group.
  5. At the bottom of the page, choose Inbound, Edit.
  6. In the Edit inbound rules dialog box, add a rule to allow traffic between your instance and the broker:
    • Choose Add Rule.
    • For Type, choose Custom TCP.
    • For Port Range, type the ActiveMQ SSL port (61617).
    • For Source, leave Custom selected and then type the security group of your EC2 instance.
    • Choose Save.

Your broker can now accept the connection from your EC2 instance.

Step 4 – Run the benchmark
Connect to your EC2 instance using SSH and run the following commands:

$ cd ~
$ curl -L https://github.com/alanprot/jms-benchmark/archive/master.zip -o master.zip
$ unzip master.zip
$ cd jms-benchmark-master
$ chmod a+x bin/*
$ env \
  SERVER_SETUP=false \
  SERVER_ADDRESS={activemq-endpoint} \
  ACTIVEMQ_TRANSPORT=ssl \
  ACTIVEMQ_PORT=61617 \
  ACTIVEMQ_USERNAME={activemq-user} \
  ACTIVEMQ_PASSWORD={activemq-password} \
  ./bin/benchmark-activemq

After the benchmark finishes, you can find the results in the ~/reports directory. As you may notice, the performance of ActiveMQ varies based on the number of consumers, producers, destinations, and message size.

Amazon MQ architecture

The last bit that’s important to know so that you can better understand the results of the benchmark is how Amazon MQ is architected.

Amazon MQ is architected to be highly available (HA) and durable. For HA, we recommend using the multi-AZ option. After a message is sent to Amazon MQ in persistent mode, the message is written to the highly durable message store that replicates the data across multiple nodes in multiple Availability Zones. Because of this replication, for some use cases you may see a reduction in throughput as you migrate to Amazon MQ. Customers have told us they appreciate the benefits of message replication as it helps protect durability even in the face of the loss of an Availability Zone.

Conclusion

We hope this gives you an idea of how Amazon MQ performs. We encourage you to run tests to simulate your own use cases.

To learn more, see the Amazon MQ website. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.

Project Floofball and more: Pi pet stuff

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/project-floofball-pi-pet-stuff/

It’s a public holiday here today (yes, again). So, while we indulge in the traditional pastime of barbecuing stuff (ourselves, mainly), here’s a little trove of Pi projects that cater for our various furry friends.

Project Floofball

Nicole Horward created Project Floofball for her hamster, Harold. It’s an IoT hamster wheel that uses a Raspberry Pi and a magnetic door sensor to log how far Harold runs.

Project Floofball: an IoT hamster wheel

You can follow Harold’s runs in real time on his ThingSpeak channel, and you’ll find photos of the build on imgur. Nicole’s Python code, as well as her template for the laser-cut enclosure that houses the wiring and LCD display, are available on the hamster wheel’s GitHub repo.
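Nicole’s actual code is on the GitHub repo linked above; purely to illustrate the idea, here’s a minimal sketch of counting wheel revolutions with a magnetic sensor. The GPIO pin and wheel size are assumptions for the example, not details from her build.

from signal import pause
from gpiozero import Button  # a reed switch behaves like a simple button

SENSOR_PIN = 17                         # assumed wiring, adjust to your build
WHEEL_CIRCUMFERENCE_M = 3.14159 * 0.14  # assumed 14 cm wheel diameter

revolutions = 0

def wheel_turned():
    """Runs once per revolution, when the magnet passes the sensor."""
    global revolutions
    revolutions += 1
    distance = revolutions * WHEEL_CIRCUMFERENCE_M
    print(f"{revolutions} revolutions, roughly {distance:.1f} m so far")
    # A reading could be pushed to ThingSpeak or similar from here.

sensor = Button(SENSOR_PIN)
sensor.when_pressed = wheel_turned
pause()  # keep the script alive and wait for sensor events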

A live-streaming pet feeder

JaganK3 used to work long hours that meant he couldn’t be there to feed his dog on time. He found that he couldn’t buy an automated feeder in India without paying a lot to import one, so he made one himself. It uses a Raspberry Pi to control a motor that turns a dispensing valve in a hopper full of dry food, giving his dog a portion of food at set times.

A transparent cylindrical hopper of dry dog food, with a motor that can turn a dispensing valve at the lower end. The motor is connected to a Raspberry Pi in a plastic case. Hopper, motor, Pi, and wiring are all mounted on a board on the wall.

He also added a web cam for live video streaming, because he could. Find out more in JaganK3’s Instructable for his pet feeder.
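The Instructable has the real build details; as a rough sketch of the timed-dispensing logic only, something like the following would briefly run the motor at set times. The GPIO pin, feeding times, and dispense duration are made-up values for illustration.

import time
from datetime import datetime
from gpiozero import OutputDevice  # e.g. a relay or motor driver input pin

motor = OutputDevice(18)            # assumed wiring
FEEDING_TIMES = {"08:00", "18:00"}  # assumed schedule
DISPENSE_SECONDS = 3                # assumed time to drop one portion

def dispense_portion():
    """Turn the motor just long enough to rotate the valve once."""
    motor.on()
    time.sleep(DISPENSE_SECONDS)
    motor.off()

already_fed = set()  # feeding times already handled today

while True:
    now = datetime.now().strftime("%H:%M")
    if now in FEEDING_TIMES and now not in already_fed:
        dispense_portion()
        already_fed.add(now)
    if now == "00:00":
        already_fed.clear()  # start a fresh day at midnight
    time.sleep(20)           # poll a few times a minute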

Shark laser cat toy

Sam Storino, meanwhile, is using a Raspberry Pi to control a laser-pointer cat toy with a goshdarned SHARK (which is kind of what I’d expect from the guy who made the steampunk-looking cat feeder a few weeks ago). The idea is to keep his cats interested and active within the confines of a compact city apartment.

Raspberry Pi Automatic Cat Laser Pointer Toy

If I were a cat, I would definitely be entirely happy with this. Find out more on Sam’s website.

And there’s more

Michel Parreno has written a series of articles to help you monitor and feed your pet with Raspberry Pi.

All of these makers are generous in acknowledging the tutorials and build logs that helped them with their projects. It’s lovely to see the Raspberry Pi and maker community working like this, and I bet their projects will inspire others too.

Now, if you’ll excuse me. I’m late for a barbecue.

The post Project Floofball and more: Pi pet stuff appeared first on Raspberry Pi.