Tag Archives: CAD

How to Compete with Giants

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-compete-with-giants/

This post by Backblaze’s CEO and co-founder Gleb Budman is the sixth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants


Perhaps your business is competing in a brand new space free from established competitors. Most of us, though, start companies that compete with existing offerings from large, established companies. You need to come up with a better mousetrap — not the first mousetrap.

That’s the challenge Backblaze faced. In this post, I’d like to share some of the lessons I learned from that experience.

Backblaze vs. Giants

Competing with established companies that are orders of magnitude larger can be daunting. How can you succeed?

I’ll set the stage by offering a few sets of giants we compete with:

  • When we started Backblaze, we offered online backup in a market where companies had been offering “online backup” for at least a decade, and even the newer entrants had raised tens of millions of dollars.
  • When we built our storage servers, the alternatives were EMC, NetApp, and Dell — each of which had a market cap of over $10 billion.
  • When we introduced our cloud storage offering, B2, our direct competitors were Amazon, Google, and Microsoft. You might have heard of them.

What did we learn by competing with these giants on a bootstrapped budget? Let’s take a look.

Determine What Success Means

For a long time Apple considered Apple TV to be a hobby, not a real product worth focusing on, because it did not generate a billion in revenue. For a $10 billion per year revenue company, a new business that generates $50 million won’t move the needle and often isn’t worth putting focus on. However, for a startup, getting to $50 million in revenue can be the start of a wildly successful business.

Lesson Learned: Don’t let the giants set your success metrics.

The Advantages Startups Have

The giants have a lot of advantages: more money, people, scale, resources, access, etc. Following their playbook and attacking head-on means you’re simply outgunned. Common paths to failure are trying to build more features, enter more markets, outspend on marketing, and other similar approaches where scale and resources are the primary determinants of success.

But being a startup affords many advantages most giants would salivate over, and as a nimble startup you can leverage them to succeed. Let’s break down nine competitive advantages we’ve used that you can use too.

1. Drive Focus

It’s hard to build a $10 billion revenue business doing just one thing, so most giants have a broad portfolio of businesses, with numerous products for each, targeting a variety of customer segments in multiple markets. That adds complexity and distributes management attention.

Startups get the benefit of having everyone in the company be extremely focused, often on a singular mission, product, customer segment, and market. While our competitors sell everything from advertising to Zantac, and are investing in groceries and shipping, Backblaze has focused exclusively on cloud storage. This means all of our best people (i.e. everyone) are focused on our cloud storage business. Where is all of your focus going?

Lesson Learned: Align everyone in your company to a singular focus to dramatically outperform larger teams.

2. Use Lack-of-Scale as an Advantage

You may have heard Paul Graham say “Do things that don’t scale.” There are a host of things you can do specifically because you don’t have the same scale as the giants. Use that as an advantage.

When we look for data center space, we have more options than our largest competitors because there are simply more spaces available with room for 100 cabinets than for 1,000 cabinets. With some searching, we can find data center space that is better/cheaper.

When a flood in Thailand destroyed factories, causing the world’s supply of hard drives to plummet and prices to triple, we started drive farming. The giants certainly couldn’t. It was a bit crazy, but it let us keep prices unchanged for our customers.

Our Chief Cloud Officer, Tim, used to work at Adobe. Because of their size, any new product needed to always launch in a multitude of languages and in global markets. Once launched, they had scale. But getting any new product launched was incredibly challenging.

Lesson Learned: Use lack-of-scale to exploit opportunities that are closed to giants.

3. Build a Better Product

This one is probably obvious. If you’re going to provide the same product, at the same price, to the same customers — why do it? Remember that better does not always mean more features. Here’s one way we built a better product that didn’t require being a bigger company.

All online backup services required customers to choose what to include in their backup. We found that this was complicated for users since they often didn’t know what needed to be backed up. We flipped the model to back up everything and allow users to exclude if they wanted to, but it was not required. This reduced the number of features/options, while making it easier and better for the user.

This didn’t require the resources of a huge company; it just required understanding customers a bit deeper and thinking about the solution differently. Building a better product is the most classic startup competitive advantage.

Lesson Learned: Dig deep with your customers to understand and deliver a better mousetrap.

4. Provide Better Service

How can you provide better service? Use your advantages. Escalations from your customer care folks to engineering can go through fewer hoops. Fixing an issue and shipping can be quicker. Access to real answers on Twitter or Facebook can be more effective.

A strategic decision we made was to have all customer support people as full-time employees in our headquarters. This ensures they are in close contact with the whole company, so feedback quickly goes both ways.

Having a smaller team and fewer layers enables faster internal communication, which increases customer happiness. And the option to do things that don’t scale — such as help a customer in a unique situation — can go a long way in building customer loyalty.

Lesson Learned: Service your customers better by establishing clear internal communications.

5. Remove The Unnecessary

After determining that the industry standard EMC/NetApp/Dell storage servers would be too expensive to build our own cloud storage upon, we decided to build our own infrastructure. Many said we were crazy to compete with these multi-billion dollar companies and that it would be impossible to build a lower cost storage server. However, not only did it prove to not be impossible — it wasn’t even that hard.

One key trick? Remove the unnecessary. While EMC and others built servers to sell to other companies for a wide variety of use cases, Backblaze needed servers that only Backblaze would run, and for a single use case. As a result we could tailor the servers for our needs by removing redundancy from each server (since we would run redundant servers), and using lower-performance components (since we would get high-performance by running parallel servers).

What do your customers and use cases not need? This can trim costs and complexity while often improving the product for your use case.

Lesson Learned: Don’t think “what can we add” to what the giants offer — think “what can we remove.”

6. Be Easy

How many times have you visited a large company website, particularly one that’s not consumer-focused, only to leave saying, “Huh? I don’t understand what you do.” Keeping your website clear, and your product and pricing simple, will dramatically increase conversion and customer satisfaction. If you’re able to make it 2x easier and thus increase your conversion by 2x, you’ve just allowed yourself to spend half as much acquiring a customer.

Providing unlimited data backup wasn’t specifically about providing more storage — it was about making it easier. Since users didn’t know how much data they needed to back up, charging per gigabyte meant they wouldn’t know the cost. Providing unlimited data backup meant they could just relax.

Customers love easy — and being smaller makes easy easier to deliver. Use that as an advantage in your website, marketing materials, pricing, product, and in every other customer interaction.

Lesson Learned: Ease-of-use isn’t a slogan; it’s a competitive advantage. Treat it as seriously as any other feature of your product.

7. Don’t Be Afraid of Risk

Obviously unnecessary risks are unnecessary, and some risks aren’t worth taking. However, large companies that have given guidance to Wall Street with a $0.01 range on their earnings-per-share are inherently going to be very risk-averse. Use risk-tolerance to open up opportunities, and adjust your tolerance level as you scale. In your first year, there are likely an infinite number of ways your business may vaporize; don’t be too worried about taking a risk that might have a 20% downside when the upside is hockey stick growth.

Using consumer-grade hard drives in our servers may have caused pain and suffering for us years down-the-line, but they were priced at approximately 50% of enterprise drives. Giants wouldn’t have considered the option. Turns out, the consumer drives performed great for us.

Lesson Learned: Use calculated risks as an advantage.

8. Be Open

The larger a company grows, the more it wants to hide information. Some of this is driven by regulatory requirements as a public company. But most of this is cultural. Sharing something might cause a problem, so let’s not. All external communication is treated as a critical press release, with rounds and rounds of editing by multiple teams and approvals. However, customers are often desperate for information. Moreover, sharing information builds trust, understanding, and advocates.

I started blogging at Backblaze before we launched. When we blogged about our Storage Pod and open-sourced the design, many thought we were crazy to share this information. But it was transformative for us, establishing Backblaze as a tech thought leader in storage and giving people a sense of how we were able to provide our service at such a low cost.

Over the years we’ve developed a culture of being open internally and externally, on our blog and with the press, and in communities such as Hacker News and Reddit. Often we’ve been asked, “why would you share that!?” — but it’s the continual openness that builds trust. And that culture of openness is incredibly challenging for the giants.

Lesson Learned: Overshare to build trust and brand where giants won’t.

9. Be Human

As companies scale, typically a smaller percent of founders and executives interact with customers. The people who build the company become more hidden, the language feels “corporate,” and customers start to feel they’re interacting with the cliche “faceless, nameless corporation.” Use your humanity to your advantage. From day one the Backblaze About page listed all the founders, and my email address. While contacting us shouldn’t be the first path for a customer support question, I wanted it to be clear that we stand behind the service we offer; if we’re doing something wrong — I want to know it.

To scale it’s important to have processes and procedures, but sometimes a situation falls outside of a well-established process. While we want our employees to follow processes, they’re still encouraged to be human and “try to do the right thing.” How do you strike this balance? Simon Sinek gives a good talk about it: make your employees feel safe. If employees feel safe they’ll be human.

If your customer is a consumer, they’ll appreciate being treated as a human. Even if your customer is a corporation, the purchasing decision-makers are still people.

Lesson Learned: Being human is the ultimate antithesis to the faceless corporation.

Build Culture to Sustain Your Advantages at Scale

Presumably the goal is not to always be competing with giants, but to one day become a giant. Does this mean you’ll lose all of these advantages? Some, yes — but not all. Some of these advantages are cultural, and if you build these into the culture from the beginning, and fight to keep them as you scale, you can keep them as you become a giant.

Tesla still comes across as human, with Elon Musk frequently interacting with people on Twitter. Apple continues to provide great service through their Genius Bar. And, worst case, if you lose these at scale, you’ll still have the other advantages of being a giant such as money, people, scale, resources, and access.

Of course, some new startup will be gunning for you with grand ambitions, so just be sure not to get complacent. 😉


[$] Point releases for the GNU C Library

Post Syndicated from corbet original https://lwn.net/Articles/736429/rss

The GNU C Library (glibc) project produces regular releases on an
approximately six-month cadence. The current release is 2.26
from early August; the 2.27 release is expected at the beginning of
February 2018. Unlike many other projects, though, glibc does not normally
create point releases for important fixes between the major releases.
The last point release from glibc was 2.14.1, which came out in 2011.
A discussion on the need for a 2.26 point release led to questions about
whether such releases have a useful place in the current
software-development environment.

New KRACK Attack Against Wi-Fi Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/new_krack_attac.html

Mathy Vanhoef has just published a devastating attack against WPA2, the 14-year-old encryption protocol used by pretty much all wi-fi systems. It’s an interesting attack, where the attacker forces the protocol to reuse a key. The authors call this attack KRACK, for Key Reinstallation Attacks.

This is yet another in a series of marketed attacks, with a cool name, a website, and a logo. The Q&A on the website answers a lot of questions about the attack and its implications. And there’s lots of good information in this ArsTechnica article.

There is an academic paper, too:

“Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2,” by Mathy Vanhoef and Frank Piessens.

Abstract: We introduce the key reinstallation attack. This attack abuses design or implementation flaws in cryptographic protocols to reinstall an already-in-use key. This resets the key’s associated parameters such as transmit nonces and receive replay counters. Several types of cryptographic Wi-Fi handshakes are affected by the attack. All protected Wi-Fi networks use the 4-way handshake to generate a fresh session key. So far, this 14-year-old handshake has remained free from attacks, and is even proven secure. However, we show that the 4-way handshake is vulnerable to a key reinstallation attack. Here, the adversary tricks a victim into reinstalling an already-in-use key. This is achieved by manipulating and replaying handshake messages. When reinstalling the key, associated parameters such as the incremental transmit packet number (nonce) and receive packet number (replay counter) are reset to their initial value. Our key reinstallation attack also breaks the PeerKey, group key, and Fast BSS Transition (FT) handshake. The impact depends on the handshake being attacked, and the data-confidentiality protocol in use. Simplified, against AES-CCMP an adversary can replay and decrypt (but not forge) packets. This makes it possible to hijack TCP streams and inject malicious data into them. Against WPA-TKIP and GCMP the impact is catastrophic: packets can be replayed, decrypted, and forged. Because GCMP uses the same authentication key in both communication directions, it is especially affected.

Finally, we confirmed our findings in practice, and found that every Wi-Fi device is vulnerable to some variant of our attacks. Notably, our attack is exceptionally devastating against Android 6.0: it forces the client into using a predictable all-zero encryption key.

I’m just reading about this now, and will post more information as I learn it.

EDITED TO ADD: More news.

EDITED TO ADD: This meets my definition of brilliant. The attack is blindingly obvious once it’s pointed out, but for over a decade no one noticed it.

EDITED TO ADD: Matthew Green has a blog post on what went wrong. The vulnerability is in the interaction between two protocols. At a meta level, he blames the opaque IEEE standards process:

One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications — you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck.

The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months — coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.

This whole process is dumb and — in this specific case — probably just cost industry tens of millions of dollars. It should stop.

Nicholas Weaver explains why most people shouldn’t worry about this:

So unless your Wi-Fi password looks something like a cat’s hairball (e.g. “:SNEIufeli7rc” — which is not guessable with a few million tries by a computer), a local attacker had the capability to determine the password, decrypt all the traffic, and join the network before KRACK.

KRACK is, however, relevant for enterprise Wi-Fi networks: networks where you needed to accept a cryptographic certificate to join initially and have to provide both a username and password. KRACK represents a new vulnerability for these networks. Depending on some esoteric details, the attacker can decrypt encrypted traffic and, in some cases, inject traffic onto the network.

But in none of these cases can the attacker join the network completely. And the most significant of these attacks affect Linux devices and Android phones; they don’t affect Macs, iPhones, or Windows systems. Even when feasible, these attacks require physical proximity: an attacker on the other side of the planet can’t exploit KRACK, only an attacker in the parking lot can.

Pirate Bay’s Iconic .SE Domain has Expired (Updated)

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bays-iconic-se-domain-has-expired-and-is-for-sale-171016/

When The Pirate Bay first came online during the summer of 2003, its main point of access was thepiratebay.org.

Since then the site has burnt through more than a dozen domains, trying to evade seizures or other legal threats.

For many years thepiratebay.se operated as the site’s main domain name. Earlier this year the site moved back to the good old .org again, and from the looks of it, TPB is ready to say farewell to the Swedish domain.

Thepiratebay.se expired last week and, if nothing happens, it will be de-activated tomorrow. This means that the site might lose control over a piece of its history.

The torrent site moved from the ORG to the SE domain in 2012, fearing that US authorities would seize the former. Around that time the Department of Homeland Security took hundreds of sites offline and the Pirate Bay team feared that they would be next.

Thepiratebay.se has expired

Ironically, however, the next big threat came from Sweden, the Scandinavian country where the site once started.

In 2013, a local anti-piracy group filed a motion targeting two of The Pirate Bay’s domains, ThePirateBay.se and PirateBay.se. This case has been dragging on for years now.

During this time TPB moved back and forth between domains but the .se domain turned out to be a safer haven than most alternatives, despite the legal issues. Many other domains were simply seized or suspended without prior notice.

When the Swedish Court of Appeal eventually ruled that The Pirate Bay’s domain had to be confiscated and forfeited to the state, the site’s operators moved back to the .org domain, where it all started.

Although a Supreme Court appeal is still pending, according to a report from IDG earlier this year, the court has placed a lock on the domain. This prevents the owner from changing or transferring it, which may explain why it has expired.

The lock is relevant, as the domain has not only expired but has also been put up for sale again in the SEDO marketplace, with a minimum bid of $90. Such a sale would be impossible if the domain were locked.

Thepiratebay.se for sale

Perhaps most ironic of all is the fact that TPB moved to .se because it feared that the US-controlled .org domain was easy prey.

Fast forward half a decade and over a dozen domains have come and gone while thepiratebay.org still stands strong, despite entertainment industry pressure.

Update: We updated the article to mention that the domain name is locked by the Swedish Supreme Court. This means that it can’t be updated, which would explain why it has expired.


Tech Giants Protest Looming US Pirate Site Blocking Order

Post Syndicated from Ernesto original https://torrentfreak.com/tech-giants-protest-looming-us-pirate-site-blocking-order-171013/

While domain seizures against pirate sites are relatively common in the United States, ISP and search engine blocking is not. This could change soon, though.

In an ongoing case against Sci-Hub, regularly referred to as the “Pirate Bay of Science,” a magistrate judge in Virginia recently recommended a broad order which would require search engines and Internet providers to block the site.

The recommendation followed a request from the academic publisher American Chemical Society (ACS) that wants these third-party services to make the site in question inaccessible. While Sci-Hub has chosen not to defend itself, a group of tech giants has now stepped in to prevent the broad injunction from being issued.

This week the Computer & Communications Industry Association (CCIA), which includes members such as Cloudflare, Facebook, and Google, asked the court to limit the proposed measures. In an amicus curiae brief submitted to the Virginia District Court, they share their concerns.

“Here, Plaintiff is seeking—and the Magistrate Judge has recommended—a permanent injunction that would sweep in various Neutral Service Providers, despite their having violated no laws and having no connection to this case,” CCIA writes.

According to the tech companies, neutral service providers are not “in active concert or participation” with the defendant, and should, therefore, be excluded from the proposed order.

While search engines may index Sci-Hub and ISPs pass on packets from this site, they can’t be seen as “confederates” that are working together with them to violate the law, CCIA stresses.

“Plaintiff has failed to make a showing that any such provider had a contract with these Defendants or any direct contact with their activities—much less that all of the providers who would be swept up by the proposed injunction had such a connection.”

Even if one of the third party services could be found liable the matter should be resolved under the DMCA, which expressly prohibits such broad injunctions, the CCIA claims.

“The DMCA thus puts bedrock limits on the injunctions that can be imposed on qualifying providers if they are named as defendants and are held liable as infringers. Plaintiff here ignores that.

“What ACS seeks, in the posture of a permanent injunction against nonparties, goes beyond what Congress was willing to permit, even against service providers against whom an actual judgment of infringement has been entered. That request must be rejected.”

The tech companies hope the court will realize that the injunction recommended by the magistrate judge would set a dangerous precedent that goes beyond what the law is intended for, and will impose limits in response to their concerns.

It will be interesting to see whether any copyright holder groups will also chime in, to argue the opposite.

CCIA’s full amicus curiae brief is available here (pdf).


AWS Developer Tools Expands Integration to Include GitHub

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/aws-developer-tools-expands-integration-to-include-github/

AWS Developer Tools is a set of services that include AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Together, these services help you securely store and maintain version control of your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment. These services are designed to enable developers and IT professionals to rapidly and safely deliver software.

As part of our continued commitment to extend the AWS Developer Tools ecosystem to third-party tools and services, we’re pleased to announce AWS CodeStar and AWS CodeBuild now integrate with GitHub. This will make it easier for GitHub users to set up a continuous integration and continuous delivery toolchain as part of their release process using AWS Developer Tools.

In this post, I will walk through integrating GitHub with AWS CodeStar and using GitHub pull requests to trigger builds in AWS CodeBuild.

Prerequisites:

You’ll need an AWS account, a GitHub account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CodeStar, AWS CodeBuild, AWS CodePipeline, Amazon EC2, and Amazon S3.

 

Integrating GitHub with AWS CodeStar

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. Its unified user interface helps you easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, so you can start releasing code faster.

When AWS CodeStar launched in April of this year, it used AWS CodeCommit as the hosted source repository. You can now choose between AWS CodeCommit or GitHub as the source control service for your CodeStar projects. In addition, your CodeStar project dashboard lets you centrally track GitHub activities, including commits, issues, and pull requests. This makes it easy to manage project activity across the components of your CI/CD toolchain. Adding the GitHub dashboard view will simplify development of your AWS applications.

In this section, I will show you how to use GitHub as the source provider for your CodeStar projects. I’ll also show you how to work with recent commits, issues, and pull requests in the CodeStar dashboard.

Sign in to the AWS Management Console and from the Services menu, choose CodeStar. In the CodeStar console, choose Create a new project. You should see the Choose a project template page.

CodeStar Project

Choose an option by programming language, application category, or AWS service. I am going to choose the Ruby on Rails web application that will be running on Amazon EC2.

On the Project details page, you’ll now see the GitHub option. Type a name for your project, and then choose Connect to GitHub.

Project details

You’ll see a message requesting authorization to connect to your GitHub repository. When prompted, choose Authorize, and then type your GitHub account password.

Authorize

This connects your GitHub identity to AWS CodeStar through OAuth. You can always review your settings by navigating to your GitHub application settings.

Installed GitHub Apps

You’ll see AWS CodeStar is now connected to GitHub:

Create project

You can choose a public or private repository. GitHub offers free accounts for users and organizations working on public and open source projects and paid accounts that offer unlimited private repositories and optional user management and security features.

In this example, I am going to choose the public repository option. Edit the repository description, if you like, and then choose Next.

Review your CodeStar project details, and then choose Create Project. On Choose an Amazon EC2 Key Pair, choose Create Project.

Key Pair

On the Review project details page, you’ll see Edit Amazon EC2 configuration. Choose this link to configure instance type, VPC, and subnet options. AWS CodeStar requires a service role to create and manage AWS resources and IAM permissions. This role will be created for you when you select the “AWS CodeStar would like permission to administer AWS resources on your behalf” check box.

Choose Create Project. It might take a few minutes to create your project and resources.

Review project details

When you create a CodeStar project, you’re added to the project team as an owner. If this is the first time you’ve used AWS CodeStar, you’ll be asked to provide the following information, which will be shown to others:

  • Your display name.
  • Your email address.

This information is used in your AWS CodeStar user profile. User profiles are not project-specific, but they are limited to a single AWS region. If you are a team member in projects in more than one region, you’ll have to create a user profile in each region.

User settings


Choose Next. AWS CodeStar will create a GitHub repository with your configuration settings (for example, https://github.com/biyer/ruby-on-rails-service).

When you integrate your integrated development environment (IDE) with AWS CodeStar, you can continue to write and develop code in your preferred environment. The changes you make will be included in the AWS CodeStar project each time you commit and push your code.

IDE

After setting up your IDE, choose Next to go to the CodeStar dashboard. Take a few minutes to familiarize yourself with the dashboard. You can easily track progress across your entire software development process, from your backlog of work items to recent code deployments.

Dashboard

After the application deployment is complete, choose the endpoint that will display the application.

Pipeline

This is what you’ll see when you open the application endpoint:

The Commit history section of the dashboard lists the commits made to the Git repository. If you choose the commit ID or the Open in GitHub option, you can use a hotlink to your GitHub repository.

Commit history

Your AWS CodeStar project dashboard is where you and your team view the status of your project resources, including the latest commits to your project, the state of your continuous delivery pipeline, and the performance of your instances. This information is displayed on tiles that are dedicated to a particular resource. To see more information about any of these resources, choose the details link on the tile. The console for that AWS service will open on the details page for that resource.

Issues

You can also filter issues based on their status and the assigned user.

Filter

AWS CodeBuild Now Supports Building GitHub Pull Requests

CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can use prepackaged build environments to get started quickly or you can create custom build environments that use your own build tools.

We recently announced support for GitHub pull requests in AWS CodeBuild. This functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild. You can use the AWS CodeBuild or AWS CodePipeline consoles to run AWS CodeBuild. You can also automate the running of AWS CodeBuild by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the AWS CodeBuild Plugin for Jenkins.
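
As a quick illustration (not part of the original walkthrough), here is a minimal sketch of kicking off a build programmatically with the AWS SDK for JavaScript; the region and project name are hypothetical and assume the CodeBuild project already exists:

// npm install aws-sdk
var AWS = require('aws-sdk');
var codebuild = new AWS.CodeBuild({ region: 'us-east-1' });

// Start a build of an existing CodeBuild project by name.
codebuild.startBuild({ projectName: 'my-github-project' }, function (err, data) {
    if (err) {
        console.error(err);
        return;
    }
    console.log('Started build:', data.build.id);
});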

AWS CodeBuild

In this section, I will show you how to trigger a build in AWS CodeBuild with a pull request from GitHub through webhooks.

Open the AWS CodeBuild console at https://console.aws.amazon.com/codebuild/. Choose Create project. If you already have a CodeBuild project, you can choose Edit project, and then follow along. CodeBuild can connect to AWS CodeCommit, S3, BitBucket, and GitHub to pull source code for builds. For Source provider, choose GitHub, and then choose Connect to GitHub.

Configure

After you’ve successfully linked GitHub and your CodeBuild project, you can choose a repository in your GitHub account. CodeBuild also supports connections to any public repository. You can review your settings by navigating to your GitHub application settings.

GitHub Apps

On Source: What to Build, for Webhook, select the Rebuild every time a code change is pushed to this repository check box.

Note: You can select this option only if, under Repository, you chose Use a repository in my account.

Source

In Environment: How to build, for Environment image, select Use an image managed by AWS CodeBuild. For Operating system, choose Ubuntu. For Runtime, choose Base. For Version, choose the latest available version. For Build specification, you can provide a collection of build commands and related settings, in YAML format (buildspec.yml) or you can override the build spec by inserting build commands directly in the console. AWS CodeBuild uses these commands to run a build. In this example, the output is the string “hello.”

Environment

On Artifacts: Where to put the artifacts from this build project, for Type, choose No artifacts. (This is also the type to choose if you are just running tests or pushing a Docker image to Amazon ECR.) You also need an AWS CodeBuild service role so that AWS CodeBuild can interact with dependent AWS services on your behalf. Unless you already have a role, choose Create a role, and for Role name, type a name for your role.

Artifacts

In this example, leave the advanced settings at their defaults.

If you expand Show advanced settings, you’ll see options for customizing your build, including:

  • A build timeout.
  • A KMS key to encrypt all the artifacts that the builds for this project will use.
  • Options for building a Docker image.
  • Elevated permissions during your build action (for example, accessing Docker inside your build container to build a Dockerfile).
  • Resource options for the build compute type.
  • Environment variables (built-in or custom). For more information, see Create a Build Project in the AWS CodeBuild User Guide.

Advanced settings

You can use the AWS CodeBuild console to create a parameter in Amazon EC2 Systems Manager. Choose Create a parameter, and then follow the instructions in the dialog box. (In that dialog box, for KMS key, you can optionally specify the ARN of an AWS KMS key in your account. Amazon EC2 Systems Manager uses this key to encrypt the parameter’s value during storage and decrypt during retrieval.)

Create parameter

Choose Continue. On the Review page, either choose Save and build or choose Save to run the build later.

Choose Start build. When the build is complete, the Build logs section should display detailed information about the build.

Logs

To demonstrate a pull request, I will fork the repository as a different GitHub user, make commits to the forked repo, check in the changes to a newly created branch, and then open a pull request.

Pull request

As soon as the pull request is submitted, you’ll see CodeBuild start executing the build.

Build

GitHub sends an HTTP POST payload to the webhook’s configured URL (highlighted here), which CodeBuild uses to download the latest source code and execute the build phases.

Build project

If you expand the Show all checks option for the GitHub pull request, you’ll see that CodeBuild has completed the build, all checks have passed, and a deep link is provided in Details, which opens the build history in the CodeBuild console.

Pull request

Summary:

In this post, I showed you how to use GitHub as the source provider for your CodeStar projects and how to work with recent commits, issues, and pull requests in the CodeStar dashboard. I also showed you how you can use GitHub pull requests to automatically trigger a build in AWS CodeBuild — specifically, how this functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild.


About the author:

Balaji Iyer is an Enterprise Consultant for the Professional Services Team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly scalable distributed systems, serverless architectures, large scale migrations, operational security, and leading strategic AWS initiatives. Before he joined Amazon, Balaji spent more than a decade building operating systems, big data analytics solutions, mobile services, and web applications. In his spare time, he enjoys experiencing the great outdoors and spending time with his family.

 

JavaScript got better while I wasn’t looking

Post Syndicated from Eevee original https://eev.ee/blog/2017/10/07/javascript-got-better-while-i-wasnt-looking/

IndustrialRobot has generously donated in order to inquire:

In the last few years there seems to have been a lot of activity with adding emojis to Unicode. Has there been an equal effort to add ‘real’ languages/glyph systems/etc?

And as always, if you don’t have anything to say on that topic, feel free to choose your own. :p

Yes.

I mean, each release of Unicode lists major new additions right at the top — Unicode 10, Unicode 9, Unicode 8, etc. They also keep fastidious notes, so you can also dig into how and why these new scripts came about, by reading e.g. the proposal for the addition of Zanabazar Square. I don’t think I have much to add here; I’m not a real linguist, I only play one on TV.

So with that out of the way, here’s something completely different!

A brief history of JavaScript

JavaScript was created in seven days, about eight thousand years ago. It was pretty rough, and it stayed rough for most of its life. But that was fine, because no one used it for anything besides having a trail of sparkles follow your mouse on their Xanga profile.

Then people discovered you could actually do a handful of useful things with JavaScript, and it saw a sharp uptick in usage. Alas, it stayed pretty rough. So we came up with polyfills and jQuerys and all kinds of miscellaneous things that tried to smooth over the rough parts, to varying degrees of success.

And… that’s it. That’s pretty much how things stayed for a while.


I have complicated feelings about JavaScript. I don’t hate it… but I certainly don’t enjoy it, either. It has some pretty neat ideas, like prototypical inheritance and “everything is a value”, but it buries them under a pile of annoying quirks and a woefully inadequate standard library. The DOM APIs don’t make things much better — they seem to be designed as though the target language were Java, rarely taking advantage of any interesting JavaScript features. And the places where the APIs overlap with the language are a hilarious mess: I have to check documentation every single time I use any API that returns a set of things, because there are at least three totally different conventions for handling that and I can’t keep them straight.

The funny thing is that I’ve been fairly happy to work with Lua, even though it shares most of the same obvious quirks as JavaScript. Both languages are weakly typed; both treat nonexistent variables and keys as simply false values, rather than errors; both have a single data structure that doubles as both a list and a map; both use 64-bit floating-point as their only numeric type (though Lua added integers very recently); both lack a standard object model; both have very tiny standard libraries. Hell, Lua doesn’t even have exceptions, not really — you have to fake them in much the same style as Perl.

And yet none of this bothers me nearly as much in Lua. The differences between the languages are very subtle, but combined they make a huge impact.

  • Lua has separate operators for addition and concatenation, so + is never ambiguous. It also has printf-style string formatting in the standard library.

  • Lua’s method calls are syntactic sugar: foo:bar() just means foo.bar(foo). Lua doesn’t even have a special this or self value; the invocant just becomes the first argument. In contrast, JavaScript invokes some hand-waved magic to set its contextual this variable, which has led to no end of confusion.

  • Lua has an iteration protocol, as well as built-in iterators for dealing with list-style or map-style data. JavaScript has a special dedicated Array type and clumsy built-in iteration syntax.

  • Lua has operator overloading and (surprisingly flexible) module importing.

  • Lua allows the keys of a map to be any value (though non-scalars are always compared by identity). JavaScript implicitly converts keys to strings — and since there’s no operator overloading, there’s no way to natively fix this.

These are fairly minor differences, in the grand scheme of language design. And almost every feature in Lua is implemented in a ridiculously simple way; in fact the entire language is described in complete detail in a single web page. So writing JavaScript is always frustrating for me: the language is so close to being much more ergonomic, and yet, it isn’t.

Or, so I thought. As it turns out, while I’ve been off doing other stuff for a few years, browser vendors have been implementing all this pie-in-the-sky stuff from “ES5” and “ES6”, whatever those are. People even upgrade their browsers now. Lo and behold, the last time I went to write JavaScript, I found out that a number of papercuts had actually been solved, and the solutions were sufficiently widely available that I could actually use them in web code.

The weird thing is that I do hear a lot about JavaScript, but the feature I’ve seen raved the most about by far is probably… built-in types for working with arrays of bytes? That’s cool and all, but not exactly the most pressing concern for me.

Anyway, if you also haven’t been keeping tabs on the world of JavaScript, here are some things we missed.

let

MDN docs — supported in Firefox 44, Chrome 41, IE 11, Safari 10

I’m pretty sure I first saw let over a decade ago. Firefox has supported it for ages, but you actually had to opt in by specifying JavaScript version 1.7. Remember JavaScript versions? You know, from back in the days when people actually suggested you write stuff like this:

<SCRIPT LANGUAGE="JavaScript1.2" TYPE="text/javascript">

Yikes.

Anyway, so, let declares a variable — but scoped to the immediately containing block, unlike var, which scopes to the innermost function. The trouble with var was that it was very easy to make misleading:

// foo exists here
while (true) {
    var foo = ...;
    ...
}
// foo exists here too

If you reused the same temporary variable name in a different block, or if you expected to be shadowing an outer foo, or if you were trying to do something with creating closures in a loop, this would cause you some trouble.

But no more, because let actually scopes the way it looks like it should, the way variable declarations do in C and friends. As an added bonus, if you refer to a variable declared with let outside of where it’s valid, you’ll get a ReferenceError instead of a silent undefined value. Hooray!
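
Here’s a quick sketch of that behavior (a made-up snippet, just for illustration):

{
    let foo = 42;
    console.log(foo);  // 42
}
console.log(foo);      // ReferenceError: foo is not defined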

There’s one other interesting quirk to let that I can’t find explicitly documented. Consider:

let closures = [];
for (let i = 0; i < 4; i++) {
    closures.push(function() { console.log(i); });
}
for (let j = 0; j < closures.length; j++) {
    closures[j]();
}

If this code had used var i, then it would print 4 four times, because the function-scoped var i means each closure is sharing the same i, whose final value is 4. With let, the output is 0 1 2 3, as you might expect, because each run through the loop gets its own i.

But wait, hang on.

The semantics of a C-style for are that the first expression is only evaluated once, at the very beginning. So there’s only one let i. In fact, it makes no sense for each run through the loop to have a distinct i, because the whole idea of the loop is to modify i each time with i++.

I assume this is simply a special case, since it’s what everyone expects. We expect it so much that I can’t find anyone pointing out that the usual explanation for why it works makes no sense. It has the interesting side effect that for no longer de-sugars perfectly to a while, since this will print all 4s:

closures = [];
let i = 0;
while (i < 4) {
    closures.push(function() { console.log(i); });
    i++;
}
for (let j = 0; j < closures.length; j++) {
    closures[j]();
}

This isn’t a problem — I’m glad let works this way! — it just stands out to me as interesting. Lua doesn’t need a special case here, since it uses an iterator protocol that produces values rather than mutating a visible state variable, so there’s no problem with having the loop variable be truly distinct on each run through the loop.

Classes

MDN docs — supported in Firefox 45, Chrome 42, Safari 9, Edge 13

Prototypical inheritance is pretty cool. The way JavaScript presents it is a little bit opaque, unfortunately, which seems to confuse a lot of people. JavaScript gives you enough functionality to make it work, and even makes it sound like a first-class feature with a property outright called prototype… but to actually use it, you have to do a bunch of weird stuff that doesn’t much look like constructing an object or type.

The funny thing is, people with almost any background get along with Python just fine, and Python uses prototypical inheritance! Nobody ever seems to notice this, because Python tucks it neatly behind a class block that works enough like a Java-style class. (Python also handles inheritance without using the prototype, so it’s a little different… but I digress. Maybe in another post.)

The point is, there’s nothing fundamentally wrong with how JavaScript handles objects; the ergonomics are just terrible.

Lo! They finally added a class keyword. Or, rather, they finally made the class keyword do something; it’s been reserved this entire time.

class Vector {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }

    get magnitude() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    }

    dot(other) {
        return this.x * other.x + this.y * other.y;
    }
}

This is all just sugar for existing features: creating a Vector function to act as the constructor, assigning a function to Vector.prototype.dot, and whatever it is you do to make a property. (Oh, there are properties. I’ll get to that in a bit.)
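
Roughly speaking, and glossing over details like method enumerability (and the fact that class constructors refuse to be called without new), that class is sugar for something like this; the magnitude property part is covered in the next section:

function Vector(x, y) {
    this.x = x;
    this.y = y;
}

// Methods live on the prototype, shared by all instances.
Vector.prototype.dot = function(other) {
    return this.x * other.x + this.y * other.y;
};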

The class block can be used as an expression, with or without a name. It also supports prototypical inheritance with an extends clause and has a super pseudo-value for superclass calls.
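
For example, a subclass of the Vector class above might look like this (just an illustrative sketch):

class Vector3 extends Vector {
    constructor(x, y, z) {
        super(x, y);  // run Vector's constructor first
        this.z = z;
    }

    get magnitude() {
        return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z);
    }
}

console.log(new Vector3(1, 2, 2).magnitude);  // 3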

It’s a little weird that the inside of the class block has its own special syntax, with function omitted and whatnot, but honestly you’d have a hard time making a class block without special syntax.

One severe omission here is that you can’t declare values inside the block, i.e. you can’t just drop a bar = 3; in there if you want all your objects to share a default attribute. The workaround is to just do this.bar = 3; inside the constructor, but I find that unsatisfying, since it defeats half the point of using prototypes.

Properties

MDN docs — supported in Firefox 4, Chrome 5, IE 9, Safari 5.1

JavaScript historically didn’t have a way to intercept attribute access, which is a travesty. And by “intercept attribute access”, I mean that you couldn’t design a value foo such that evaluating foo.bar runs some code you wrote.

Exciting news: now it does. Or, rather, you can intercept specific attributes, like in the class example above. The above magnitude definition is equivalent to:

Object.defineProperty(Vector.prototype, 'magnitude', {
    configurable: true,
    enumerable: true,
    get: function() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    },
});

Beautiful.

And what even are these configurable and enumerable things? It seems that every single key on every single object now has its own set of three Boolean twiddles:

  • configurable means the property itself can be reconfigured with another call to Object.defineProperty.
  • enumerable means the property appears in for..in or Object.keys().
  • writable means the property value can be changed, which only applies to properties with real values rather than accessor functions.

The incredibly wild thing is that for properties defined by Object.defineProperty, configurable and enumerable default to false, meaning that by default accessor properties are immutable and invisible. Super weird.
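
A tiny demonstration of those defaults:

let obj = {};
Object.defineProperty(obj, 'answer', { value: 42 });
console.log(obj.answer);        // 42
console.log(Object.keys(obj));  // [], since it's not enumerable by default
obj.answer = 13;                // silently ignored (TypeError in strict mode), not writable by default
console.log(obj.answer);        // 42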

Nice to have, though. And luckily, it turns out the same syntax as in class also works in object literals.

Vector.prototype = {
    get magnitude() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    },
    ...
};

Alas, I’m not aware of a way to intercept arbitrary attribute access.

Another feature along the same lines is Object.seal(), which marks all of an object’s properties as non-configurable and prevents any new properties from being added to the object. The object is still mutable, but its “shape” can’t be changed. And of course you can just make the object completely immutable if you want, via setting all its properties non-writable, or just using Object.freeze().
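
A short sketch of the difference:

let point = Object.seal({ x: 1, y: 2 });
point.x = 5;       // fine: existing properties are still writable
point.z = 3;       // ignored (TypeError in strict mode): no new properties allowed
console.log(point.x, point.z);  // 5 undefined

let pinned = Object.freeze({ x: 1 });
pinned.x = 9;      // ignored (TypeError in strict mode): frozen also means non-writable
console.log(pinned.x);  // 1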

I have mixed feelings about the ability to irrevocably change something about a dynamic runtime. It would certainly solve some gripes of former Haskell-minded colleagues, and I don’t have any compelling argument against it, but it feels like it violates some unwritten contract about dynamic languages — surely any structural change made by user code should also be able to be undone by user code?

Slurpy arguments

MDN docs — supported in Firefox 15, Chrome 47, Edge 12, Safari 10

Officially this feature is called “rest parameters”, but that’s a terrible name, no one cares about “arguments” vs “parameters”, and “slurpy” is a good word. Bless you, Perl.

function foo(a, b, ...args) {
    // ...
}

Now you can call foo with as many arguments as you want, and every argument after the second will be collected in args as a regular array.
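
Concretely, a call with five arguments leaves the trailing three in args:

foo(1, 2, 3, 4, 5);  // inside foo: a = 1, b = 2, args = [3, 4, 5]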

You can also do the reverse with the spread operator:

let args = [];
args.push(1);
args.push(2);
args.push(3);
foo(...args);

It even works in array literals, even multiple times:

let args2 = [...args, ...args];
console.log(args2);  // [1, 2, 3, 1, 2, 3]

Apparently there’s also a proposal for allowing the same thing with objects inside object literals.
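
For reference, that proposal’s syntax (it later landed in ES2018) looks like this:

let defaults = { color: 'red', size: 'M' };
let order = { ...defaults, size: 'L' };
console.log(order);  // { color: 'red', size: 'L' }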

Default arguments

MDN docs — supported in Firefox 15, Chrome 49, Edge 14, Safari 10

Yes, arguments can have defaults now. It’s more like Sass than Python — default expressions are evaluated once per call, and later default expressions can refer to earlier arguments. I don’t know how I feel about that but whatever.

function foo(n = 1, m = n + 1, list = []) {
    ...
}

Also, unlike Python, you can have an argument with a default and follow it with an argument without a default, since the default default (!) is and always has been defined as undefined. Er, let me just write it out.

function bar(a = 5, b) {
    ...
}
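
So a caller gets the default for a by explicitly passing undefined:

bar(undefined, 2);  // a = 5, b = 2
bar(7);             // a = 7, b = undefined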

Arrow functions

MDN docs — supported in Firefox 22, Chrome 45, Edge 12, Safari 10

Perhaps the most humble improvement is the arrow function. It’s a slightly shorter way to write an anonymous function.

(a, b, c) => { ... }
a => { ... }
() => { ... }

An arrow function does not set this or some other magical values, so you can safely use an arrow function as a quick closure inside a method without having to rebind this. Hooray!
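
The classic case this fixes looks something like the following (the class is made up for illustration):

class Ticker {
    constructor() {
        this.count = 0;
    }

    start() {
        // The arrow function closes over the method's `this`; a regular
        // function expression here would get its own `this` instead.
        setInterval(() => {
            this.count++;
        }, 1000);
    }
}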

Otherwise, arrow functions act pretty much like regular functions; you can even use all the features of regular function signatures.

Arrow functions are particularly nice in combination with all the combinator-style array functions that were added a while ago, like Array.forEach.

[7, 8, 9].forEach(value => {
    console.log(value);
});

Symbol

MDN docs — supported in Firefox 36, Chrome 38, Edge 12, Safari 9

This isn’t quite what I’d call an exciting feature, but it’s necessary for explaining the next one. It’s actually… extremely weird.

symbol is a new kind of primitive (like number and string), not an object (like, er, Number and String). A symbol is created with Symbol('foo'). No, not new Symbol('foo'); that throws a TypeError, for, uh, some reason.

The only point of a symbol is as a unique key. You see, symbols have one very special property: they can be used as object keys, and will not be stringified. Remember, only strings can be keys in JavaScript — even the indices of an array are, semantically speaking, still strings. Symbols are a new exception to this rule.

Also, like other objects, two symbols don’t compare equal to each other: Symbol('foo') != Symbol('foo').
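
In practice that means a symbol key can never collide with a string key, or with anyone else’s symbol:

let key = Symbol('color');
let obj = {};
obj[key] = 'teal';
obj.color = 'mauve';            // a separate, string-keyed property
console.log(obj[key]);          // teal
console.log(obj.color);         // mauve
console.log(Object.keys(obj));  // ['color'], symbol keys don't even show up here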

The result is that symbols solve one of the problems that plagues most object systems, something I’ve talked about before: interfaces. Since an interface might be implemented by any arbitrary type, and any arbitrary type might want to implement any number of arbitrary interfaces, all the method names on an interface are effectively part of a single global namespace.

I think I need to take a moment to justify that. If you have IFoo and IBar, both with a method called method, and you want to implement both on the same type… you have a problem. Because most object systems consider “interface” to mean “I have a method called method”, with no way to say which interface’s method you mean. This is a hard problem to avoid, because IFoo and IBar might not even come from the same library. Occasionally languages offer a clumsy way to “rename” one method or the other, but the most common approach seems to be for interface designers to avoid names that sound “too common”. You end up with redundant mouthfuls like IFoo.foo_method.

This incredibly sucks, and the only languages I’m aware of that avoid the problem are the ML family and Rust. In Rust, you define all the methods for a particular trait (interface) in a separate block, away from the type’s “own” methods. It’s pretty slick. You can still do obj.method(), and as long as there’s only one method among all the available traits, you’ll get that one. If not, there’s syntax for explicitly saying which trait you mean, which I can’t remember because I’ve never had to use it.

Symbols are JavaScript’s answer to this problem. If you want to define some interface, you can name its methods with symbols, which are guaranteed to be unique. You just have to make sure you keep the symbol around somewhere accessible so other people can actually use it. (Or… not?)

The interesting thing is that JavaScript now has several of its own symbols built in, allowing user objects to implement features that were previously reserved for built-in types. For example, you can use the Symbol.hasInstance symbol — which is simply where the language is storing an existing symbol and is not the same as Symbol('hasInstance')! — to override instanceof:

// oh my god don't do this though
class EvenNumber {
    static [Symbol.hasInstance](obj) {
        return obj % 2 == 0;
    }
}
console.log(2 instanceof EvenNumber);  // true
console.log(3 instanceof EvenNumber);  // false

Oh, and those brackets around Symbol.hasInstance are a sort of reverse-quoting — they indicate an expression to use where the language would normally expect a literal identifier. I think they work as object keys, too, and maybe some other places.
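
For what it’s worth, they do work as computed keys in object literals:

let name = 'dynamic' + ' key';
let obj = { [name]: 1 };
console.log(obj['dynamic key']);  // 1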

The equivalent in Python is to implement a method called __instancecheck__, a name which is not special in any way except that Python has reserved all method names of the form __foo__. That’s great for Python, but doesn’t really help user code. JavaScript has actually outclassed (ho ho) Python here.

Of course, obj[BobNamespace.some_method]() is not the prettiest way to call an interface method, so it’s not perfect. I imagine this would be best implemented in user code by exposing a polymorphic function, similar to how Python’s len(obj) pretty much just calls obj.__len__().
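That might look something like this; size and Playlist are hypothetical names, not anything built in.

const size = Symbol('size');

// a len()-style wrapper: callers use this function, implementors
// provide a method named by the `size` symbol
function len(obj) {
    return obj[size]();
}

class Playlist {
    constructor(tracks) {
        this.tracks = tracks;
    }
    [size]() {
        return this.tracks.length;
    }
}

console.log(len(new Playlist(['one', 'two', 'three'])));  // 3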

I only bring this up because it’s the plumbing behind one of the most incredible things in JavaScript that I didn’t even know about until I started writing this post. I’m so excited oh my gosh. Are you ready? It’s:

Iteration protocol

MDN docs — supported in Firefox 27, Chrome 39, Safari 10; still experimental in Edge

Yes! Amazing! JavaScript has first-class support for iteration! I can’t even believe this.

It works pretty much how you’d expect, or at least, how I’d expect. You give your object a method called Symbol.iterator, and that returns an iterator.

What’s an iterator? It’s an object with a next() method that returns the next value and whether the iterator is exhausted.
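Concretely, driving an array’s built-in iterator by hand looks like this:

let it = ['one', 'two'][Symbol.iterator]();
console.log(it.next());   // { value: 'one', done: false }
console.log(it.next());   // { value: 'two', done: false }
console.log(it.next());   // { value: undefined, done: true }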

Wait, wait, wait a second. Hang on. The method is called next? Really? You didn’t go for Symbol.next? Python 2 did exactly the same thing, then realized its mistake and changed it to __next__ in Python 3. Why did you do this?

Well, anyway. My go-to test of an iterator protocol is how hard it is to write an equivalent to Python’s enumerate(), which takes a list and iterates over its values and their indices. In Python it looks like this:

for i, value in enumerate(['one', 'two', 'three']):
    print(i, value)
# 0 one
# 1 two
# 2 three

It’s super nice to have, and I’m always amazed when languages with “strong” “support” for iteration don’t have it. Like, C# doesn’t. So if you want to iterate over a list but also need indices, you need to fall back to a C-style for loop. And if you want to iterate over a lazy or arbitrary iterable but also need indices, you need to track it yourself with a counter. Ridiculous.

Here’s my attempt at building it in JavaScript.

function enumerate(iterable) {
    // Return a new iter*able* object with a Symbol.iterator method that
    // returns an iterator.
    return {
        [Symbol.iterator]: function() {
            let iterator = iterable[Symbol.iterator]();
            let i = 0;

            return {
                next: function() {
                    let nextval = iterator.next();
                    if (! nextval.done) {
                        nextval.value = [i, nextval.value];
                        i++;
                    }
                    return nextval;
                },
            };
        },
    };
}
for (let [i, value] of enumerate(['one', 'two', 'three'])) {
    console.log(i, value);
}
// 0 one
// 1 two
// 2 three

Incidentally, for..of (which iterates over a sequence, unlike for..in which iterates over keys — obviously) is finally supported in Edge 12. Hallelujah.

Oh, and let [i, value] is destructuring assignment, which is also a thing now and works with objects as well. You can even use the splat operator with it! Like Python! (And you can use it in function signatures! Like Python! Wait, no, Python decided that was terrible and removed it in 3…)

let [x, y, ...others] = ['apple', 'orange', 'cherry', 'banana'];

It’s a Halloween miracle. 🎃

Generators

MDN docs — supported in Firefox 26, Chrome 39, Edge 13, Safari 10

That’s right, JavaScript has goddamn generators now. It’s basically just copying Python and adding a lot of superfluous punctuation everywhere. Not that I’m complaining.

Also, generators are themselves iterable, so I’m going to cut to the chase and rewrite my enumerate() with a generator.

function enumerate(iterable) {
    return {
        [Symbol.iterator]: function*() {
            let i = 0;
            for (let value of iterable) {
                yield [i, value];
                i++;
            }
        },
    };
}
for (let [i, value] of enumerate(['one', 'two', 'three'])) {
    console.log(i, value);
}
// 0 one
// 1 two
// 2 three

Amazing. function* is a pretty strange choice of syntax, but whatever? I guess it also lets them make yield only act as a keyword inside a generator, for ultimate backwards compatibility.

JavaScript generators support everything Python generators do: yield* yields every item from a subsequence, like Python’s yield from; generators can return final values; you can pass values back into the generator if you iterate it by hand. No, really, I wasn’t kidding, it’s basically just copying Python. It’s great. You could now build asyncio in JavaScript!
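A quick sketch exercising all three of those, driving the generator by hand:

function* inner() {
    let x = yield 1;          // x receives whatever the caller passes to next()
    yield x + 1;
    return 'done';            // becomes the value of the yield* expression
}

function* outer() {
    let result = yield* inner();   // yields 1 and x + 1, then captures the return value
    yield result;
}

let gen = outer();
console.log(gen.next());      // { value: 1, done: false }
console.log(gen.next(41));    // { value: 42, done: false }
console.log(gen.next());      // { value: 'done', done: false }
console.log(gen.next());      // { value: undefined, done: true }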

In fact, they did that! JavaScript now has async and await. An async function returns a Promise, which is also a built-in type now. Amazing.
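A minimal sketch, using nothing but the built-in Promise and a plain setTimeout:

function delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

async function greet() {
    await delay(1000);        // suspends here without blocking anything else
    return 'hello';
}

greet().then(value => console.log(value));   // prints "hello" after a second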

Sets and maps

MDN docs for Map; MDN docs for Set — supported in Firefox 13, Chrome 38, IE 11, Safari 7.1

I did not save the best for last. This is much less exciting than generators. But still exciting.

The only data structure in JavaScript is the object, a map where the keys are strings. (Or now, also symbols, I guess.) That means you can’t readily use custom values as keys, nor simulate a set of arbitrary objects. And you have to worry about people mucking with Object.prototype, yikes.

But now, there’s Map and Set! Wow.

Unfortunately, because JavaScript, Map couldn’t use the indexing operators without losing the ability to have methods, so you have to use a boring old method-based API. But Map has convenient methods that plain objects don’t, like entries() to iterate over pairs of keys and values. In fact, you can use a map with for..of to get key/value pairs. So that’s nice.
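A small example of both:

let scores = new Map();
let key = {name: 'bob'};        // any value works as a key, not just strings
scores.set(key, 42);
console.log(scores.get(key));   // 42
console.log(scores.size);       // 1

for (let [k, v] of scores) {    // iterates [key, value] pairs
    console.log(k.name, v);     // bob 42
}

let seen = new Set([1, 2, 2, 3]);
console.log(seen.size);         // 3, duplicates are dropped
console.log(seen.has(2));       // true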

Perhaps more interesting, there’s also now a WeakMap and WeakSet, where the keys are weak references. I don’t think JavaScript had any way to do weak references before this, so that’s pretty slick. There’s no obvious way to hold a weak value, but I guess you could substitute a WeakSet with only one item.

Template literals

MDN docs — supported in Firefox 34, Chrome 41, Edge 12, Safari 9

Template literals are JavaScript’s answer to string interpolation, which has historically been a huge pain in the ass because it doesn’t even have string formatting in the standard library.

They’re just strings delimited by backticks instead of quotes. They can span multiple lines and contain expressions.

console.log(`one plus
two is ${1 + 2}`);

Someone decided it would be a good idea to allow nesting more sets of backticks inside a ${} expression, so, good luck to syntax highlighters.

However, someone also had the most incredible idea ever, which was to add syntax allowing user code to do the interpolation — so you can do custom escaping, when absolutely necessary, which is virtually never, because “escaping” means you’re building a structured format by slopping strings together willy-nilly instead of using some API that works with the structure.

// OF COURSE, YOU SHOULDN'T BE DOING THIS ANYWAY; YOU SHOULD BUILD HTML WITH
// THE DOM API AND USE .textContent FOR LITERAL TEXT.  BUT AS AN EXAMPLE:
function html(literals, ...values) {
    let ret = [];
    literals.forEach((literal, i) => {
        if (i > 0) {
            // Is there seriously still not a built-in function for doing this?
            // Well, probably because you SHOULDN'T BE DOING IT
            ret.push(values[i - 1]
                .replace(/&/g, '&amp;')
                .replace(/</g, '&lt;')
                .replace(/>/g, '&gt;')
                .replace(/"/g, '&quot;')
                .replace(/'/g, '&apos;'));
        }
        ret.push(literal);
    });
    return ret.join('');
}
let username = 'Bob<script>';
let result = html`<b>Hello, ${username}!</b>`;
console.log(result);
// <b>Hello, Bob&lt;script&gt;!</b>

It’s a shame this feature is in JavaScript, the language where you are least likely to need it.

Trailing commas

Remember how you couldn’t do this for ages, because ass-old IE considered it a syntax error and would reject the entire script?

{
    a: 'one',
    b: 'two',
    c: 'three',  // <- THIS GUY RIGHT HERE
}

Well now it’s part of the goddamn spec and if there’s anything in this post you can rely on, it’s this. In fact you can use AS MANY GODDAMN TRAILING COMMAS AS YOU WANT. But only in arrays.

[1, 2, 3,,,,,,,,,,,,,,,,,,,,,,,,,]

Apparently that has the bizarre side effect of reserving extra space at the end of the array, without putting values there.

And more, probably

Like strict mode, which makes a few silent “errors” be actual errors, forces you to declare variables (no implicit globals!), and forbids the completely bozotic with block.
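For example:

'use strict';
x = 5;   // ReferenceError: x is not defined, instead of a silent global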

Or String.prototype.trim(), which trims whitespace off of strings.

Or… Math.sign()? That’s new? Seriously? Well, okay.
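Both are as dull as they sound:

console.log('  hello  '.trim());   // "hello"
console.log(Math.sign(-5));        // -1
console.log(Math.sign(0));         // 0
console.log(Math.sign(3.7));       // 1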

Or the Proxy type, which lets you customize indexing and assignment and calling. Oh. I guess that is possible, though this is a pretty weird way to do it; why not just use symbol-named methods?
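A tiny sketch of a get trap, just to show the shape of the thing:

let fallback = new Proxy({color: 'red'}, {
    get(target, prop) {
        // called for every property read on the proxy
        return prop in target ? target[prop] : 'missing!';
    },
});
console.log(fallback.color);    // red
console.log(fallback.flavor);   // missing!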

You can write Unicode escapes for astral plane characters in strings (or identifiers!), as \u{XXXXXXXX}.
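For example:

console.log('\u{1F383}');                       // 🎃
console.log('\u{1F383}' === '\uD83C\uDF83');    // true, same surrogate pair underneath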

There’s a const now? I extremely don’t care, just name it in all caps and don’t reassign it, come on.

There’s also a mountain of other minor things, which you can peruse at your leisure via MDN or the ECMAScript compatibility tables (note the links at the top, too).

That’s all I’ve got. I still wouldn’t say I’m a big fan of JavaScript, but it’s definitely making an effort to clean up some goofy inconsistencies and solve common problems. I think I could even write some without yelling on Twitter about it now.

On the other hand, if you’re still stuck supporting IE 10 for some reason… well, er, my condolences.

timeShift(GrafanaBuzz, 1w) Issue 16

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/06/timeshiftgrafanabuzz-1w-issue-16/

Welcome to another issue of TimeShift. In addition to the roundup of articles and plugin updates, we had a big announcement this week – Early Bird tickets to GrafanaCon EU are now available! We’re also accepting CFPs through the end of October, so if you have a topic in mind, don’t wait until the last minute, please send it our way. Speakers who are selected will receive a comped ticket to the conference.


Early Bird Tickets Now Available

We’ve released a limited number of Early Bird tickets before General Admission tickets are available. Take advantage of this discount before they’re sold out!

Get Your Early Bird Ticket Now

Interested in speaking at GrafanaCon? We’re looking for technical and non-technical talks of all sizes. Submit a CFP Now.


From the Blogosphere

Get insights into your Azure Cosmos DB: partition heatmaps, OMS, and More: Microsoft recently announced the ability to access a subset of Azure Cosmos DB metrics via Azure Monitor API. Grafana Labs built an Azure Monitor Plugin for Grafana 4.5 to visualize the data.

How to monitor Docker for Mac/Windows: Brian was tired of guessing about the performance of his development machines and test environment. Here, he shows how to monitor Docker with Prometheus to get a better understanding of a dev environment in his quest to monitor all the things.

Prometheus and Grafana to Monitor 10,000 servers: This article covers enokido’s process of choosing a monitoring platform. He identifies three possible solutions, outlines the pros and cons of each, and discusses why he chose Prometheus.

GitLab Monitoring: It’s fascinating to see Grafana dashboards with production data from companies around the world. For instance, we’ve previously highlighted the huge number of dashboards Wikimedia publicly shares. This week, we found that GitLab also has public dashboards to explore.

Monitoring a Docker Swarm Cluster with cAdvisor, InfluxDB and Grafana | The Laboratory: It’s important to know the state of your applications in a scalable environment such as Docker Swarm. This video covers an overview of Docker, VM’s vs. containers, orchestration and how to monitor Docker Swarm.

Introducing Telemetry: Actionable Time Series Data from Counters: Learn how to use counters from multiple disparate sources, devices, operating systems, and applications to generate actionable time series data.

ofp_sniffer Branch 1.2 (docker/influxdb/grafana) Upcoming Features: This video demo shows off some of the upcoming features for OFP_Sniffer, an OpenFlow sniffer to help network troubleshooting in production networks.


Grafana Plugins

Plugin authors add new features and bugfixes all the time, so it’s important to always keep your plugins up to date. To update plugins on an on-prem Grafana installation, use the grafana-cli tool; if you are using Hosted Grafana, you can update with one click! If you have questions or need help, hit up our community site, where the Grafana team and members of the community are happy to help.

UPDATED PLUGIN

PNP for Nagios Data Source – The latest release for the PNP data source has some fixes and adds a mathematical factor option.


UPDATED PLUGIN

Google Calendar Data Source – This week, there was a small bug fix for the Google Calendar annotations data source.


UPDATED PLUGIN

BT Plugins – Our friends at BT have been busy. All of the BT plugins in our catalog received an update this week. The plugins are the Status Dot Panel, the Peak Report Panel, the Trend Box Panel and the Alarm Box Panel.

Changes include:

  • Custom dashboard links now work in Internet Explorer.
  • The Peak Report panel no longer supports click-to-sort.
  • The Status Dot panel tooltips now look like Grafana tooltips.


This week’s MVC (Most Valuable Contributor)

Each week we highlight some of the important contributions from our amazing open source community. This week, we’d like to recognize a contributor who did a lot of work to improve Prometheus support.

pdoan017
Thanks to Alin Sinpalean for his Prometheus PR that aligns the step and interval parameters. Alin got a lot of feedback from the Prometheus community and spent a lot of time and energy explaining, debating and iterating before the PR was ready.
Thank you!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Wow – Excited to be a part of exploring data to find out how Mexico City is evolving.

We Need Your Help!

Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

Tell Me More


What do you think?

That’s a wrap! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Porn Copyright Trolls Terrify 60-Year-Old But Age Shouldn’t Matter

Post Syndicated from Andy original https://torrentfreak.com/porn-copyright-trolls-terrify-60-year-old-but-age-shouldnt-matter-171002/

Of all the anti-piracy tactics deployed over the years, the one that has proven most controversial is so-called copyright-trolling.

The idea is that rather than take content down, copyright holders make use of its online availability to watch people who are sharing that material while gathering their IP addresses.

From there it’s possible to file a lawsuit to obtain that person’s identity but these days they’re more likely to short-cut the system, by asking ISPs to forward notices with cash settlement demands attached.

When subscribers receive these demands, many feel compelled to pay. However, copyright trolls are cunning beasts, and while they initially ask for payment for a single download, they very often have several other claims up their sleeves. Once people have paid one, others come out of the woodwork.

That’s what appears to have happened to a 60-year-old Canadian woman called ‘Debra’. In an email sent via her ISP, she was contacted by local anti-piracy outfit Canipre, who accused her of downloading and sharing porn. With threats that she could be ‘fined’ up to CAD$20,000 for her alleged actions, she paid the company $257.40, despite claiming her innocence.

Of course, at this point the company knew her name and address, and this week it contacted her again, accusing her of another five illegal porn downloads alongside demands for more cash.

“I’m not sleeping,” Debra told CBC. “I have depression already and this is sending me over the edge.”

If the public weren’t so fatigued by this kind of story, people in Debra’s position might get more attention and more help, but they don’t. To be absolutely brutal, the only reason why this story is getting press is due to a few factors.

Firstly, we’re talking here about a woman accused of downloading porn. While far from impossible, it’s at least statistically less likely than if it were a man. Secondly, Debra is 60 years old. That doesn’t preclude her from being Internet savvy but it does tip the odds in her favor somewhat. Thirdly, Debra suffers from depression and claims she didn’t carry out those downloads.

On the balance of probabilities, on which these cases live or die, she sounds believable. Had she been a 20-year-old man, however, few people would believe ‘him’, and this is exactly the environment that companies like Canipre and Rightscorp bank on.

Debra says she won’t pay the additional fines but Canipre is adamant that someone in her house pirated the porn, despite her husband not being savvy enough to download. The important part here is that Debra says she did not commit an offense and, with all the technology in the world, Canipre cannot prove that she did.

“How long is this going to terrorize me?” Debra says. “I’m a good Canadian citizen.”

But Debra isn’t on her own and she’s positively spritely compared to Christine McMillan, who last year, at the age of 86, was accused of illegally downloading the zombie game Metro 2033. Again, those accusations came from Canipre and while the case eventually went quiet, you can safely bet the company backed off.

So who is to blame for situations like Debra’s and Christine’s? It’s a difficult question.

Clearly, copyright holders feel they’re within their rights to try and claw back compensation for their perceived losses but they already have a legal system available to them, if they want to use it. Instead, however, in Canada they’re abusing the so-called notice-and-notice system, which requires ISPs to forward infringement notices from copyright holders to subscribers.

The government knows there is a problem. Law professor Michael Geist previously obtained a government report, which expresses concern over the practice. Its summary is shown below.

Advice summary

While the notice-and-notice regime requires ISPs to forward educational copyright infringement notices, most ISPs complain that companies like Canipre add on cash settlement demands.

“Internet intermediaries complain…that the current legislative framework does not expressly prohibit this practice and that they feel compelled to forward on such notices to their subscribers when they receive them from copyright holders,” recent advice to the Minister of Innovation, Science and Economic Development reads.

That being said, there’s nothing stopping ISPs from passing on the educational notices as required by law but insisting that all demands for cash payments are removed. It’s a position that could even get support from the government, if enough pressure was applied.

“The sending of such notices could lead to abuses, given that consumers may be pressured into making payments even in situations where they have not engaged in any acts that violate copyright laws,” government advice notes.

Given the growing problem, it appears that ISPs have the power here so maybe it’s time they protected their customers. In the meantime, consumers have responsibilities too, not only by refraining from infringing copyright, but by becoming informed of their rights.

“[T]here is no legal obligation to pay any settlement offered by a copyright owner, and the regime does not impose any obligations on a subscriber who receives a notice, including no obligation to contact the copyright owner or the Internet intermediary,” government advice notes.

Hopefully, in future, people won’t have to be old or ill to receive sympathy for being wrongly accused and threatened in their own homes. But until then, people should pressure their ISPs to do more while staying informed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Judge Recommends ISP and Search Engine Blocking of Sci-Hub in the US

Post Syndicated from Ernesto original https://torrentfreak.com/judge-recommends-isp-search-engine-blocking-sci-hub-us-171003/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site. In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

While the request is rather unprecedented for the US, as it includes search engine and ISP blocking, Magistrate Judge John Anderson has included these measures in his recommendations.

Judge Anderson agrees that Sci-Hub is guilty of copyright and trademark infringement. In addition to $4,800,000 in statutory damages, he recommends a broad injunction that would require search engines, ISPs, domain registrars and other services to block Sci-Hub’s domain names.

“… the undersigned recommends that it be ordered that any person or entity in privity with Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works.”

The recommendation

In addition to the above, domain registries and registrars will also be required to suspend Sci-Hub’s domain names. This also happened previously in a different lawsuit, but Sci-Hub swiftly moved to a new domain at the time.

“Finally, the undersigned recommends that it be ordered that the domain name registries and/or registrars for Sci-Hub’s domain names and websites, or their technical administrators, shall place the domain names on registryHold/serverHold or such other status to render the names/sites non-resolving,” the recommendation adds.

If the U.S. District Court Judge adopts this recommendation, it would mean that Internet providers such as Comcast could be ordered to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States.

This would likely trigger a response from affected Internet services, who generally want to avoid being dragged into these cases. They certainly don’t want such a far-reaching measure to be introduced through a default order.

Sci-Hub itself doesn’t seem to be too bothered by the blocking prospect or the millions in damages it faces. The site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

Magistrate Judge John Anderson’s full findings of fact and recommendations are available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

PlayerUnknown’s Battlegrounds on a Game Boy?!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/playerunknowns-battlegrounds-game-boy/

My evenings spent watching the Polygon Awful Squad play PlayerUnknown’s Battlegrounds for hours on end have made me mildly obsessed with this record-breaking Steam game.

PlayerUnknown's Battlegrounds Raspberry Pi

So when Michael Darby’s latest PUBG-inspired Game Boy build appeared in my notifications last week, I squealed with excitement and quickly sent the link to my team…while drinking a cocktail by a pool in Turkey ☀️🍹

PUBG ON A GAMEBOY

https://314reactor.com/ https://www.hackster.io/314reactor https://twitter.com/the_mikey_d

PlayerUnknown’s Battlegrounds

For those unfamiliar with the game: PlayerUnknown’s Battlegrounds, or PUBG for short, is a Battle-Royale-style multiplayer online video game in which individuals or teams fight to the death on an island map. As players collect weapons, ammo, and transport, their ‘safe zone’ shrinks, forcing a final face-off until only one character remains.

The game has been an astounding success on Steam, the digital distribution platform which brings PUBG to the masses. It records daily player counts of over a million!

PlayerUnknown's Battlegrounds Raspberry Pi

Yeah, I’d say one or two people seem to enjoy it!

PUBG on a Game Boy?!

As it’s a fairly complex game, let’s get this out of the way right now: no, Michael is not running the entire game on a Nintendo Game Boy. That would be magic silly impossible. Instead, he’s streaming the game from his home PC to a Raspberry Pi Zero W fitted within the hacked handheld console.

Michael removed the excess plastic inside an old Game Boy Color shell to make space for a Zero W, LiPo battery, and TFT screen. He then soldered the necessary buttons to GPIO pins, and wrote a Python script to control them.

PlayerUnknown's Battlegrounds Raspberry Pi

The maker battleground

The full script can be found here, along with a more detailed tutorial for the build.

In order to stream PUBG to the Zero W, Michael uses Moonlight, an open-source client for NVIDIA’s game streaming service. He set his PC’s screen resolution to 800×600 and its frame rate to 30, so that streaming the game to the TFT screen works perfectly, albeit with no sound.

PlayerUnknown's Battlegrounds Raspberry Pi

The end result is a rather impressive build that has confused YouTube commenters since he uploaded footage for it last week. The video has more than 60000 views to date, so it appears we’re not the only ones impressed with Michael’s make.

314reactor

If you’re a regular reader of our blog, you may recognise Michael’s name from his recent Nerf blaster mod. And fans of Raspberry Pi may also have seen his Pi-powered Windows 98 wristwatch earlier in the year. He blogs at 314reactor, where you can read more about his digital making projects.

Windows 98 Wrist watch Raspberry Pi PlayerUnknown's Battlegrounds

Player Two has entered the game

Now it’s your turn. Have you used a Raspberry Pi to create a gaming system? I’m not just talking arcades and RetroPie here. We want to see everything, from Pi-powered board games to tech on the football field.

Share your builds in the comments below and while you’re at it, what game would you like to stream to a handheld device?

The post PlayerUnknown’s Battlegrounds on a Game Boy?! appeared first on Raspberry Pi.

HDClub, Russia’s Leading HD-Only Torrent Site, Returns as EliteHD

Post Syndicated from Ernesto original https://torrentfreak.com/hdclub-russias-leading-hd-only-torrent-site-returns-as-elitehd-170930/

With around 170,000 users, HDClub was known for high-quality releases that often leaked to public sites like The Pirate Bay.

Describing itself as “The HighDefinition BitTorrent Community”, HDClub specialized in HD productions including Blu-ray and 3D content, covering movies, TV shows, music videos, and animation.

The site was the largest of its kind in Russia and had been around for a long time. It celebrated its tenth anniversary a few months ago and during this time it amassed over 170,000 members, which is quite significant for a private community.

However, last month the fun was over. As a total surprise to most of the members, HDClub’s operators decided to shut down the site. A Russian language announcement now present on its main page explains the reasons for the site’s demise.

“Recently, we received several dozens of complaints from rightsholders weekly, and our community is subjected to attacks and espionage. In parallel, there is a tightening of Internet legislation in Russia, Ukraine and EU countries,” the announcement explained.

This grim outlook was, however, paired with a glimmer of hope. “There are talks on preserving the heritage of the club,” the site teased.

This was not a false promise, it turned out this week. The former foundation of HDClub now forms the basis of a new tracker. EliteHD takes over where HDClub left off with a working copy of the code, torrents and user database.

“Welcome to the closed tracker elitehd.org. We will try to increase the best HD collection and ensure your safety and confidentiality,” EliteHD’s operators posted in a Russian announcement earlier this week.

“The new site received a full copy of the database and the code of the closed HDClub. The user base has been thoroughly cleaned, there will be no free registration,” it adds.

EliteHD’s torrents

“Thoroughly cleaned” means that around 80,000 accounts were removed and the new maximum is currently set at 100,000 registered users. The torrent database is intact though. There are over 26,000 HD torrents in the database totaling more than 500 terabytes of data.

The site’s operators note that members can continue to seed old torrents as well. All they have to do is change the torrent’s announce URL in their client, and uploads should pick up again.

In recent weeks there have been other private trackers which tried to get former HDClub users on board, but it will be hard to compete with a site that has the real database and code.

EliteHD specifically warns people not to fall for fakes and ‘unofficial’ incarnations of its predecessor. “We strongly recommend that you beware of numerous fake projects and ‘successors’,” the site operators stress.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Browser hacking for 280 character tweets

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/browser-hacking-for-280-character-tweets.html

Twitter has raised the limit to 280 characters for a select number of people. However, they left open a hole, allowing anybody to make large tweets with a little bit of hacking. The hacking skills needed are basic hacking skills, which I thought I’d write up in a blog post.


Specifically, the skills you will exercise are:

  • basic command-line shell
  • basic HTTP requests
  • basic browser DOM editing

The short instructions

The basic instructions were found in tweets, most notably Dildog’s, which I refer to again below.
These instructions are clear to the average hacker, but of course, a bit difficult for those learning hacking, hence this post.

The command-line

The basics of most hacking start with knowledge of the command-line. This is the “Terminal” app under macOS or cmd.exe under Windows. Almost always when you see hacking dramatized in the movies, they are using the command-line.
In the beginning, the command-line is all computers had. To do anything on a computer, you had to type a “command” telling it what to do. What we see as the modern graphical screen is a layer on top of the command-line, one that translates clicks of the mouse into the raw commands.
On most systems, the command-line is known as “bash”. This is what you’ll find on Linux and macOS. Windows historically has had a different command-line that uses slightly different syntax, though in the last couple years, they’ve also supported “bash”. You’ll have to install it first, such as by following these instructions.
You’ll see me use command that may not be yet installed on your “bash” command-line, like nc and curl. You’ll need to run a command to install them, such as:
sudo apt-get install nc curl
The thing to remember about the command-line is that the mouse doesn’t work. You can’t click to move the cursor as you normally do in applications. That’s because the command-line predates the mouse by decades. Instead, you have to use arrow keys.
I’m not going to spend much effort discussing the command-line, as a complete explanation is beyond the scope of this document. Instead, I’m assuming the reader either already knows it, or will learn-from-example as we go along.

Web requests

The basics of how the web works are really simple. A request to a web server is just a small packet of text, such as the following, which does a search on Google for the search-term “penguin” (presumably, you are interested in knowing more about penguins):
GET /search?q=penguin HTTP/1.0
Host: www.google.com
User-Agent: human
The command we are sending to the server is GET, meaning get a page. We are accessing the URL /search, which on Google’s website, is how you do a search. We are then sending the parameter q with the value penguin. We also declare that we are using version 1.0 of the HTTP (hyper-text transfer protocol).
Following the first line there are a number of additional headers. In one header, we declare the Host name that we are accessing. Web servers can contain many different websites, with different names, so this header is usually important.
We also add the User-Agent header. The “user-agent” means the “browser” that you use, like Edge, Chrome, Firefox, or Safari. It allows servers to send content optimized for different browsers. Since we are sending web requests without a browser here, we are joking around saying human.
Here’s what happens when we use the nc program to send this to a google web server:
The first part is us typing, until we hit the [enter] key to create a blank line. After that point is the response from the Google server. We get back a result code (OK), followed by more headers from the server, and finally the contents of the webpage, which goes on for many screens. (We’ll talk about what web pages look like below).
Note that a lot of HTTP headers are optional and really have little influence on what’s going on. They are just junk added to web requests. For example, we see Google report a P3P header, a relic of 2002 that nobody uses anymore, as far as I can tell. Indeed, if you follow the URL in the P3P header, Google pretty much says exactly that.
I point this out because the request I show above is a simplified one. In practice, most requests contain a lot more headers, especially Cookie headers. We’ll see that later when making requests.

Using cURL instead

Sending the raw HTTP request to the server, and getting raw HTTP/HTML back, is annoying. The better way of doing this is with the tool known as cURL, or plainly, just curl. You may be familiar with the older command-line tool wget. cURL is similar, but more flexible.
To use curl for the experiment above, we’d do something like the following. We are saving the web page to “penguin.html” instead of just spewing it on the screen.
Underneath, cURL builds an HTTP header just like the one we showed above, and sends it to the server, getting the response back.

Web-pages

Now let’s talk about web pages. When you look at the web page we got back from Google while searching for “penguin”, you’ll see that it’s intimidatingly complex. I mean, it intimidates me. But it all starts from some basic principles, so we’ll look at some simpler examples.
The following is text of a simple web page:
<html>
<body>
<h1>Test</h1>
<p>This is a simple web page</p>
</body>
</html>
This is HTML, “hyper-text markup language”. As its name implies, we “markup” text, such as declaring the first text as a level-1 header (H1), and the following text as a paragraph (P).
In a web browser, this gets rendered as something that looks like the following. Notice how a header is formatted differently from a paragraph. Also notice that web browsers can use local files as well as make remote requests to web servers:
You can right-mouse click on the page and do a “View Source”. This will show the raw source behind the web page:
Web pages don’t just contain marked-up text. They contain two other important features, style information that dictates how things appear, and script that does all the live things that web pages do, from which we build web apps.
So let’s add a little bit of style and scripting to our web page. First, let’s view the source we’ll be adding:
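Roughly, the new source looks like the following (a reconstruction consistent with the description below; the exact markup and the way the click handler is wired up in the original may differ slightly):

<html>
<style>
#mytitle {
    color: blue;
    text-align: center;
}
</style>
<script>
window.onload = function() {
    document.getElementById("mytitle").onclick = function() {
        alert("hello");
    };
};
</script>
<body>
<h1 id="mytitle">Test</h1>
<p>This is a simple web page</p>
</body>
</html>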
In our header (H1) field, we’ve added the attribute to the markup giving this an id of mytitle. In the style section above, we give that element a color of blue, and tell it to align to the center.
Then, in our script section, we’ve told it that when somebody clicks on the element “mytitle”, it should send an “alert” message of “hello”.
This is what our web page now looks like, with the center blue title:
When we click on the title, we get a popup alert:
Thus, we see an example of the three components of a webpage: markup, style, and scripting.

Chrome developer tools

Now we go off the deep end. Right-mouse click on “Test” (not normal click, but right-button click, to pull up a menu). Select “Inspect”.
You should now get a window that looks something like the following. Chrome splits the screen in half, showing the web page on the left, and its debug tools on the right.
This looks similar to what “View Source” shows, but it isn’t. Instead, it’s showing how Chrome interpreted the source HTML. For example, our style/script tags should’ve been marked up with a head (header) tag. We forgot it, but Chrome adds it in anyway.
What Chrome is showing us is called the DOM, or document object model. It shows us all the objects that make up a web page, and how they fit together.
For example, it shows us how the style information for #mytitle is created. It first starts with the default style information for an h1 tag, and then how we’ve changed it with our style specifications.
We can edit the DOM manually. Just double click on things you want to change. For example, in this screen shot, I’ve changed the style spec from blue to red, and I’ve changed the header and paragraph text. The original file on disk hasn’t changed, but I’ve changed the DOM in memory.
This is a classic hacking technique. If you don’t like things like paywalls, for example, just right-click on the element blocking your view of the text, “Inspect” it, then delete it. (This works for some paywalls).
This edits the markup and style info, but changing the scripting stuff is a bit more complicated. To do that, click on the [Console] tab. This is the scripting console, and allows you to run code directly as part of the webpage. We are going to run code that resets what happens when we click on the title. In this case, we are simply going to change the message to “goodbye”.
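Assuming the handler was attached via onclick, as in the sketch earlier, the code pasted into the console would be something like:

document.getElementById("mytitle").onclick = function() {
    alert("goodbye");
};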
Now when we click on the title, we indeed get the message:
Again, a common way to get around paywalls is to run some code like that that change which functions will be called.

Putting it all together

Now let’s put this all together in order to hack Twitter to allow us (the non-chosen) to tweet 280 characters. Review Dildog’s instructions above.
The first step is to get to Chrome Developer Tools. Dildog suggests F12. I suggest right-clicking on the Tweet button (or Reply button, as I use in my example) and doing “Inspect”, as I describe above.
You’ll now see your screen split in half, with the DOM toward the right, similar to how I describe above. However, Twitter’s app is really complex. Well, not really complex, it’s all basic stuff when you come right down to it. It’s just so much stuff — it’s a large web app with lots of parts. So we have to dive in without understanding everything that’s going on.
The Tweet/Reply button we are inspecting is going to look like this in the DOM:
The Tweet/Reply button is currently greyed out because it has the “disabled” attribute. You need to double click on it and remove that attribute. Also, in the class attribute, there is also a “disabled” part. Double-click, then click on that and removed just that disabled as well, without impacting the stuff around it. This should change the button from disabled to enabled. It won’t be greyed out, and it’ll respond when you click on it.
Now click on it. You’ll get an error message, as shown below:
What we’ve done here is bypass what’s known as client-side validation. The script in the web page prevented sending Tweets longer than 140 characters. Our editing of the DOM changed that, allowing us to send a bad request to the server. Bypassing client-side validation this way is the source of a lot of hacking.
But Twitter still does server-side validation as well. They know any client-side validation can be bypassed, and are in on the joke. They tell us hackers “You’ll have to be more clever”. So let’s be more clever.
In order to make longer 280-character tweets work for select customers, they had to change something on the server side. The thing they added was a “weighted_character_count=true” parameter to the HTTP request. We just need to repeat the request we generated above, adding this parameter.
In theory, we can do this by fiddling with the scripting. The way Dildog describes does it a different way. He copies the request out of the browser, edits it, then sends it via the command-line using curl.
We’ve used the [Elements] and [Console] tabs in Chrome’s DevTools. Now we are going to use the [Network] tab. This lists all the requests the web page has made to the server. The twitter app is constantly making requests to refresh the content of the web page. The request we made trying to do a long tweet is called “create”, and is red, because it failed.
Google Chrome gives us a number of ways to duplicate the request. The most useful is that it copies it as a full cURL command we can just paste onto the command-line. We don’t even need to know cURL, it takes care of everything for us. On Windows, since you have two command-lines, it gives you a choice to use the older Windows cmd.exe, or the newer bash.exe. I use the bash version, since I don’t know where to get the Windows command-line version of cURL.exe.
There’s a lot going on here. The first thing to notice is the long xxxxxx strings. That’s actually not in the original screenshot. I edited the picture. That’s because these are session-cookies. If you inserted them into your browser, you’d hijack my Twitter session, and be able to tweet as me (such as making Carlos Danger style tweets). Therefore, I have to remove them from the example.
At the top of the screen is the URL that we are accessing, which is https://twitter.com/i/tweet/create. Much of the rest of the screen uses the cURL -H option to add a header. These are all the HTTP headers that I describe above. Finally, at the bottom, is the --data section, which contains the data bits related to the tweet, especially the tweet itself.
We need to edit either the URL above to read https://twitter.com/i/tweet/create?weighted_character_count=true, or we need to add &weighted_character_count=true to the --data section at the bottom (either works). Remember: the mouse doesn’t work on the command-line, so you have to use the cursor keys to navigate backwards in the line. Also, since the line is larger than the screen, it wraps onto several visual lines, even though it’s all a single line as far as the command-line is concerned.
Now just hit [return] on your keyboard, and the tweet will be sent to the server, which at the moment, works. Presto!
Twitter will either enable or disable the feature for everyone in a few weeks, at which point, this post won’t work. But the reason I’m writing this is to demonstrate the basic hacking skills. We manipulate the web pages we receive from servers, and we manipulate what’s sent back from our browser back to the server.

Easier: hack the scripting

Instead of messing with the DOM and editing the HTTP request, the better solution would be to change the scripting that does both DOM client-side validation and HTTP request generation. The only reason Dildog above didn’t do that is that it’s a lot more work trying to find where all this happens.
Others have, though. @Zemnmez did just that, though his technique works for the alternate TweetDeck client (https://tweetdeck.twitter.com) instead of the default client. Go copy his code from here, then paste it into the DevTools scripting [Console]. It’ll go in and replace some scripting functions, much like my simpler example above.
The console is showing a stream of error messages, because TweetDeck has bugs, ignore those.
Now you can effortlessly do long tweets as normal, without all the messing around I’ve spent so much text in this blog post describing.
Now, as I’ve mentioned this before, you are only editing what’s going on in the current web page. If you refresh this page, or close it, everything will be lost. You’ll have to re-open the DevTools scripting console and repaste the code. The easier way of doing this is to use the [Sources] tab instead of [Console] and use the “Snippets” feature to save this bit of code in your browser, to make it easier next time.
The even easier way is to use Chrome extensions like TamperMonkey and GreaseMonkey that’ll take care of this for you. They’ll save the script, and automatically run it when they see you open the TweetDeck webpage again.
An even easier way is to use one of the several Chrome extensions written in the past day specifically designed to bypass the 140 character limit. Since the purpose of this blog post is to show you how to tamper with your browser yourself, rather than help you with Twitter, I won’t list them.

Conclusion

Tampering with the web-page the server gives you, and the data you send back, is a basic hacker skill. In truth, there is a lot to this. You have to get comfortable with the command-line, using tools like cURL. You have to learn how HTTP requests work. You have to understand how web pages are built from markup, style, and scripting. You have to be comfortable using Chrome’s DevTools for messing around with web page elements, network requests, scripting console, and scripting sources.
So it’s rather a lot, actually.
My hope with this page is to show you a practical application of all this, without getting too bogged down in fully explaining how every bit works.

Julia Reda MEP Likened to Nazi in Sweeping Anti-Pirate Rant

Post Syndicated from Andy original https://torrentfreak.com/julia-reda-mep-likened-to-nazi-in-sweeping-anti-pirate-rant-170926/

The debate over copyright and enforcement thereof is often polarized, with staunch supporters on one side, objectors firmly on the other, and never the twain shall meet.

As a result, there have been some heated battles over the years, with pro-copyright bodies accusing pirates of theft and pirates accusing pro-copyright bodies of monopolistic tendencies. While neither claim is particularly pleasant, they have become staples of this prolonged war of words and as such, many have become desensitized to their original impact.

This morning, however, musician and staunch pro-copyright activist David Lowery published an article which pours huge amounts of gas on the fire. The headline goes straight for the jugular, asking: Why is it Every Time We Turn Over a Pirate Rock White Nationalists, Nazi’s and Bigots Scurry Out?

Lowery’s opening gambit in his piece on The Trichordist is that one only has to scratch below the surface of the torrent and piracy world in order to find people aligned with the above-mentioned groups.

“Why is it every time we dig a little deeper into the pro-piracy and torrenting movement we find key figures associated with ‘white nationalists,’ Nazi memorabilia collectors, actual Nazis or other similar bigots? And why on earth do politicians, journalists and academics sing the praises of these people?” Lowery asks.

To prove his point, the Camper Van Beethoven musician digs up the fact that former Pirate Bay financier Carl Lündstrom had some fairly unsavory neo-fascist views. While this is not in doubt, Lowery is about ten years too late if he wants to tar The Pirate Bay with the extremist brush.

“It’s called guilt by association,” Pirate Bay co-founder Peter Sunde explained in 2007.

“One of our previous ISPs [owned by Lündstrom] (with clients like The Red Cross, Save the Children foundation etc) gave us cheap bandwidth since one of the guys in TPB worked there; and one of the owners [has a reputation] for his political opinions. That does NOT make us in any way associated to what political views anyone else might or might not have.”

After dealing with TPB but failing to include the above explanation, Lowery moves on to a more recent target, Megaupload founder Kim Dotcom. Dotcom owns an extremely rare signed copy of Hitler’s autobiographical manifesto, Mein Kampf (My Struggle) and once wore a German World War II helmet. It’s a mistake Prince Harry made in 2005 too.

“I’ve bought memorabilia from Churchill, from Stalin, from Hitler,” Dotcom said in response to the historical allegations. “Let me make absolutely clear, OK. I’m not buying into the Nazi ideology. I’m totally against what the Nazis did.”

With Dotcom dealt with, Lowery then turns his attention to the German Pirate Party’s Julia Reda. As a Member of the European Parliament, Reda has made it her mission to deal with overreaching copyright law, which has made her a bit of a target. That being said, would anyone really try to shoehorn her into the “White Nationalists, Nazi’s and Bigots” bracket?

They would.

In his piece, Lowery highlights comments made by Reda last year, when she complained about the copyright situation developing around the diary written by Anne Frank, which detailed the horrors of living in occupied countries during World War II.

Anne Frank died in 1945 which means that the book was elevated into the public domain in the Netherlands on January 1, 2016, 70 years after her death. A copy was made available at Wikisource, a digital library of free texts maintained by the Wikimedia Foundation, which also operates Wikipedia.

However, in early February that same year, Anne Frank’s diary became unavailable, since U.S. copyright law dictates that works are protected for 95 years from date of publication.

“Today, in an unfortunate example of the overreach of the United States’ current copyright law, the Wikimedia Foundation removed the Dutch-language text of The Diary of a Young Girl,” said Jacob Rogers, Legal Counsel for the Wikimedia Foundation.

“We took this action to comply with the United States’ Digital Millennium Copyright Act (DMCA), as we believe the diary is still under US copyright protection under the law as it is currently written,” he added.

Lowery ignores this background in its entirety. He actually ignores all of it in an effort to paint a picture of Reda engaging in some far-right agenda. Lowery even places emphasis on Reda’s nationality to force his point home.

“I don’t really know what to make of her except to say that this German politician really should find something other than the Anne Frank Diary and the Anne Frank Foundation to use as an example of a work that should be freely available in the public domain,” he writes.

“Think of all the copyrighted works out there for which she might reasonably argue a claim of public domain. She decided to pick the Anne Frank diary. Hmm.”

Lowery then accuses Reda of urging people on Twitter to pirate the book, in order to hurt the fight against anti-Semitism and somehow deprive Jewish people of an income.

“After all sales of the book are used by the Anne Frank Foundation to fight anti-semitism. It’s really quite a bad look for any MP, German or not. (Even if it is just the make-believe LARPing RPG EU Parliament),” Lowery writes.

“Or maybe that is the point? Defund the Anne Frank Foundation. Cause you know I read in the twittersphere that copyright producing media conglomerates are controlled by you-know-who.”

At this point, Lowery moves on to Fight For the Future, stating that their lack of racial diversity caused them to stumble into a racially charged copyright dispute involving the famous Martin Luther King speech.

The whole article can be read here but hopefully, most readers will recognize that America needs less division right now, not more hatred.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

FRED-209 Nerf gun tank

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/nerf-gun-tank-fred-209/

David Pride, known to many of you as an active member of our maker community, has done it again! His FRED-209 build combines a Nerf gun, 3D printing, a Raspberry Pi Zero, and robotics to make one neat remotely controlled Nerf tank.

FRED-209 – 3D printed Raspberry Pi Nerf Tank

Uploaded by David Pride on 2017-09-17.

A Nerf gun for FRED-209

David says he worked on FRED-209 over the summer in order to have some fun with Nerf guns, which weren’t around when he was a kid. He purchased an Elite Stryfe model at a car boot sale, and took it apart to see what made it tick. Then he set about figuring out how to power it with motors and a servo.

Nerf Elite Stryfe components for the FRED-209 Nerf tank of David Pride

To control the motors, David used a ZeroBorg add-on board for the Pi Zero, and he set up a PlayStation 3 controller to pilot his tank. These components were also part of a robot that David entered into the Pi Wars competition, so he had already written code for them.

3D printing for FRED-209

During prototyping for his Nerf tank, which David named after ED-209 from RoboCop, he used lots of eBay loot and several 3D-printed parts. He used the free OpenSCAD software package to design the parts he wanted to print. If you’re a novice at 3D printing, you might find the printing advice he shares in the write-up on his blog very useful.

3D-printed lid of FRED-209 nerf gun tank by David Pride

David found the 3D printing of the 24cm-long lid of FRED-209 tricky

On eBay, David found some cool-looking chunky wheels, but these turned out to be too heavy for the motors. In the end, he decided to use a Rover 5 chassis, which changed the look of FRED-209 from ‘monster truck’ to ‘tank’.

FRED-209 Nerf tank by David Pride

Next step: teach it to use stairs

The final result looks awesome, and David’s video demonstrates that it shoots very accurately as well. A make like this might be a great defensive project for our new apocalypse-themed Pioneers challenge!

Taking FRED-209 further

David will be uploading code and STL files for FRED-209 soon, so keep an eye on his blog or Twitter for updates. He’s also bringing the Nerf tank to the Cotswold Raspberry Jam this weekend. If you’re attending the event, make sure you catch him and try FRED-209 out yourself.

Never one to rest on his laurels, David is already working on taking his build to the next level. He wants to include a web interface controller and a camera, and is working on implementing OpenCV to give the Nerf tank the ability to autonomously detect targets.

Pi Wars 2018

I have a feeling we might get to see an advanced version of David’s project at next year’s Pi Wars!

The 2018 Pi Wars have just been announced. They will take place on 21-22 April at the Cambridge Computer Laboratory, and you have until 3 October to apply to enter the competition. What are you waiting for? Get making! And as always, do share your robot builds with us via social media.

The post FRED-209 Nerf gun tank appeared first on Raspberry Pi.

DevOps Cafe Episode 75 – Barbara Bouldin

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/9/20/devops-cafe-episode-75-barbara-bouldin.html

A lot has changed (but some things haven’t) 

John and Damon chat with Barbara Bouldin about her first-hand view of the good — and the ugly — through the past few decades of the technology industry. From Bell Labs to the breakup of AT&T (“Ma Bell”) to enterprise software to transforming government agencies today, Barbara’s journey has been an interesting ride.

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Barbara Bouldin on Twitter: @bbouldin771

Notes:

Please tweet or leave comments or questions below and we’ll read them on the show!

Stream Ripping Piracy Goes From Bad to Worse, Music Industry Reports

Post Syndicated from Ernesto original https://torrentfreak.com/stream-ripping-piracy-goes-from-bad-to-worse-music-industry-reports-170919/

Free music is easy to find nowadays. Just head over to YouTube and you can find millions of tracks including many of the most recent releases.

While the music industry profits from the advertisements on many of these videos, it’s not happy with the current state of affairs. Record labels complain about a “value gap” and go as far as accusing the video streaming platform of operating a DMCA protection racket.

YouTube doesn’t agree with this stance and points to the billions of dollars it pays copyright holders. Still, the music industry is far from impressed.

Today, IFPI has released a new music consumer insight report that highlights this issue once again, while pointing out that YouTube accounts for more than half of all music video streaming.

“User upload services, such as YouTube, are heavily used by music consumers and yet do not return fair value to those who are investing in and creating the music. The Value Gap remains the single biggest threat facing the music world today and we are campaigning for a legislative solution,” IFPI CEO Frances Moore writes.

The report also zooms in on piracy, and on “stream ripping” in particular, which is another YouTube- and Google-related issue. While the phenomenon is more than a decade old, it’s now the main source of music piracy, the report states.

A survey conducted in the world’s leading music industry markets reveals that 35% of all Internet users are stream rippers, up from 30% last year. In total, 40% of all respondents admitted to obtaining unlicensed music.

35% stream ripping (source IFPI)

This means that the vast majority of all music pirates (roughly 35 of those 40 percentage points, or nearly nine in ten) use stream ripping tools. The practice is particularly popular among the youngest age group, where more than half of all Internet users admit to ripping music, and the share declines as age increases.

Taking another stab at Google, the report further notes that more than half of all pirates use the popular search engine to find unlicensed music.

Stream rippers are young (source IFPI)

TorrentFreak spoke to former RIAA executive Neil Turkewitz, who has been very vocal about the stream ripping problem. He now heads his own consulting group, which focuses on expanding economic and cultural prosperity, particularly online.

Stream ripping is a “double whammy,” Turkewitz says, as it’s undermining both streaming and distribution markets. This affects the bottom line of labels and artists, so YouTube should do more to block stream rippers and converters from exploiting the service.

“YouTube and Alphabet talk of their commitment to expanding opportunities for creators. This is an opportunity to prove it,” Turkewitz informs TF.

“Surely the company that, as Eric Schmidt likes to say, ‘knows what people want before they know it’ has the capacity to develop tools to address problems that inhibit the development of a robust online market that sustains creators.”

While stream ripping remains rampant, there is a positive development the music industry can cling to.

Two weeks ago the major record labels managed to take down YouTube-MP3, the largest ripping site of all. While this is a notable success, there are many sites and tools like it that continue business as usual.

Inside the MPAA, Netflix & Amazon Global Anti-Piracy Alliance

Post Syndicated from Andy original https://torrentfreak.com/inside-the-mpaa-netflix-amazon-global-anti-piracy-alliance-170918/

The idea of collaboration in the anti-piracy arena isn’t new but an announcement this summer heralded what is destined to become the largest project the entertainment industry has ever seen.

The Alliance for Creativity and Entertainment (ACE) is a coalition of 30 companies that reads like a who’s who of the global entertainment market. In alphabetical order its members are:

Amazon, AMC Networks, BBC Worldwide, Bell Canada and Bell Media, Canal+ Group, CBS Corporation, Constantin Film, Foxtel, Grupo Globo, HBO, Hulu, Lionsgate, Metro-Goldwyn-Mayer (MGM), Millennium Media, NBCUniversal, Netflix, Paramount Pictures, SF Studios, Sky, Sony Pictures Entertainment, Star India, Studio Babelsberg, STX Entertainment, Telemundo, Televisa, Twentieth Century Fox, Univision Communications Inc., Village Roadshow, The Walt Disney Company, and Warner Bros. Entertainment Inc.

The aim of the project is clear. Instead of each company considering its anti-piracy operations as a distinct island, ACE will bring them all together while presenting a united front to decision and lawmakers. At the core of the Alliance will be the MPAA.

“ACE, with its broad coalition of creators from around the world, is designed, specifically, to leverage the best possible resources to reduce piracy,” outgoing MPAA chief Chris Dodd said in June.

“For decades, the MPAA has been the gold standard for antipiracy enforcement. We are proud to provide the MPAA’s worldwide antipiracy resources and the deep expertise of our antipiracy unit to support ACE and all its initiatives.”

Since then, ACE and its members have been silent on the project. Today, however, TorrentFreak can pull back the curtain, revealing how the agreement between the companies will play out, who will be in control, and how much the scheme will cost.

Power structure: Founding Members & Executive Committee Members

Netflix, Inc., Amazon Studios LLC, Paramount Pictures Corporation, Sony Pictures Entertainment, Inc., Twentieth Century Fox Film Corporation, Universal City Studios LLC, Warner Bros. Entertainment Inc., and Walt Disney Studios Motion Pictures, are the ‘Founding Members’ (Governing Board) of ACE.

These companies are granted full voting rights on ACE business, including the approval of initiatives and public policy, anti-piracy strategy, budget-related matters, plus approval of legal action. Not least, they’ll have the power to admit or expel ACE members.

All actions taken by the Governing Board (never to exceed nine members) need to be approved by consensus, with each Founding Member able to vote for or against decisions. Members are also allowed to abstain but one persistent objection will be enough to stop any matter being approved.

The second tier – ‘Executive Committee Members’ – is comprised of all the other companies in the ACE project (as listed above, minus the Governing Board). These companies will not be allowed to vote on ACE initiatives but can present ideas and strategies. They’ll also be allowed to suggest targets for law enforcement action while utilizing the MPAA’s anti-piracy resources.

Rights of all members

While all members of ACE can utilize the alliance’s resources, none are barred from simultaneously ‘going it alone’ on separate anti-piracy initiatives. None of these strategies and actions need approval from the Founding Members, provided they’re carried out in a company’s own name and at its own expense.

Information obtained by TorrentFreak indicates that the MPAA also reserves the right to carry out anti-piracy actions in its own name or on behalf of its member studios. The pattern here is different, since the MPAA’s global anti-piracy resources are the same resources being made available to the ACE alliance, which members have paid to share.

Expansion of ACE

While ACE membership is already broad, the alliance is prepared to take on additional members, providing certain criteria are met. Crucially, any prospective additions must be owners or producers of movies and/or TV shows. The Governing Board will then vet applicants to ensure that they meet the criteria for acceptance as new Executive Committee Members.

ACE Operations

The nine Governing Board members will meet at least four times a year, with each nominating a senior executive to serve as its representative. The MPAA’s General Counsel will take up the position of non-voting member of the Governing Board and will chair its meetings.

Matters to be discussed include formulating and developing the alliance’s ‘Global Anti-Piracy Action Plan’ and approving and developing the budget. ACE will also form an Anti-Piracy Working Group, which is scheduled to meet at least once a month.

On a daily basis, the MPAA and its staff will attend to the business of the ACE alliance. The MPAA will carry out its own work too but when presenting to outside third parties, it will clearly state which “hat” it is currently wearing.

Much deliberation has taken place over who should be the official spokesperson for ACE. Documents obtained by TF suggest that the MPAA planned to hire a consulting firm to find a person for the role, seeking a professional with international experience who had never previously been connected with the MPAA.

They appear to have settled on Zoe Thorogood, who previously worked for British Prime Minister David Cameron.

Money, money, money

Of course, the ACE program isn’t going to fund itself, so all members are required to contribute to the operation. The MPAA has opened a dedicated bank account under its control specifically for the purpose, with members contributing depending on status.

Founding/Governing Board Members will be required to commit $5m each annually. However, none of the studios that are MPAA members will have to hand over any cash, since they already fund the MPAA, on whose anti-piracy resources ACE is built.

“Each Governing Board Member will contribute annual dues in an amount equal to $5 million USD. Payment of dues shall be made bi-annually in equal shares, payable at the beginning of each six (6) month period,” the ACE agreement reads.

“The contribution of MPAA personnel, assets and resources…will constitute and be considered as full payment of each MPAA Member Studio’s Governing Board dues.”

That leaves just Netflix and Amazon paying the full amount of $5m in cash each.

From each company’s contribution, $1m will be paid into legal trust accounts allocated to each Governing Board member. If ACE-agreed litigation and legal expenses exceed that amount for the year, members will be required to top up their accounts to cover their share of the costs.

For the remaining 21 companies on the Executive Committee, annual dues are $200,000 each, to be paid in one installment at the start of the financial year – $4.2m all in. Of all dues paid by all members from both tiers, half will be used to boost anti-piracy resources, over and above what the MPAA will spend on the same during 2017.

“Fifty percent (50%) of all dues received from Global Alliance Members other than the MPAA Member Studios…shall, as agreed by the Governing Board, be used (a) to increase the resources spent on online antipiracy over and above….the amount of MPAA’s 2017 Content Protection Department budget for online antipiracy initiatives/operations,” an internal ACE document reads.
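For anyone tallying the cash flows described above, here is a small, hypothetical back-of-the-envelope calculation that uses only the figures quoted in this article; the MPAA member studios pay in kind rather than in cash, so they are excluded from the cash total.

```python
# Back-of-the-envelope tally of ACE's annual cash dues, using only the figures
# reported above. The MPAA member studios contribute personnel and resources
# instead of cash, so they are excluded here.
governing_cash_payers = 2        # Netflix and Amazon
governing_dues = 5_000_000       # per Governing Board member, per year (USD)
exec_members = 21                # Executive Committee companies
exec_dues = 200_000              # per Executive Committee member, per year (USD)

cash_total = governing_cash_payers * governing_dues + exec_members * exec_dues
print(f"Annual cash dues: ${cash_total:,}")  # Annual cash dues: $14,200,000
```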

Intellectual property

As the project moves forward, the Alliance expects to gain certain knowledge and experience. On the back of that, the MPAA hopes to grow its intellectual property portfolio.

“Absent written agreement providing otherwise, any and all data, intellectual property, copyrights, trademarks, or know-how owned and/or contributed to the Global Alliance by MPAA, or developed or created by the MPAA or the Global Alliance during the Term of this Charter, shall remain and/or become the exclusive property of the MPAA,” the ACE agreement reads.

That being said, all Governing Board Members will also be granted “perpetual, irrevocable, non-exclusive licenses” to use the same under certain rules, even in the event they leave the ACE initiative.

Terms and extensions

Any member may withdraw from the Alliance at any point, but there will be no refunds. Additionally, any financial commitment previously made to litigation will have to be honored by the member.

The ACE agreement has an initial term of two years but Governing Board Members will meet not less than three months before it is due to expire to vote on any extension.

To be continued……

With the internal structure of ACE now revealed, all that remains is to discover the contents of the initiative’s ‘Global Anti-Piracy Action Plan’. To date, that document has proven elusive but with an operation of such magnitude, future leaks are a distinct possibility.

People can’t read (Equifax edition)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/people-cant-read-equifax-edition.html

One of these days I’m going to write a guide for journalists reporting on the cyber. One of the items I’d stress is that they often fail to read the text of what is being said, but instead read some sort of subtext that wasn’t explicitly said. This is valid sometimes, as the subtext is what the writer intended all along, even if they didn’t explicitly write it. Other times, though, the imagined subtext is not what the writer intended at all.

A good example is the recent Equifax breach. The original statement says:

Equifax Inc. (NYSE: EFX) today announced a cybersecurity incident potentially impacting approximately 143 million U.S. consumers.

The word consumers was widely translated to customers, as in this Bloomberg story:

Equifax Inc. said its systems were struck by a cyberattack that may have affected about 143 million U.S. customers of the credit reporting agency

But these aren’t the same thing. Equifax is a credit rating agency, keeping data on people who are not its own customers. It’s an important difference.

Another good example is yesterday’s quote “confirming” that the “Apache Struts” vulnerability was to blame:

Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638.

But it doesn’t confirm Struts was responsible. Blaming Struts is certainly the subtext of this paragraph, but it’s not the text. The statement mentions that criminals exploited the Struts vulnerability, but it doesn’t actually connect the dots to the breach we are all talking about.

There are probably reasons for this. While it’s easy for forensics to find evidence of Struts exploitation in logfiles, it’s much harder to connect this to the breach. While they suspect Struts, they may not actually be able to confirm it. Or maybe they are trying to cover things up, because they feel that failing to patch is a lesser crime than what they really did.

It’s at this point that journalists should earn their pay. Instead of rewriting what they read on the Internet, they could do some legwork and call up Equifax PR and ask.

The purpose of this post isn’t to discuss Equifax, but the tendency of people to “read between the lines”, to read some subtext that wasn’t actually expressed in the text. Sometimes the subtext is legitimately there, such as how Equifax clearly intends people to blame Struts even though it doesn’t say so outright. Sometimes the subtext isn’t there, such as how Equifax doesn’t mean its own customers, only “U.S. consumers”. Journalists need to be careful about making assumptions about the subtext.


Update: The Equifax CSO has a degree in music. Some people have criticized this. Most people have defended this, pointing out that almost nobody has an “infosec” degree in our industry, and many of the top people have no degree at all. Among others, @thegrugq has pointed out that infosec degrees are only a few years old — they weren’t around 20 years ago when today’s corporate officers were getting their degrees.

Again, we have the text/subtext problem, where people interpret infosec degrees as being the same as computer science degrees, the latter of which have existed for decades. Some, as in this case, consider them to be wildly different. Others consider them to be nearly the same.

AWS Partner Webinar Series – September & October 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-partner-webinar-series-september-october-2017/

The wait is over. September and October’s Partner Webinars have officially arrived! In case you missed the intro last month, the AWS Partner Webinar Series is a selection of live and recorded presentations covering a broad range of topics at varying technical levels and scale. A little different from our AWS Online TechTalks, each AWS Partner Webinar is hosted by an AWS solutions architect and an AWS Competency Partner who has successfully helped customers evaluate and implement the tools, techniques, and technologies of AWS.

September & October Partner Webinars:

SAP Migration

- Velocity: How EIS Reduced Costs by 20% and Optimized SAP by Leveraging the Cloud (September 19, 2017 | 10:00 AM PDT)
- Mactores: SAP on AWS: How UCT is Experiencing Better Performance on AWS While Saving 60% in Infrastructure Costs with Mactores (September 19, 2017 | 1:00 PM PDT)
- Accenture: Reduce Operating Costs and Accelerate Efficiency by Migrating Your SAP Applications to AWS with Accenture (September 20, 2017 | 10:00 AM PDT)
- Capgemini: Accelerate your SAP HANA Migration with Capgemini & AWS FAST (September 21, 2017 | 10:00 AM PDT)

Salesforce

- Salesforce IoT: Monetize your IoT Investment with Salesforce and AWS (September 27, 2017 | 10:00 AM PDT)
- Salesforce Heroku: Build Engaging Applications with Salesforce Heroku and AWS (October 10, 2017 | 10:00 AM PDT)

Windows Migration

- Cascadeo: How a National Transportation Software Provider Migrated a Mission-Critical Test Infrastructure to AWS with Cascadeo (September 26, 2017 | 10:00 AM PDT)
- Datapipe: Optimize App Performance and Security by Managing Microsoft Workloads on AWS with Datapipe (September 27, 2017 | 10:00 AM PDT)
- Datavail: Datavail Accelerates AWS Adoption for Sony DADC New Media Solutions (September 28, 2017 | 10:00 AM PDT)

Life Sciences

- SAP, Deloitte & Turbot: Life Sciences Compliance on AWS (October 4, 2017 | 10:00 AM PDT)

Healthcare

- AWS, ClearData & Cloudticity: Healthcare Compliance on AWS (October 5, 2017 | 10:00 AM PDT)

Storage

- N2WS: Learn How Goodwill Industries Ensures 24/7 Data Availability on AWS (October 10, 2017 | 8:00 AM PDT)

Big Data

- Zoomdata: Taking Complexity Out of Data Science with AWS and Zoomdata (October 10, 2017 | 10:00 AM PDT)
- Attunity: Cardinal Health: Moving Data to AWS in Real-Time with Attunity (October 11, 2017 | 11:00 AM PDT)
- Splunk: How TrueCar Gains Actionable Insights with Splunk Cloud (October 18, 2017 | 9:00 AM PDT)