Tag Archives: Library

Eevee gained 2791 experience points

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/15/eevee-gained-2791-experience-points/

Eevee grew to level 31!

A year strongly defined by mixed success! Also, a lot of video games.

I ran three game jams, resulting in a total of 157 games existing that may not have otherwise, which is totally mindblowing?!

For GAMES MADE QUICK???, glip and I made NEON PHASE, a short little exploratory platformer. Honestly, I should give myself more credit for this and the rest of the LÖVE games I’ve based on the same codebase — I wove a physics engine (and everything else!) from scratch and it has held up remarkably well for a variety of different uses.

I successfully finished an HD version of Isaac’s Descent using my LÖVE engine, though it doesn’t have anything new over the original and I’ve only released it as a tech demo on Patreon.

For Strawberry Jam (NSFW!) we made fox flux (slightly NSFW!), which felt like a huge milestone: the first game where I made all the art! I mean, not counting Isaac’s Descent, which was for a very limited platform. It’s a pretty arbitrary milestone, yes, but it feels significant. I’ve been working on expanding the game into a longer and slightly less buggy experience, but the art is taking the longest by far. I must’ve spent weeks on player sprites alone.

We then set about working on Bolthaven, a sequel of sorts to NEON PHASE, and got decently far, and then abandoned it. Oops.

We then started a cute little PICO-8 game, and forgot about it. Oops.

I was recruited to help with Chaos Composer, a more ambitious game glip started with someone else in Unity. I had to get used to Unity, and we squabbled a bit, but the game is finally about at the point where it’s “playable” and “maps” can be designed? It’s slightly on hold at the moment while we all finish up some other stuff, though.

We made a birthday game for two of our friends whose birthdays were very close together! Only they got to see it.

For Ludum Dare 38, we made Lunar Depot 38, a little “wave shooter” or whatever you call those? The AI is pretty rough, seeing as this was the first time I’d really made enemies and I had 72 hours to figure out how to do it, but I still think it’s pretty fun to play and I love the circular world.

I made Roguelike Simulator as an experiment with making something small and quick with a simple tool, and I had a lot of fun! I definitely want to do more stuff like this in the future.

And now we’re working on a game about Star Anise, my cat’s self-insert, which is looking to have more polish and depth than anything we’ve done so far! We’ve definitely come a long way in a year.

Somewhere along the line, I put out a call for a “potluck” project, where everyone would give me sprites of a given size without knowing what anyone else had contributed, and I would then make a game using only those sprites. Unfortunately, that stalled a few times: I tried using the Phaser JS library, but we didn’t get along; I tried LÖVE, but didn’t know where to go with the game; and then I decided to use this as an experiment with procedural generation, and didn’t get around to it. I still feel bad that everyone did work for me and I didn’t follow through, but I don’t know whether this will ever become a game.

veekun, alas, consumed months of my life. I finally got Sun and Moon loaded, but it took weeks of work since I was basically reinventing all the tooling we’d ever had from scratch, without even having most of that tooling available as a reference. It was worth it in the end, at least: Ultra Sun and Ultra Moon only took a few days to get loaded. But veekun itself is still missing some obvious Sun/Moon features, and the whole site needs an overhaul, and I just don’t know if I want to dedicate that much time to it when I have so much other stuff going on that’s much more interesting to me right now.

I finally turned my blog into more of a website, giving it a neat front page that lists a bunch of stuff I’ve done. I made a release category at last, though I’m still not quite in the habit of using it.

I wrote some blog posts, of course! I think the most interesting were JavaScript got better while I wasn’t looking and Object models. I was also asked to write a couple pieces for money for a column that then promptly shut down.

On a whim, I made a set of Eevee mugshots for Doom, which I think is a decent indication of my (pixel) art progress over the year?

I started idchoppers, a Doom parsing and manipulation library written in Rust, though it didn’t get very far and I’ve spent most of the time fighting with Rust because it won’t let me implement all my extremely bad ideas. It can do a couple things, at least, like flip maps very quickly and render maps to SVG.

I did toy around with music a little, but not a lot.

I wrote two short twines for Flora. They’re okay. I’m working on another; I think it’ll be better.

I didn’t do a lot of art overall, at least compared to the two previous years; most of my art effort over the year has gone into fox flux, which requires me to learn a whole lot of things. I did dip my toes into 3D modelling, most notably producing my current Twitter banner as well as this cool Star Anise animation. I wouldn’t mind doing more of that; maybe I’ll even try to make a low-poly pixel-textured 3D game sometime.

I restarted my book with a much better concept, though so far I’ve only written about half a chapter. Argh. I see that the vast majority of the work was done within the span of a single week, which is bad since that means I only worked on it for a week, but good since that means I can actually do a pretty good amount of work in only a week. I also did a lot of squabbling with tooling, which is hopefully mostly out of the way now.

My computer broke? That was an exciting week.


A lot of stuff, but the year as a whole still feels hit or miss. All the time I spent on veekun feels like a black void in the middle of the year, which seems like a good sign that I maybe don’t want to pour even more weeks into it in the near future.

Mostly, I want to do: more games, more art, more writing, more music.

I want to try out some tiny game making tools and make some tiny games with them — partly to get exposure to different things, partly to get more little ideas out into the world regularly, and partly to get more practice at letting myself have ideas. I have a couple tools in mind and I guess I’ll aim at a microgame every two months or so? I’d also like to finish the expanded fox flux by the end of the year, of course, though at the moment I can’t even gauge how long it might take.

I seriously lapsed on drawing last year, largely because fox flux pixel art took me so much time. So I want to draw more, and I want to get much faster at pixel art. It would probably help if I had a more concrete goal for drawing, so I might try to draw some short comics and write a little visual novel or something, which would also force me to aim for consistency.

I want to work on my book more, of course, but I also want to try my hand at a bit more fiction. I’ve had a blast writing dialogue for our games! I just shy away from longer-form writing for some reason — which seems ridiculous when a large part of my audience found me through my blog. I do think I’ve had some sort of breakthrough in the last month or two; I suddenly feel a good bit more confident about writing in general and figuring out what I want to say? One recent post I know I wrote in a single afternoon, which virtually never happens because I keep rewriting and rearranging stuff. Again, a visual novel would be a good excuse to practice writing fiction without getting too bogged down in details.

And, ah, music. I shy heavily away from music, since I have no idea what I’m doing, and also I seem to spend a lot of time fighting with tools. (Surprise.) I tried out SunVox for the first time just a few days ago and have been enjoying it quite a bit for making sound effects, so I might try it for music as well. And once again, visual novel background music is a pretty low-pressure thing to compose for. Hell, visual novels are small games, too, so that checks all the boxes. I guess I’ll go make a visual novel.

Here’s to twenty gayteen!

AWS Glue Now Supports Scala Scripts

Post Syndicated from Mehul Shah original https://aws.amazon.com/blogs/big-data/aws-glue-now-supports-scala-scripts/

We are excited to announce AWS Glue support for running ETL (extract, transform, and load) scripts in Scala. Scala lovers can rejoice because they now have one more powerful tool in their arsenal. Scala is the native language for Apache Spark, the underlying engine that AWS Glue offers for performing data transformations.

Beyond its elegant language features, writing Scala scripts for AWS Glue has two main advantages over writing scripts in Python. First, Scala is faster for custom transformations that do a lot of heavy lifting because there is no need to shovel data between Python and Apache Spark’s Scala runtime (that is, the Java virtual machine, or JVM). You can build your own transformations or invoke functions in third-party libraries. Second, it’s simpler to call functions in external Java class libraries from Scala because Scala is designed to be Java-compatible. It compiles to the same bytecode, and its data structures don’t need to be converted.

To illustrate these benefits, we walk through an example that analyzes a recent sample of the GitHub public timeline available from the GitHub archive. This site is an archive of public requests to the GitHub service, recording more than 35 event types ranging from commits and forks to issues and comments.

This post shows how to build an example Scala script that identifies highly negative issues in the timeline. It pulls out issue events in the timeline sample, analyzes their titles using the sentiment prediction functions from the Stanford CoreNLP libraries, and surfaces the most negative issues.

Getting started

Before we start writing scripts, we use AWS Glue crawlers to get a sense of the data—its structure and characteristics. We also set up a development endpoint and attach an Apache Zeppelin notebook, so we can interactively explore the data and author the script.

Crawl the data

The dataset used in this example was downloaded from the GitHub archive website into our sample dataset bucket in Amazon S3, and copied to the following locations:

s3://aws-glue-datasets-<region>/examples/scala-blog/githubarchive/data/

Choose the best folder by replacing <region> with the region that you’re working in, for example, us-east-1. Crawl this folder, and put the results into a database named githubarchive in the AWS Glue Data Catalog, as described in the AWS Glue Developer Guide. This folder contains 12 hours of the timeline from January 22, 2017, and is organized hierarchically (that is, partitioned) by year, month, and day.

When finished, use the AWS Glue console to navigate to the table named data in the githubarchive database. Notice that this data has eight top-level columns, which are common to each event type, and three partition columns that correspond to year, month, and day.

Choose the payload column, and you will notice that it has a complex schema—one that reflects the union of the payloads of event types that appear in the crawled data. Also note that the schema that crawlers generate is a subset of the true schema because they sample only a subset of the data.

Set up the library, development endpoint, and notebook

Next, you need to download and set up the libraries that estimate the sentiment in a snippet of text. The Stanford CoreNLP libraries contain a number of human language processing tools, including sentiment prediction.

Download the Stanford CoreNLP libraries. Unzip the .zip file, and you’ll see a directory full of jar files. For this example, the following jars are required:

  • stanford-corenlp-3.8.0.jar
  • stanford-corenlp-3.8.0-models.jar
  • ejml-0.23.jar

Upload these files to an Amazon S3 path that is accessible to AWS Glue so that it can load these libraries when needed. For this example, they are in s3://glue-sample-other/corenlp/.

Development endpoints are static Spark-based environments that can serve as the backend for data exploration. You can attach notebooks to these endpoints to interactively send commands and explore and analyze your data. These endpoints have the same configuration as that of AWS Glue’s job execution system. So, commands and scripts that work there also work the same when registered and run as jobs in AWS Glue.

To set up an endpoint and a Zeppelin notebook to work with that endpoint, follow the instructions in the AWS Glue Developer Guide. When you are creating an endpoint, be sure to specify the locations of the previously mentioned jars in the Dependent jars path as a comma-separated list. Otherwise, the libraries will not be loaded.

After you set up the notebook server, go to the Zeppelin notebook by choosing Dev Endpoints in the left navigation pane on the AWS Glue console. Choose the endpoint that you created. Next, choose the Notebook Server URL, which takes you to the Zeppelin server. Log in using the notebook user name and password that you specified when creating the notebook. Finally, create a new note to try out this example.

Each notebook is a collection of paragraphs, and each paragraph contains a sequence of commands and the output for that command. Moreover, each notebook includes a number of interpreters. If you set up the Zeppelin server using the console, the (Python-based) pyspark and (Scala-based) spark interpreters are already connected to your new development endpoint, with pyspark as the default. Therefore, you need to prepend %spark at the top of your paragraphs; after the first snippet below, we omit this prefix for brevity.

Working with the data

In this section, we use AWS Glue extensions to Spark to work with the dataset. We look at the actual schema of the data and filter out the interesting event types for our analysis.

Start with some boilerplate code to import libraries that you need:

%spark

import com.amazonaws.services.glue.DynamicRecord
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.amazonaws.services.glue.util.JsonOptions
import com.amazonaws.services.glue.types._
import org.apache.spark.SparkContext

Then, create the Spark and AWS Glue contexts needed for working with the data:

@transient val spark: SparkContext = SparkContext.getOrCreate()
val glueContext: GlueContext = new GlueContext(spark)

You need the @transient annotation on the SparkContext when working in Zeppelin; otherwise, you will run into a serialization error when executing commands.

Dynamic frames

This section shows how to create a dynamic frame that contains the GitHub records in the table that you crawled earlier. A dynamic frame is the basic data structure in AWS Glue scripts. It is like an Apache Spark data frame, except that it is designed and optimized for data cleaning and transformation workloads. A dynamic frame is well-suited for representing semi-structured datasets like the GitHub timeline.

A dynamic frame is a collection of dynamic records. In Spark lingo, it is an RDD (resilient distributed dataset) of DynamicRecords. A dynamic record is a self-describing record. Each record encodes its columns and types, so every record can have a schema that is unique from all others in the dynamic frame. This is convenient and often more efficient for datasets like the GitHub timeline, where payloads can vary drastically from one event type to another.

The following creates a dynamic frame, github_events, from your table:

val github_events = glueContext
                    .getCatalogSource(database = "githubarchive", tableName = "data")
                    .getDynamicFrame()

The getCatalogSource() method returns a DataSource, which represents a particular table in the Data Catalog. The getDynamicFrame() method returns a dynamic frame from the source.

Recall that the crawler created a schema from only a sample of the data. You can scan the entire dataset, count the rows, and print the complete schema as follows:

github_events.count
github_events.printSchema()

The data has 414,826 records. As before, notice that there are eight top-level columns and three partition columns. If you scroll through the schema output, you’ll also notice that the payload is the most complex column.

Run functions and filter records

This section describes how you can create your own functions and invoke them seamlessly to filter records. Unlike filtering with Python lambdas, Scala scripts do not need to convert records from one language representation to another, thereby reducing overhead and running much faster.

Let’s create a function that picks only the IssuesEvents from the GitHub timeline. These events are generated whenever someone posts an issue for a particular repository. Each GitHub event record has a field, “type”, that indicates the kind of event it is. The issueFilter() function returns true for records that are IssuesEvents.

def issueFilter(rec: DynamicRecord): Boolean = { 
    rec.getField("type").exists(_ == "IssuesEvent") 
}

Note that the getField() method returns an Option[Any] type, so you first need to check that it exists before checking the type.

You pass this function to the filter transformation, which applies the function on each record and returns a dynamic frame of those records that pass.

val issue_events =  github_events.filter(issueFilter)

Now, let’s look at the size and schema of issue_events.

issue_events.count
issue_events.printSchema()

It’s much smaller (14,063 records), and the payload schema is less complex, reflecting only the schema for issues. Keep a few essential columns for your analysis, and drop the rest using the ApplyMapping() transform:

val issue_titles = issue_events.applyMapping(Seq(("id", "string", "id", "string"),
                                                 ("actor.login", "string", "actor", "string"), 
                                                 ("repo.name", "string", "repo", "string"),
                                                 ("payload.action", "string", "action", "string"),
                                                 ("payload.issue.title", "string", "title", "string")))
issue_titles.show()

The ApplyMapping() transform is quite handy for renaming columns, casting types, and restructuring records. The preceding code snippet tells the transform to select the fields (or columns) that are enumerated in the left half of the tuples and map them to the fields and types in the right half.

Estimating sentiment using Stanford CoreNLP

To focus on the most pressing issues, you might want to isolate the records with the most negative sentiments. The Stanford CoreNLP libraries are Java-based and offer sentiment-prediction functions. Accessing these functions through Python is possible, but quite cumbersome. It requires creating Python surrogate classes and objects for those found on the Java side. Instead, with Scala support, you can use those classes and objects directly and invoke their methods. Let’s see how.

First, import the libraries needed for the analysis:

import java.util.Properties
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import scala.collection.convert.wrapAll._

The Stanford CoreNLP libraries have a main driver that orchestrates all of their analysis. The driver setup is heavyweight, setting up threads and data structures that are shared across analyses. Apache Spark runs on a cluster with a main driver process and a collection of backend executor processes that do most of the heavy sifting of the data.

The Stanford CoreNLP shared objects are not serializable, so they cannot be distributed easily across a cluster. Instead, you need to initialize them once for every backend executor process that might need them. Here is how to accomplish that:

val props = new Properties()
props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
props.setProperty("parse.maxlen", "70")

object myNLP {
    lazy val coreNLP = new StanfordCoreNLP(props)
}

The properties tell the libraries which annotators to execute and how many words to process. The preceding code creates an object, myNLP, with a field coreNLP that is lazily evaluated. This field is initialized only when it is needed, and only once. So, when the backend executors start processing the records, each executor initializes the driver for the Stanford CoreNLP libraries only one time.

Next is a function that estimates the sentiment of a text string. It first calls Stanford CoreNLP to annotate the text. Then, it pulls out the sentences and takes the average sentiment across all the sentences. The sentiment is a double, from 0.0 as the most negative to 4.0 as the most positive.

def estimatedSentiment(text: String): Double = {
    if ((text == null) || (!text.nonEmpty)) { return Double.NaN }
    val annotations = myNLP.coreNLP.process(text)
    val sentences = annotations.get(classOf[CoreAnnotations.SentencesAnnotation])
    sentences.foldLeft(0.0)( (csum, x) => { 
        csum + RNNCoreAnnotations.getPredictedClass(x.get(classOf[SentimentCoreAnnotations.SentimentAnnotatedTree])) 
    }) / sentences.length
}

Now, let’s estimate the sentiment of the issue titles and add that computed field as part of the records. You can accomplish this with the map() method on dynamic frames:

val issue_sentiments = issue_titles.map((rec: DynamicRecord) => { 
    val mbody = rec.getField("title")
    mbody match {
        case Some(mval: String) => { 
            rec.addField("sentiment", ScalarNode(estimatedSentiment(mval)))
            rec }
        case _ => rec
    }
})

The map() method applies the user-provided function on every record. The function takes a DynamicRecord as an argument and returns a DynamicRecord. The code above computes the sentiment, adds it to the record as a top-level field named sentiment, and returns the record.

Count the records with sentiment and show the schema. This takes a few minutes because Spark must initialize the library and run the sentiment analysis, which can be involved.

issue_sentiments.count
issue_sentiments.printSchema()

Notice that all records were processed (14,063), and the sentiment value was added to the schema.

Finally, let’s pick out the titles that have the lowest sentiment (less than 1.5). Count them and print out a sample to see what some of the titles look like.

val pressing_issues = issue_sentiments.filter(_.getField("sentiment").exists(_.asInstanceOf[Double] < 1.5))
pressing_issues.count
pressing_issues.show(10)

Next, write them all to a file so that you can handle them later. (You’ll need to replace the output path with your own.)

glueContext.getSinkWithFormat(connectionType = "s3", 
                              options = JsonOptions("""{"path": "s3://<bucket>/out/path/"}"""), 
                              format = "json")
            .writeDynamicFrame(pressing_issues)

Take a look in the output path, and you can see the output files.

Putting it all together

Now, let’s create a job from the preceding interactive session. The following script combines all the commands from earlier. It processes the GitHub archive files and writes out the highly negative issues:

import com.amazonaws.services.glue.DynamicRecord
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.amazonaws.services.glue.util.JsonOptions
import com.amazonaws.services.glue.types._
import org.apache.spark.SparkContext
import java.util.Properties
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import scala.collection.convert.wrapAll._

object GlueApp {

    object myNLP {
        val props = new Properties()
        props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
        props.setProperty("parse.maxlen", "70")

        lazy val coreNLP = new StanfordCoreNLP(props)
    }

    def estimatedSentiment(text: String): Double = {
        if ((text == null) || (!text.nonEmpty)) { return Double.NaN }
        val annotations = myNLP.coreNLP.process(text)
        val sentences = annotations.get(classOf[CoreAnnotations.SentencesAnnotation])
        sentences.foldLeft(0.0)( (csum, x) => { 
            csum + RNNCoreAnnotations.getPredictedClass(x.get(classOf[SentimentCoreAnnotations.SentimentAnnotatedTree])) 
        }) / sentences.length
    }

    def main(sysArgs: Array[String]) {
        val spark: SparkContext = SparkContext.getOrCreate()
        val glueContext: GlueContext = new GlueContext(spark)

        val dbname = "githubarchive"
        val tblname = "data"
        val outpath = "s3://<bucket>/out/path/"

        val github_events = glueContext
                            .getCatalogSource(database = dbname, tableName = tblname)
                            .getDynamicFrame()

        val issue_events =  github_events.filter((rec: DynamicRecord) => {
            rec.getField("type").exists(_ == "IssuesEvent")
        })

        val issue_titles = issue_events.applyMapping(Seq(("id", "string", "id", "string"),
                                                         ("actor.login", "string", "actor", "string"), 
                                                         ("repo.name", "string", "repo", "string"),
                                                         ("payload.action", "string", "action", "string"),
                                                         ("payload.issue.title", "string", "title", "string")))

        val issue_sentiments = issue_titles.map((rec: DynamicRecord) => { 
            val mbody = rec.getField("title")
            mbody match {
                case Some(mval: String) => { 
                    rec.addField("sentiment", ScalarNode(estimatedSentiment(mval)))
                    rec }
                case _ => rec
            }
        })

        val pressing_issues = issue_sentiments.filter(_.getField("sentiment").exists(_.asInstanceOf[Double] < 1.5))

        glueContext.getSinkWithFormat(connectionType = "s3", 
                              options = JsonOptions(s"""{"path": "$outpath"}"""), 
                              format = "json")
                    .writeDynamicFrame(pressing_issues)
    }
}

Notice that the script is enclosed in a top-level object called GlueApp, which serves as the script’s entry point for the job. (You’ll need to replace the output path with your own.) Upload the script to an Amazon S3 location so that AWS Glue can load it when needed.

To create the job, open the AWS Glue console. Choose Jobs in the left navigation pane, and then choose Add job. Create a name for the job, and specify a role with permissions to access the data. Choose An existing script that you provide, and choose Scala as the language.

For the Scala class name, type GlueApp to indicate the script’s entry point. Specify the Amazon S3 location of the script.

Choose Script libraries and job parameters. In the Dependent jars path field, enter the Amazon S3 locations of the Stanford CoreNLP libraries from earlier as a comma-separated list (without spaces). Then choose Next.

No connections are needed for this job, so choose Next again. Review the job properties, and choose Finish. Finally, choose Run job to execute the job.

You can simply edit the script’s input table and output path to run this job on whatever GitHub timeline datasets you might have.
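Alternatively, you could read these values from job parameters instead of hard-coding them, using the GlueArgParser class that the script already imports. The following is only a minimal, hypothetical sketch: the --dbname, --tblname, and --outpath argument names are assumptions for illustration and are not part of the original walkthrough.

def main(sysArgs: Array[String]) {
    // Resolve hypothetical job arguments instead of hard-coding the values.
    val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("dbname", "tblname", "outpath").toArray)
    val dbname = args("dbname")
    val tblname = args("tblname")
    val outpath = args("outpath")
    // ... the rest of the job proceeds exactly as shown above ...
}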

Conclusion

In this post, we showed how to write AWS Glue ETL scripts in Scala via notebooks and how to run them as jobs. Scala has the advantage that it is the native language for the Spark runtime. With Scala, it is easier to call Scala or Java functions and third-party libraries for analyses. Moreover, data processing is faster in Scala because there’s no need to convert records from one language runtime to another.

You can find more examples of Scala scripts in our GitHub examples repository: https://github.com/awslabs/aws-glue-samples. We encourage you to experiment with Scala scripts and let us know about any interesting ETL flows that you want to share.

Happy Glue-ing!

Additional Reading

If you found this post useful, be sure to check out Simplify Querying Nested JSON with the AWS Glue Relationalize Transform and Genomic Analysis with Hail on Amazon EMR and Amazon Athena.

About the Authors

Mehul Shah is a senior software manager for AWS Glue. His passion is leveraging the cloud to build smarter, more efficient, and easier to use data systems. He has three girls, and, therefore, he has no spare time.

Ben Sowell is a software development engineer at AWS Glue.

Vinay Vivili is a software development engineer for AWS Glue.

Netflix, Amazon and Hollywood Sue Kodi-Powered Dragon Box Over Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-amazon-and-hollywood-sue-kodi-powered-dragon-box-over-piracy-180111/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, sellers who ship devices with unauthorized add-ons give it a bad reputation.

In recent months these boxes have become the prime target for copyright enforcers, including the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

After suing Tickbox last year, a group of key ACE members has now filed a similar lawsuit against Dragon Media Inc, which sells the popular Dragon Box. The complaint, filed at a California federal court, also lists the company’s owner Paul Christoforo and reseller Jeff Williams among the defendants.

According to ACE, these types of devices are nothing more than pirate tools, allowing buyers to stream copyright-infringing content. That also applies to Dragon Box, they inform the court.

“Defendants market and sell ‘Dragon Box,’ a computer hardware device that Defendants urge their customers to use as a tool for the mass infringement of the copyrighted motion pictures and television shows,” the complaint, picked up by HWR, reads.

The movie companies note that the defendants distribute and promote the Dragon Box as a pirate tool, using phrases such as “Watch your Favourites Anytime For FREE” and “stop paying for Netflix and Hulu.”

Dragon Box

When users follow the instructions Dragon provides they get free access to copyrighted movies, TV-shows and live content, ACE alleges. The complaint further points out that the device uses the open source Kodi player paired with pirate addons.

“The Dragon Media application provides Defendants’ customers with a customized configuration of the Kodi media player and a curated selection of the most popular addons for accessing infringing content,” the movie companies write.

“These addons are designed and maintained for the overarching purpose of scouring the Internet for illegal sources of copyrighted content and returning links to that content. When Dragon Box customers click those links, those customers receive unauthorized streams of popular motion pictures and television shows.”

One of the addons that are included with the download and installation of the Dragon software is Covenant.

This addon can be accessed through a preinstalled shortcut which is linked under the “Videos” menu. Users are then able to browse through a large library of curated content, including a separate category of movies that are still in theaters.

In theaters

According to a statement from Dragon owner Christoforo, business is going well. The company claims to have “over 250,000 customers in 50 states and 4 countries and growing” as well as “374 sellers” across the world.

With this lawsuit, however, the company’s future has suddenly become uncertain.

The movie companies ask the California District Court for an injunction to shut down the infringing service and impound all Dragon Box devices. In addition, they’re requesting statutory damages, which can go up to several million dollars.

At the time of writing, the Dragon Box website is still online and the company has yet to comment on the allegations.

A copy of the complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

TVAddons and ZemTV Ask Court to Dismiss U.S. Piracy Lawsuit

Post Syndicated from Ernesto original https://torrentfreak.com/tvaddons-and-zemtv-ask-court-to-dismiss-u-s-piracy-lawsuit-180108/

Last year, American satellite and broadcast provider Dish Network targeted two well-known players in the third-party Kodi add-on ecosystem.

In a complaint filed in a federal court in Texas, add-on ZemTV and the TVAddons library were accused of copyright infringement. As a result, both are facing up to $150,000 in damages for each offense.

While the case was filed in Texas, neither of the defendants live there, or even in the United States. The owner and operator of TVAddons is Adam Lackman, who resides in Montreal, Canada. ZemTV’s developer Shahjahan Durrani is even further away in London, UK.

Their limited connection to Texas is reason for the case to be dismissed, according to the legal team of the two defendants. They are represented by attorneys Erin Russel and Jason Sweet, who asked the Court to drop the case late last week.

According to their motion, the Texas District Court does not have jurisdiction over the two defendants.

“Lackman and Durrani have never been residents or citizens of Texas; they have never owned property in Texas; they have never voted in Texas; they have never personally visited Texas; they have never directed any business activity of any kind to anyone in Texas […] and they have never earned income in Texas,” the motion reads.

Technically, defendants can be sued in a district they have never been to, as long as they “directed actions” at the state or its citizens.

According to Dish, this is the case here since both defendants made their services available to local residents, among other things. However, the defense team argues that’s not enough to establish jurisdiction in this case.

“Plaintiff’s conclusory allegation that Lackman and Durrani marketed, made available, and distributed ZemTV service and the ZemTV add-on to consumers in the State of Texas and the Southern District of Texas is misleading at best,” the attorneys write.

If the case proceeds, this would go against the US Constitution, violating the defendants’ due process rights. Whether the infringement claims hold ground or not, Dish has no right to sue, according to the defense.

“Defendants are citizens of Canada and Great Britain and have not had sufficient contacts in the State of Texas for this Court to exercise personal jurisdiction over them. To do so would violate the Due Process Clause of the United States Constitution.”

The Court must now decide whether the case can proceed or not. TorrentFreak reached out to TVAddons but the service wishes to refrain from commenting on the proceeding at the moment.

Previously, TVAddons made it clear that it sees the Dish lawsuit as an attempt to destroy the Kodi addon community. One of the methods of attack it mentioned was to sue people in foreign jurisdictions.

“Most people don’t have money lying around to hire lawyers in places they’ve never even visited. This means that if a company sues you in a foreign country and you can’t afford a lawyer, you’re screwed even if you did nothing wrong,” TVAddons wrote at the time.

A copy of the motion to dismiss is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Blockchain Startup White Rabbit Calls on Pirate Sites to Do Business, Legally

Post Syndicated from Andy original https://torrentfreak.com/blockchain-startup-white-rabbit-calls-on-pirate-sites-to-do-business-legally-180102/

For as long as piracy has been mainstream, people have tried to find ways to monetize the system. While many have had good intentions, only models focusing on the negative (copyright trolling, for example) have enjoyed any level of success.

Blockchain startup White Rabbit is hoping to buck that trend but it’s not going to be easy. Then again, nothing worthwhile is, so what do they have to offer?

White Rabbit begins with the assumption that while pirates love their pirate sites, as many as 60% of them would happily reward creators if it was made easy enough. The startup deals with this by inviting pirates to carry on using the kinds of unauthorized sites and services they’re using already, but with a twist.

By installing the White Rabbit browser plug-in, the company will be able to see what content the user is accessing. It will then attempt to match that download to deals it’s made with the companies behind those movies or TV shows. They’ll then get paid a set amount.

“White Rabbit is a content ecosystem accessed through a plugin that recognizes the film and series you stream. The streaming sites are P2P or open server, meaning users can choose where they want to stream,” White Rabbit CEO Alan R. Milligan informs TF.

“We already have a library of films that have won and been nominated for Oscars, Cannes, Berlin and Venice film festival best film prizes – but will continue adding more films and series as we near launch.”

It’s envisioned that this mechanism will prove popular with reluctant pirates since instead of paying Netflix, Amazon, and dozens of other services, users can pay for content through one channel. And, since White Rabbit uses blockchain technology, rights holders can be ensured complete financial transparency, with user payments going straight to them without delay, cutting out the middleman.

“Users are anonymous but can offer filmmakers, artists or other content right holders (investors, distributors, sales agents) our tokens (WRT) as good faith that they are willing to pay for the content. Should the rights holders accept, we enter into a contract with the rights holder that allows them to receive revenue – and accept P2P streaming. We find, and research shows, that most people that are forced to piracy [do so] because they are just not able to access content,” Milligan adds.

White Rabbit’s CEO, who is a filmmaker himself, also sees opportunities to bring fans and filmmakers closer together. Once users have paid for content, they continue to get access via something called the Rabbit Hole, an interface which provides extras that are normally found on a DVD, such as deleted scenes etc.

The team behind White Rabbit describe themselves as “responsible rebels” hoping to spark a revolution. While that’s clearly the goal, by any measure there is a mountain to climb, not least on the content front.

When TorrentFreak first started speaking with the startup in October last year, we were told they were “closing in on 500 films” with contracts, although they wouldn’t elaborate on who might be on board. Nevertheless, that is quite a lot of movies, especially given the mainstream studios’ hatred of pirate sites and anything they might be involved in.

However, subsequent discussion suggests that those with more niche tastes might be White Rabbit’s initial target audience.

“I believe timing is of big relevance and right now a lot of producers are scared of where they’re going to go now that Netflix is enforcing its 50/50 policy. There are also so many amazing films out there that get no or little digital distribution at all,” Milligan says.

“As a Norwegian film producer there is little chance of the film being streamed in my home country – even if we won awards in Cannes and Venice. My latest film Valley of Shadows got US digital distribution, but in Norway – nada.

“My colleagues around the world are suffering the same way, not to mention all the fans who can’t watch local films and series. So the indie part of the industry – which is most of us (and still representing 20-30% of cinema sales) – are very ready for change.”

But while indie producers could benefit nicely from White Rabbit, Milligan highlights problems that the big studios have, and suggests that they might like to see the startup succeed too.

“The studios will likely want to see our business model work – but they also have a problem with Netflix which has become a studio. So they’re competitors now, but Netflix has a 100M subscriber advantage. Will they all break out and create each their streaming site for their content only? That would be terrible for fans,” he notes.

That would indeed be a huge problem and it’s an issue we’ve raised here on TF on several occasions. However, if White Rabbit is to succeed, it needs to overcome significant hurdles. We raised just a handful of these with its CEO. First up, Partner Streaming Sites (PSS).

PSS sites appear to be pirate sites that will partner with White Rabbit, so the latter can tap into the former’s userbases. When White Rabbit users stream ‘pirate’ content from a PSS, that content will be monetized, with the creator getting paid quickly and transparently. At that point, it seems, the content will become non-infringing.

But while that sounds intriguing in theory, plenty of questions remain. White Rabbit says it will share “up to $1M” from its token sale “with the most innovative, brand conscious, film and series loving streaming sites either already out there, planned or about to launch.”

The start-up says the best projects could get $100,000 each but, since its goal is to convert pirates, that necessarily means doing business with pirate sites.

So we asked; how will it be possible to do business with people that are regularly described as criminals? How will it then become possible to secure deals with filmmakers that will undoubtedly come under huge pressure from industry players not to participate in the White Rabbit scheme?

“What we are trying to do is to change digital distribution to everyone’s benefit. We have no interest in financing illegal content, we are interested in spurring innovation in streaming, access for fans and due payment for the rights holders,” Milligan explains.

“That’s what PSS can help us achieve using the WRT (White Rabbit Token) – that helps us find out who wants to be part of this model. No revenue exchanges hands until rights holders accept the token. What is important for rights holders is that we generate more revenue for them than current business models, and we haven’t even included the Rabbit Hole revenue yet.”

So what happens if a White Rabbit user tries to stream something that isn’t part of the program? According to Milligan, PSS sites must remove the content and let White Rabbit users know they must get the content legally elsewhere.

Clearly, the vast majority of pirate site users aren’t White Rabbit users now, nor will they be so in the future, so the removal of content is massively counter-productive for pirate sites. Indeed, it’s this reluctance to take down infringing content that causes them most of their problems.

So, hypothetically, what happens when the operators of streaming site X (that previously partnered with White Rabbit) get arrested and their site shut down for distributing Hollywood content that isn’t part of the program?

“PSS’s would never distribute illegal content, we are offering an opportunity to monetize. We are allowing a platform to those that see monetized P2P as beneficial to their income stream,” Milligan says.

“Hollywood is tricky though, I admit. The proof is in the pudding, so if we have to prove the value through indie and arthouse films first that’s OK. That is still 30% of the multi-billion dollar film market, so we are OK to start with that.”

The final issue is the price and where revenue goes. White Rabbit envisions a user paying $2 for film and $1 for a TV show, although producers are free to set their own price. That means 11 TV shows or five movies per month, given the Netflix model/budget of roughly $11.00 for the same period.

Revenue generated would then be split, with 75% going to the rightsholders, 15% to White Rabbit, and 10% to PSS sites. There’s also a provision for non-PSS sites to be a part of the program, but they would only get 5%, with the remaining 5% going to White Rabbit.

With an incredibly ambitious project like this, it’s easy to find reasons why it might not succeed or even fail to get off the ground. But the team behind the operation have lots of experience in relevant fields and from what we’ve seen are putting considerable effort into getting things moving, as their white paper (pdf) explains.

Currently, White Rabbit is seeking conversation with prospective Partner Streaming Sites, who will provide the content on which White Rabbit will survive. It will certainly be interesting to see which sites put themselves forward for consideration.

This is one of those projects that raises a dizzying volume of questions, with each living up to their billing as part of the Rabbit Hole. The big question is whether the Rabbit Hole will eventually lead to Wonderland or will render everyone who ventures inside feeling surreal and disorientated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Is Your Kodi Setup Being Spied On?

Post Syndicated from Andy original https://torrentfreak.com/is-your-kodi-setup-being-spied-on-180101/

As quite possibly the most popular media player on earth, Kodi is installed on millions of machines – around 38 million according to the MPAA. The software has a seriously impressive range of features but one, if not configured properly, raises security issues for Kodi users.

For many years, Kodi has had a remote control feature, whereby the software can be remotely managed via a web interface.

This means that you’re able to control your Kodi setup installed on a computer or set-top box using a convenient browser-based interface on another device, from the same room or indeed anywhere in the world. Earlier versions of the web interface look like the one in the image below.

The old Kodi web-interface – functional but basic

But while this is a great feature, people don’t always password-protect the web-interface, meaning that outsiders can access their Kodi setups if they have the owner’s IP address and a web-browser. In fact, the image shown above is from a UK Kodi user’s setup that was found in seconds using a specialist search engine.

While the old web-interface for Kodi was basically a remote control, things got more interesting in late 2016 when the much more functional Chorus2 interface was included in Kodi by default. It’s shown in the image below.

Chorus 2 Kodi Web-Interface

Again, the screenshot above was taken from the setup of a Kodi user whose setup was directly open to the Internet. In every way the web-interface of Kodi acts as a web page, allowing anyone with the user’s IP address (with :8080 appended to the end) to access the user’s setup. It’s no different than accessing Google with an IP address (216.58.216.142), instead of Google.com.

However, Chorus 2 is much more comprehensive than its predecessors, which means that it’s possible for outsiders to browse potentially sensitive items, including a user’s addons, if a password hasn’t been enabled in the appropriate section in Kodi.

Kodi users probably don’t want this seen in public

While browsing someone’s addons isn’t the most engaging thing in the world, things get decidedly spicier when one learns that the Chorus 2 interface allows both authorized and unauthorized users to go much further.

For example, it’s possible to change Kodi’s system settings from the interface, including mischievous things such as disabling keyboards and mice. As seen (or not seen) in the redacted section in the image below, it can also give away system usernames, for example.

Access to Kodi settings – and more

But aside from screwing with people’s settings (which is both pointless and malicious), the Chorus 2 interface has a trick up its sleeve. If people’s Kodi setups contain video or music files (which is what Kodi was originally designed for), in many cases it’s possible to play these over the web interface.

In basic terms, someone with your IP address can view the contents of your video library on the other side of the world, with just a couple of clicks.

The image below shows that a Kodi setup has been granted access to some kind of storage (network or local disk, for example) and it can be browsed, revealing movies. (To protect the user, redactions have been made to remove home video titles, network, and drive names)

Network storage accessed via Chorus 2

The big question is, however, whether someone accessing a Kodi setup remotely can view these videos via a web browser. Answer: Absolutely.

Clicking through on each piece of media reveals a button to the right of its title. Clicking that reveals two options – ‘Queue in Kodi’ (to play on the installation itself) or ‘Download’, which plays/stores the content via a remote browser located anywhere in the world. Chrome works like a charm.

Queue to Kodi or watch remotely in a browser

While this is ‘fun’ and potentially useful for outsiders looking for content, it’s not great if it’s your system that’s open to the world. The good news is that something can be done about it.

In their description of Chorus 2, the Kodi team explain all of the interface’s benefits, but it appears many people don’t take their advice to introduce a new password. The default password and username are both ‘kodi’, which is terrible for security if people leave things the way they are.

If you run Kodi, now is probably the time to fix the settings, disable the web interface if you don’t use it, or enable stronger password protection if you do.

Change that password – now

Just recently, Kodi addon repository TVAddons issued a warning to people using jailbroken Apple TV 2 devices. That too was a default password issue and one that can be solved relatively easily.

“People need to realize that their Kodi boxes are actually mini computers and need to be treated as such,” a TVAddons spokesperson told TF.

“When you install a build, or follow a guide from an unreputable source, you’re opening yourself up to potential risk. Since Kodi boxes aren’t normally used to handle sensitive data, people seem to disregard the potential risks that are posed to their network.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

The "Extended Random" Feature in the BSAFE Crypto Library

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/the_extended_ra.html

Matthew Green wrote a fascinating blog post about the NSA’s efforts to increase the amount of random data exposed in the TLS protocol, and how it interacts with the NSA’s backdoor into the Dual_EC_DRBG random number generator to weaken TLS.

How to Encrypt Amazon S3 Objects with the AWS SDK for Ruby

Post Syndicated from Doug Schwartz original https://aws.amazon.com/blogs/security/how-to-encrypt-amazon-s3-objects-with-the-aws-sdk-for-ruby/


Recently, Amazon announced some new Amazon S3 encryption and security features. The AWS Blog post showed how to use the Amazon S3 console to take advantage of these new features. However, if you have a large number of Amazon S3 buckets, using the console to implement these features could take hours, if not days. As an alternative, I created documentation topics in the AWS SDK for Ruby Developer Guide that include code examples showing how to use the new Amazon S3 encryption features with the AWS SDK for Ruby.

What are my encryption options?

You can encrypt Amazon S3 bucket objects on a server or on a client:

  • When you encrypt objects on a server, you request that Amazon S3 encrypt the objects before saving them to disk in data centers and decrypt the objects when you download them. The main advantage of this approach is that Amazon S3 manages the entire encryption process.
  • When you encrypt objects on a client, you encrypt the objects before you upload them to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools. Use this option when:
    • Company policy and standards require it.
    • You already have a development process in place that meets your needs.

    Encrypting on the client has always been available, but you should know the following points:

    • You must be diligent about protecting your encryption keys, which is analogous to having a burglar-proof lock on your front door. If you leave a key under the mat, your security is compromised.
    • If you lose your encryption keys, you won’t be able to decrypt your data.

    If you encrypt objects on the client, we strongly recommend that you use an AWS Key Management Service (AWS KMS) managed customer master key (CMK).

How to use encryption on a server

You can specify that Amazon S3 automatically encrypts objects as you upload them to a bucket, or require that objects uploaded to an Amazon S3 bucket include server-side encryption before they are accepted.

The advantage of these settings is that when you specify them, you ensure that objects uploaded to Amazon S3 are encrypted. Alternatively, you can have Amazon S3 encrypt individual objects on the server as you upload them to a bucket, or encrypt them on the server with your own key as you upload them.
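Although this post covers the AWS SDK for Ruby, the same server-side encryption request can be made from any SDK. The snippet below is only a rough, hypothetical sketch, written in Scala against the AWS SDK for Java rather than the Ruby SDK described here, with made-up bucket, key, and file names; it simply shows the idea of asking Amazon S3 to apply SSE-S3 (AES-256) encryption as an object is uploaded.

import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.{ObjectMetadata, PutObjectRequest}
import java.io.File

object SseUploadSketch {
    def main(args: Array[String]): Unit = {
        val s3 = AmazonS3ClientBuilder.defaultClient()
        val metadata = new ObjectMetadata()
        // Ask Amazon S3 to encrypt this object on the server with SSE-S3 (AES-256).
        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
        // Hypothetical bucket, key, and local file.
        val request = new PutObjectRequest("my-example-bucket", "report.txt", new File("report.txt"))
            .withMetadata(metadata)
        s3.putObject(request)
    }
}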

The AWS SDK for Ruby Developer Guide now contains topics that explain your encryption options on a server.

How to use encryption on a client

You can encrypt objects on a client before you upload them to a bucket and decrypt them after you download them from a bucket by using the Amazon S3 encryption client.

The AWS SDK for Ruby Developer Guide now contains topics that explain your encryption options on the client.

Note: The Amazon S3 encryption client in the AWS SDK for Ruby is compatible with other Amazon S3 encryption clients, but it is not compatible with other AWS client-side encryption libraries, including the AWS Encryption SDK and the Amazon DynamoDB encryption client for Java. Each library returns a different ciphertext (“encrypted message”) format, so you can’t use one library to encrypt objects and a different library to decrypt them. For more information, see Protecting Data Using Client-Side Encryption.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about encrypting objects on servers and clients, start a new thread on the Amazon S3 forum or contact AWS Support.

– Doug

OWASP Dependency Check Maven Plugin – a Must-Have

Post Syndicated from Bozho original https://techblog.bozho.net/owasp-dependency-check-maven-plugin-must/

I have to admit with a high degree of shame that I didn’t know about the OWASP dependency check maven plugin. And it seems to have been around since 2013. And apparently a thousand projects on GitHub are using it already.

In the past I’ve gone manually through dependencies to check them against vulnerability databases, or in many cases I was just blissfully ignorant about any vulnerabilities that my dependencies had.

The purpose of this post is just that – to recommend the OWASP dependency check maven plugin as a must-have in practically every maven project. (There are dependency-check tools for other build systems as well).

When you add the plugin it generates a report. Initially you can go and manually upgrade the problematic dependencies (I upgraded two of those in my current project), or suppress the false positives (e.g. the cassandra library is marked as vulnerable, whereas the actual vulnerability is that Cassandra binds an unauthenticated RMI endpoint, which I’ve addressed via my stack setup, so the library isn’t an issue).

Then you can configure a threshold for vulnerabilities and fail the build if new ones appear – either by you adding a vulnerable dependency, or in case a vulnerability is discovered in an existing dependency.

All of that is shown in the examples page and is pretty straightforward. I’d suggest adding the plugin immediately; it’s a must-have:

<plugin>
	<groupId>org.owasp</groupId>
	<artifactId>dependency-check-maven</artifactId>
	<version>3.0.2</version>
	<executions>
		<execution>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
	</executions>
</plugin>
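
With the execution above in place, the check typically runs as part of the regular build (for example, mvn verify); you can also invoke the goal directly. A quick sketch (the report path is the plugin’s usual default):

# run the vulnerability check on its own
mvn org.owasp:dependency-check-maven:check

# the HTML report typically lands in the build directory
ls target/dependency-check-report.html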

Now, checking dependencies for vulnerabilities is just one small aspect of having your software secure and it shouldn’t give you a false sense of security (a sort-of “I have my dependencies checked, therefore my system is secure” fallacy). But it’s an important aspect. And having that check automated is a huge gain.

The post OWASP Dependency Check Maven Plugin – a Must-Have appeared first on Bozho's tech blog.

A Christmas Carol – When Piracy Became Irrelevant

Post Syndicated from Ernesto original https://torrentfreak.com/a-christmas-carol-when-piracy-became-irrelevant-171225/

This is the story of Walter Scroogle, CEO of one of the major movie studios in Hollywood.

It begins on Christmas Eve, with Scroogle arriving home late at night. This wasn’t anything unusual, not even on a day like this. The rest of the family didn’t expect anything else and were already asleep in anticipation of the forthcoming festivities.

As Scroogle sits down he suddenly notices a pile of Amazon store boxes in the corner of the room. “Thank God,” he mumbles, “Christmas is saved.” But it was a close call.

It had been such a hectic week at work. So busy that Scroogle had totally forgotten to get this year’s Christmas presents, one of the few things he arranges around the house. Luckily, modern day technology was there to save the day.

Amazon had everything in stock. The new bike for his son. The tablet for his daughter. The earrings for his wife, as well as many more gifts. It took Scroogle roughly fifteen minutes to get all his Christmas shopping done. And with same-day delivery, it arrived a few hours later. “Thank God,” he said again.

While a major disaster had been avoided, there were still a few presents to wrap. Apparently, there are no machines that can carefully wrap bikes. So there he was, at 2am on Christmas morning, grumpily making the final preparations for the big day.

Halfway through, Scroogle decided to lie on the couch for a bit. Just a short while, he promised himself, not knowing what was to come.

After a minute or two an exhausted Scroogle dozed off into his first dream.

Inspired by what had occurred earlier that day, he dreamt about forgetting the Christmas presents. While he appeared to be his current self, the world was different. There were no Internet stores or even large malls. He had to drive to at least ten different stores to get what he wanted, and given that it was Sunday, most were closed.

Needless to say, it was a disaster.

The second dream was related to something at work. A few hours earlier he’d hosted a board meeting, discussing the new streaming platform the company had launched that morning. Scroogle was one of the pioneers. He was also the one who hastily decided to pull all their content from third-party platforms, Netflix included. “Everything will be pulled tonight. We want to be exclusive again,” he’d said.

The dream presented Scroogle with the darker side of his decision. It showed several families who, on Christmas Eve, noticed that their favorite movies were no longer available. Christmas classics were removed without warning, forcing these families to take out another subscription. If the new service was even available in their region.

“Not the best way to spend time on Christmas,” Scroogle thought.

Scroogle was still fast asleep when the third and final dream kicked in. It was the future, apparently, as commercials for the streaming service and next year’s Christmas blockbuster were being shown on a TV. There was a Christmas tree and a family clearly amusing itself. Unfortunately for Scroogle, things didn’t turn out to be so bright.

After the commercial, the latest blockbuster played, but it was not on the company’s streaming platform. Instead, it was playing through what appeared to be some kind of piracy streaming device. “They are stealing our stuff,” he thought. “And on Christmas Day too! These people are cheapskates.”

Then, a deep-sounding voiceover became apparent.

“No, Scroogle, the family actually pays for seven streaming services already. However, their budgets are not endless. The increasing fragmentation in the current legal streaming landscape is pushing people toward these devices. If only they could get it all from under one roof.”

Then Scroogle woke up. At first, he hadn’t really processed what had happened. It was nearly 4am and he still had a few gifts to wrap, those that were so conveniently ordered a few hours earlier and delivered to his house, from one supplier. But, when it sank in, he knew that there was one thing he had to do before the family woke up.

Scroogle wrote an email to the board admitting that he was wrong. “We have to keep all our content on other services and think this through after the holidays. It’s inclusive from now on, not exclusive. People should be able to see our great films without hassle,” he typed.

Sent.

A year later the new Christmas blockbuster was indeed released. It came out together with a brand new streaming platform, one where users could watch a massive library of all the best classics from the major studios, independents, and other services under one roof. Even the latest movies could be easily played on demand for a small one-time fee.

Streaming records were broken that day, and people were smiling.

Merry Christmas everyone!

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Deezer Tries to Shut Down ‘Hacked’ Pirate Versions

Post Syndicated from Ernesto original https://torrentfreak.com/deezer-tries-to-shut-down-hacked-pirate-versions-171223/

Nowadays there are dozens of ways for people to pirate free music. Torrent sites, direct downloading portals, stream ripping, you name it.

While the music industry tries to crack down on these unauthorized services, there are also plenty of problems close to home.

Legitimate streaming platforms such as Spotify and Tidal have been used as sources for ripped music, and the same is true for the French streaming giant Deezer.

Through various applications, the public can freely access and download the entire Deezer library, completely hassle-free.

Take Deezloader, for example, which makes it surprisingly easy to grab high-quality tracks, complete with proper titles and tags. Want to download a full album in one click? No problem. A custom playlist of dozens of songs? Done.

Deezloader

Deezer is obviously not happy with these applications. Through DMCA notices the company does its best to take them down. This week it sent a notice to the developer platform GitHub, targeting several of these tools.

“The following projects, in the paragraph below, make available a hacked version of our Deezer application or a method to unlawfully download the music catalogue of Deezer, in total violation of our rights and of the rights of our music licensors,” Deezer wrote.

“..therefore ask that you immediately take down the projects corresponding to the URLs below and all of the related forks by others members who have had access or even contributed to such projects.”

GitHub was quick to respond and removed access to (forks of) applications such as Deezloader, DeezerDownload, Deeze, Deezerio, Deezit, and Deedown. Instead, users who try to access these repositories now see the following notice.

Deezgone?

While the DMCA notice helps to make these projects unavailable, at least on GitHub, the applications still work. They’re also widely available through other sites and forums.

These tools have been around for a while and, despite Deezer’s most recent efforts, the music’s still playing.

Deezer refers to the pirate applications as “hacked” versions and appears to be unable to block them from accessing its own servers. That’s a worrying prospect for the company.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

[$] An introduction to the BPF Compiler Collection

Post Syndicated from corbet original https://lwn.net/Articles/742082/rss

In the previous article of this series, I discussed how to use eBPF to safely run code supplied by user space inside of the kernel. Yet one of eBPF’s biggest challenges for newcomers is that writing programs requires compiling and linking to the eBPF library from the kernel source. Kernel developers might always have a copy of the kernel source within reach, but that’s not so for engineers working on production or customer machines.

“Pirate” Streaming Service Sued by “Legal” Competitor

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-streaming-service-sued-by-legal-competitor-171222/

In recent years there has been a boom in video streaming services, some operating with proper licenses and others without.

In a few cases, the line between legal and illegal is hard to spot for the public. When the latest Hollywood blockbusters are available for free it’s quite clear, but there are also slick-looking paid subscription services that operate without proper licenses.

The latter is what eTVnet is accused of. The streaming service targets Russian speakers in the United States and is accessible via the web or streaming boxes such as Roku. However, it does so without proper licenses, a complaint filed at a Massachusetts federal court alleges.

“On information and belief, the eTVnet Conspirators have illegally copied thousands of movies and are distributing them to paying customers illegally,” the complaint reads.

“By populating their streaming video service with stolen, illegally copied, and infringing copyrighted content, the eTVnet Conspirators have unlawfully and unfairly gained an advantage over their competitors, including Plaintiff.”

While this reads like a typical copyright infringement lawsuit, it isn’t. The complaint was filed by the Scottish company Alamite Ventures, which operates TUA.tv, a competing streaming service in the US.

The company filed suit against eTVnet, which is incorporated in Canada, as well as two owners and operators of the streaming service. They stand accused of civil conspiracy, unfair business practices, and false or misleading representations of fact under the Lanham Act.

“The eTVnet Conspirators deceptively market the eTVnet Service as a legal and fully licensed service,” Alamite Ventures notes.

ETVNET

There obviously can’t be a claim for copyright infringement damages, since Alamite is not a copyright holder, but the complaint does mention that major US companies such as HBO, Disney and Netflix are harmed as well.

“Given the staggering amount of copyright infringement committed by the eTVnet Conspirators, the damage to United States-based copyright owners easily eclipses $100,000,000,” it reads.

There is no calculation or evidence to back the $100 million claim, which seems quite substantial. However, according to Alamite Ventures there is no doubt that eTVnet is willingly operating a pirate service.

“…the eTVnet Conspirators know that their actions are illegal and have instituted a sophisticated scheme to avoid getting caught.”

“For example, the eTVnet Conspirators do not allow new users to access the stolen content library until they can verify that the new users are not “spies” or affiliated with content producers or law enforcement.”

Alamite Ventures hopes the court will agree and requests damages, as well as the shutdown of eTVnet in the United States.

A copy of the full complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

HiveMQ 3.3.2 released

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/hivemq-3-3-2-released/

The HiveMQ team is pleased to announce the availability of HiveMQ 3.3.2. This is a maintenance release for the 3.3 series and brings the following improvements:

  • Improved unsubscribe performance
  • Webinterface now shows a warning if different HiveMQ versions are in a cluster
  • At startup, HiveMQ shows the enabled cipher suites and protocols for TLS listeners in the log
  • The Web UI Dashboard now shows MQTT Publishes instead of all MQTT messages in the graphs
  • Updated integrated native SSL/TLS library to latest version
  • Improved message ordering while the cluster topology changes
  • Fixed a cosmetic NullPointerException with background cleanup jobs
  • Fixed an issue where Web UI Popups could not be closed on IE/Edge and Safari
  • Fixed an issue which could lead to an IllegalArgumentException with a QoS 0 message in a rare edge-case
  • Improved persistence migrations for updating single HiveMQ deployments
  • Fixed a reference counting issue
  • Fixed an issue with rolling upgrades if the AsyncMetricService is used while the update is in progress

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.3.x user.

Have a great day,
The HiveMQ Team

Amazon Linux 2 – Modern, Stable, and Enterprise-Friendly

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-linux-2-modern-stable-and-enterprise-friendly/

I’m getting ready to wrap up my work for the year, cleaning up my inbox and catching up on a few recent AWS launches that happened at and shortly after AWS re:Invent.

Last week we launched Amazon Linux 2. This is a modern version of Linux, designed to meet the security, stability, and productivity needs of enterprise environments while giving you timely access to new tools and features. It also includes all of the things that made the Amazon Linux AMI popular, including AWS integration, cloud-init, a secure default configuration, regular security updates, and AWS Support. From that base, we have added many new features including:

Long-Term Support – You can use Amazon Linux 2 in situations where you want to stick with a single major version of Linux for an extended period of time, perhaps to avoid re-qualifying your applications too frequently. This build (2017.12) is a candidate for LTS status; the final determination will be made based on feedback in the Amazon Linux Discussion Forum. Long-term support for the Amazon Linux 2 LTS build will include security updates, bug fixes, user-space Application Binary Interface (ABI), and user-space Application Programming Interface (API) compatibility for 5 years.

Extras Library – You can now get fast access to fresh, new functionality while keeping your base OS image stable and lightweight. The Amazon Linux Extras Library eliminates the age-old tradeoff between OS stability and access to fresh software. It contains open source databases, languages, and more, each packaged together with any needed dependencies.

Tuned Kernel – You have access to the latest 4.9 LTS kernel, with support for the latest EC2 features and tuned to run efficiently in AWS and other virtualized environments.

Systemd – Amazon Linux 2 includes the systemd init system, designed to provide better boot performance and increased control over individual services and groups of interdependent services. For example, you can indicate that Service B must be started only after Service A is fully started, or that Service C should start on a change in network connection status.

Wide Availability – Amazon Linux 2 is available in all AWS Regions in AMI and Docker image form. Virtual machine images for Hyper-V, KVM, VirtualBox, and VMware are also available. You can build and test your applications on your laptop or in your own data center and then deploy them to AWS.

Launching an Instance
You can launch an instance in all of the usual ways – AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, RunInstances, and via an AWS CloudFormation template. I’ll use the Console:
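
For readers who prefer the CLI, here is a rough sketch of an equivalent launch (the AMI ID, key pair, and security group are placeholders; look up the current Amazon Linux 2 AMI ID for your Region first):

# launch a t2.micro instance from an Amazon Linux 2 AMI
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx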

I’m interested in the Extras Library; here’s how I see which topics (lists of packages) are available:
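
On a running Amazon Linux 2 instance, the same list of topics is also available from the command line (a sketch; the amazon-linux-extras tool ships with the AMI):

# list Extras Library topics and whether they are enabled
amazon-linux-extras list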

As you can see, the library includes languages, editors, and web tools that receive frequent updates. Each topic contains all of the dependencies that are needed to install the package on Amazon Linux 2. For example, the Rust topic includes the cmake build system for Rust, cargo for Rust package maintenance, and the LLVM-based compiler toolchain for Rust.

Here’s how I install a topic (Emacs 25.3):
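
As a command-line sketch of the same step (the topic name follows the Emacs example above):

# enable the topic and install the packages it provides
sudo amazon-linux-extras install emacs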

SNS Updates
Many AWS customers use the Amazon Linux AMIs as a starting point for their own AMIs. If you do this and would like to kick off your build process whenever a new AMI is released, you can subscribe to an SNS topic:

You can be notified by email, invoke an AWS Lambda function, and so forth.

Available Now
Amazon Linux 2 is available now and you can start using it in the cloud and on-premises today! To learn more, read the Amazon Linux 2 LTS Candidate (2017.12) Release Notes.

Jeff;

 

MQTT 5: Is it time to upgrade to MQTT 5 yet?

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/mqtt-5-time-to-upgrade-yet/


Is it time to upgrade to MQTT 5 yet?

Welcome to this week’s blog post! After last week’s Introduction to MQTT 5, many readers wondered when the successor to MQTT 3.1.1 will be ready for prime time and whether it can be used in new and existing projects.

Before we try to answer that question in more detail, we’d love to hear your thoughts about upgrading to MQTT 5. We have prepared a small survey below. Let us know what your MQTT 5 upgrade plans are!

The MQTT 5 OASIS Standard

As of late December 2017, the MQTT 5 specification is not yet available as an official “Committee Specification”. In other words: MQTT 5 has not been officially released yet. The foundation for every implementation of the standard is its official release by the Technical Committee at OASIS.

The good news: Although no official version of the standard is available yet, fundamental changes to the current state of the specification are not expected.
The Public Review phase of the “Committee Specification Draft 2” finished without any major comments or issues. We at HiveMQ expect the MQTT 5 standard to be released in very late December 2017 or January 2018.

Current state of client libraries

To start using MQTT 5, you need two participants: An MQTT 5 client library implementation in your programming language(s) of choice and an MQTT 5 broker implementation (like HiveMQ). If both components support the new standard, you are good to go and can use the new version in your projects.

When it comes to MQTT libraries, Eclipse Paho is the one-stop shop for MQTT clients in most programming languages. A recent Paho mailing list entry stated that Paho plans to release MQTT 5 client libraries at the end of June 2018 for the following programming languages:

  • C (+ embedded C)
  • Java
  • Go
  • C++

If you’re feeling adventurous, at least the Java Paho client has preliminary MQTT 5 support available. You can play around with the API and get a feel for the upcoming Paho version. Just build the library from source and test it, but be aware that this is not safe for production use.

There is also a very basic test broker implementation available at Eclipse Paho which can be used for playing around. This is of course only for very basic tests and does not support all MQTT 5 features yet. If you’re planning to write your own library, this may be a good tool to test your implementation against.

There are also other individual MQTT library projects which may be worth checking out. As of December 2017, most of these libraries haven’t published an MQTT 5 roadmap yet.

HiveMQ and MQTT 5

Having a client that is ready for MQTT 5 is not enough to use the new version of the protocol. The counterpart, the MQTT broker, also needs to fully support the new protocol version. At the time of writing, no broker is MQTT 5 ready yet.

HiveMQ was the first broker to fully support version 3.1.1 of MQTT and of course here at HiveMQ we are committed to give our customers the advantage of the new features of version 5 of the protocol as soon as possible and viable.

We are going to provide an Early Access version of the upcoming HiveMQ generation with MQTT 5 support by May/June 2018. If you’re a library developer or want to go live with the new protocol version as soon as possible: The Early Access version is for you. Add yourself to the Early Access Notification List and we’ll notify you when the Early Access version is available.

We expect to release the upcoming HiveMQ generation in the third quarter of 2018 with full support of ALL MQTT 5 features at scale in an interoperable way with previous MQTT versions.

When is MQTT 5 ready for prime time?

MQTT is typically used in mission critical environments where it’s not acceptable that parts of the infrastructure, broker or client, are unreliable or have some rough edges. So it’s typically not advisable to be the very first to try out new things in a critical production environment.

Here at HiveMQ we expect that the first users will go live to production in late Q3 (September) 2018 and in the subsequent months. After the releases of the Paho library in June and the HiveMQ Early Access version, the adoption of MQTT 5 is expected to increase rapidly.

So, is MQTT 5 ready for prime time yet (as of December 2017)? No.

Will the new version of the protocol be suitable for production environments in the second half of 2018? Yes, definitely.

Upcoming topics in this series

We will continue this blog post series in January after the European Christmas Holidays. To kick-off the technical part of the series, we will take a look at the foundational changes of the MQTT protocol. And after that, we will release one blog post per week that will thoroughly review and inspect one new feature in detail together with best practices and fun trivia.

If you want us to send the next and all upcoming articles directly into your inbox, just use the newsletter subscribe form below.

P.S. Don’t forget to let us know if MQTT 5 is of interest for you by participating in this quick poll.

Have an awesome week,
The HiveMQ Team

Power data ingestion into Splunk using Amazon Kinesis Data Firehose

Post Syndicated from Tarik Makota original https://aws.amazon.com/blogs/big-data/power-data-ingestion-into-splunk-using-amazon-kinesis-data-firehose/

In late September, during the annual Splunk .conf, Splunk and Amazon Web Services (AWS) jointly announced that Amazon Kinesis Data Firehose now supports Splunk Enterprise and Splunk Cloud as a delivery destination. This native integration between Splunk Enterprise, Splunk Cloud, and Amazon Kinesis Data Firehose is designed to make AWS data ingestion setup seamless, while offering a secure and fault-tolerant delivery mechanism. We want to enable customers to monitor and analyze machine data from any source and use it to deliver operational intelligence and optimize IT, security, and business performance.

With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose.

Push vs. Pull data ingestion

Presently, customers use a combination of two ingestion patterns, primarily based on data source and volume, in addition to existing company infrastructure and expertise:

  1. Pull-based approach: Using dedicated pollers running the popular Splunk Add-on for AWS to pull data from various AWS services such as Amazon CloudWatch or Amazon S3.
  2. Push-based approach: Streaming data directly from AWS to Splunk HTTP Event Collector (HEC) by using AWS Lambda. Examples of applicable data sources include CloudWatch Logs and Amazon Kinesis Data Streams.

The pull-based approach offers data delivery guarantees such as retries and checkpointing out of the box. However, it requires more operational effort to manage and orchestrate the dedicated pollers, which commonly run on Amazon EC2 instances. With this setup, you pay for the infrastructure even when it’s idle.

On the other hand, the push-based approach offers a low-latency scalable data pipeline made up of serverless resources like AWS Lambda sending directly to Splunk indexers (by using Splunk HEC). This approach translates into lower operational complexity and cost. However, if you need guaranteed data delivery then you have to design your solution to handle issues such as a Splunk connection failure or Lambda execution failure. To do so, you might use, for example, AWS Lambda Dead Letter Queues.

How about getting the best of both worlds?

Let’s go over the new integration’s end-to-end solution and examine how Kinesis Data Firehose and Splunk together expand the push-based approach into a native AWS solution for applicable data sources.

By using a managed service like Kinesis Data Firehose for data ingestion into Splunk, we provide out-of-the-box reliability and scalability. One of the pain points of the old approach was the overhead of managing the data collection nodes (Splunk heavy forwarders). With the new Kinesis Data Firehose to Splunk integration, there are no forwarders to manage or set up. Data producers (1) are configured through the AWS Management Console to drop data into Kinesis Data Firehose.

You can also create your own data producers. For example, you can drop data into a Firehose delivery stream by using Amazon Kinesis Agent, or by using the Firehose API (PutRecord(), PutRecordBatch()), or by writing to a Kinesis Data Stream configured to be the data source of a Firehose delivery stream. For more details, refer to Sending Data to an Amazon Kinesis Data Firehose Delivery Stream.
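
For example, a minimal sketch of pushing a single test record with the AWS CLI (the stream name matches the delivery stream created later in this walkthrough, and the payload is a placeholder):

# send one test record to the Firehose delivery stream
# note: newer AWS CLI versions expect the Data blob to be base64-encoded
aws firehose put-record \
  --delivery-stream-name FirehoseSplunkDeliveryStream \
  --record Data="sample event payload"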

You might need to transform the data before it goes into Splunk for analysis. For example, you might want to enrich it or filter or anonymize sensitive data. You can do so using AWS Lambda. In this scenario, Kinesis Data Firehose buffers data from the incoming source data, sends it to the specified Lambda function (2), and then rebuffers the transformed data to the Splunk Cluster. Kinesis Data Firehose provides the Lambda blueprints that you can use to create a Lambda function for data transformation.

Systems fail all the time. Let’s see how this integration handles outside failures to guarantee data durability. In cases when Kinesis Data Firehose can’t deliver data to the Splunk Cluster, data is automatically backed up to an S3 bucket. You can configure this feature while creating the Firehose delivery stream (3). You can choose to back up all data or only the data that’s failed during delivery to Splunk.

In addition to using S3 for data backup, this Firehose integration with Splunk supports Splunk Indexer Acknowledgments to guarantee event delivery. This feature is configured on Splunk’s HTTP Event Collector (HEC) (4). It ensures that HEC returns an acknowledgment to Kinesis Data Firehose only after data has been indexed and is available in the Splunk cluster (5).

Now let’s look at a hands-on exercise that shows how to forward VPC flow logs to Splunk.

How-to guide

To process VPC flow logs, we implement the following architecture.

Amazon Virtual Private Cloud (Amazon VPC) delivers flow log files into an Amazon CloudWatch Logs group. Using a CloudWatch Logs subscription filter, we set up real-time delivery of CloudWatch Logs to an Kinesis Data Firehose stream.

Data coming from CloudWatch Logs is compressed with gzip compression. To work with this compression, we need to configure a Lambda-based data transformation in Kinesis Data Firehose to decompress the data and deposit it back into the stream. Firehose then delivers the raw logs to the Splunk HTTP Event Collector (HEC).

If delivery to the Splunk HEC fails, Firehose deposits the logs into an Amazon S3 bucket. You can then ingest the events from S3 using an alternate mechanism such as a Lambda function.

When data reaches Splunk (Enterprise or Cloud), Splunk parsing configurations (packaged in the Splunk Add-on for Kinesis Data Firehose) extract and parse all fields. They make data ready for querying and visualization using Splunk Enterprise and Splunk Cloud.

Walkthrough

Install the Splunk Add-on for Amazon Kinesis Data Firehose

The Splunk Add-on for Amazon Kinesis Data Firehose enables Splunk (be it Splunk Enterprise, Splunk App for AWS, or Splunk Enterprise Security) to use data ingested from Amazon Kinesis Data Firehose. Install the Add-on on all the indexers with an HTTP Event Collector (HEC). The Add-on is available for download from Splunkbase.

HTTP Event Collector (HEC)

Before you can use Kinesis Data Firehose to deliver data to Splunk, set up the Splunk HEC to receive the data. From Splunk web, go to the Settings menu, choose Data Inputs, and choose HTTP Event Collector. Choose Global Settings, ensure All tokens is enabled, and then choose Save. Then choose New Token to create a new HEC endpoint and token. When you create a new token, make sure that Enable indexer acknowledgment is checked.

When prompted to select a source type, select aws:cloudwatch:vpcflow.
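
Before wiring up Firehose, you can check that the token accepts events. A hedged sketch with curl (the host, token, and channel ID are placeholders; the channel header is needed because indexer acknowledgment is enabled on this token):

# send a test event to the HEC endpoint
curl -k "https://your-splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk YOUR-HEC-TOKEN" \
  -H "X-Splunk-Request-Channel: 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "hello from HEC", "sourcetype": "aws:cloudwatch:vpcflow"}'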

Create an S3 backsplash bucket

To provide for situations in which Kinesis Data Firehose can’t deliver data to the Splunk Cluster, we use an S3 bucket to back up the data. You can configure this feature to back up all data or only the data that’s failed during delivery to Splunk.

Note: Bucket names are unique. Thus, you can’t use tmak-backsplash-bucket.

aws s3api create-bucket --bucket tmak-backsplash-bucket --region ap-northeast-1 --create-bucket-configuration LocationConstraint=ap-northeast-1

Create an IAM role for the Lambda transform function

Firehose triggers an AWS Lambda function that transforms the data in the delivery stream. Let’s first create a role for the Lambda function called LambdaBasicRole.

Note: You can also set this role up when creating your Lambda function.

$ aws iam create-role --role-name LambdaBasicRole --assume-role-policy-document file://TrustPolicyForLambda.json

Here is TrustPolicyForLambda.json.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

 

After the role is created, attach the managed Lambda basic execution policy to it.

$ aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole \
  --role-name LambdaBasicRole

 

Create a Firehose Stream

On the AWS console, open the Amazon Kinesis service, go to the Firehose console, and choose Create Delivery Stream.

In the next section, you can specify whether you want to use an inline Lambda function for transformation. Because incoming CloudWatch Logs are gzip compressed, choose Enabled for Record transformation, and then choose Create new.

From the list of the available blueprint functions, choose Kinesis Data Firehose CloudWatch Logs Processor. This function unzips the data and places it back into the Firehose stream in compliance with the record transformation output model.

Enter a name for the Lambda function, choose Choose an existing role, and then choose the role you created earlier. Then choose Create Function.

Go back to the Firehose Stream wizard, choose the Lambda function you just created, and then choose Next.

Select Splunk as the destination, and enter your Splunk HTTP Event Collector information.

Note: Amazon Kinesis Data Firehose requires the Splunk HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint. You receive delivery errors if you are using a self-signed certificate.

In this example, we only back up logs that fail during delivery.

To monitor your Firehose delivery stream, enable error logging. Doing this means that you can monitor record delivery errors.

Create an IAM role for the Firehose stream by choosing Create new, or Choose. Doing this brings you to a new screen. Choose Create a new IAM role, give the role a name, and then choose Allow.

If you look at the policy document, you can see that the role gives Kinesis Data Firehose permission to publish error logs to CloudWatch, execute your Lambda function, and put records into your S3 backup bucket.

You now get a chance to review and adjust the Firehose stream settings. When you are satisfied, choose Create Stream. You get a confirmation once the stream is created and active.

Create a VPC Flow Log

To send events from Amazon VPC, you need to set up a VPC flow log. If you already have a VPC flow log you want to use, you can skip to the “Publish CloudWatch to Kinesis Data Firehose” section.

On the AWS console, open the Amazon VPC service. Then choose VPC, Your VPC, and choose the VPC you want to send flow logs from. Choose Flow Logs, and then choose Create Flow Log. If you don’t have an IAM role that allows your VPC to publish logs to CloudWatch, choose Set Up Permissions and Create new role. Use the defaults when presented with the screen to create the new IAM role.
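
If you prefer the AWS CLI, the equivalent call looks roughly like this (the VPC ID and IAM role name are placeholders; the log group name matches the one used for the subscription filter later on):

# create a VPC flow log that publishes to a CloudWatch Logs group
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name "/vpc/flowlog/FirehoseSplunkDemo" \
  --deliver-logs-permission-arn "arn:aws:iam::YOUR-AWS-ACCT-NUM:role/VPCFlowLogsRole"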

Once active, your VPC flow log should look like the following.

Publish CloudWatch to Kinesis Data Firehose

When you generate traffic to or from your VPC, the log group is created in Amazon CloudWatch. The new log group has no subscription filter, so set up a subscription filter. Setting this up establishes a real-time data feed from the log group to your Firehose delivery stream.

At present, you have to use the AWS Command Line Interface (AWS CLI) to create a CloudWatch Logs subscription to a Kinesis Data Firehose stream. However, you can use the AWS console to create subscriptions to Lambda and Amazon Elasticsearch Service.

To allow CloudWatch to publish to your Firehose stream, you need to give it permissions.

$ aws iam create-role --role-name CWLtoKinesisFirehoseRole --assume-role-policy-document file://TrustPolicyForCWLToFireHose.json


Here is the content for TrustPolicyForCWLToFireHose.json.

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

 

Attach the policy to the newly created role.

$ aws iam put-role-policy \
    --role-name CWLtoKinesisFirehoseRole \
    --policy-name Permissions-Policy-For-CWL \
    --policy-document file://PermissionPolicyForCWLToFireHose.json

Here is the content for PermissionPolicyForCWLToFireHose.json.

{
    "Statement":[
      {
        "Effect":"Allow",
        "Action":["firehose:*"],
        "Resource":["arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/ FirehoseSplunkDeliveryStream"]
      },
      {
        "Effect":"Allow",
        "Action":["iam:PassRole"],
        "Resource":["arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoKinesisFirehoseRole"]
      }
    ]
}

Finally, create a subscription filter.

$ aws logs put-subscription-filter \
   --log-group-name "/vpc/flowlog/FirehoseSplunkDemo" \
   --filter-name "Destination" \
   --filter-pattern "" \
   --destination-arn "arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/FirehoseSplunkDeliveryStream" \
   --role-arn "arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoKinesisFirehoseRole"

When you run the preceding AWS CLI command, you don’t get any acknowledgment. To validate that your CloudWatch Log Group is subscribed to your Firehose stream, check the CloudWatch console.
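
You can also confirm the subscription from the CLI (a sketch):

# list subscription filters on the log group; the Firehose destination ARN should appear
aws logs describe-subscription-filters \
  --log-group-name "/vpc/flowlog/FirehoseSplunkDemo"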

As soon as the subscription filter is created, the real-time log data from the log group goes into your Firehose delivery stream. Your stream then delivers it to your Splunk Enterprise or Splunk Cloud environment for querying and visualization. The screenshot following is from Splunk Enterprise.

In addition, you can monitor and view metrics associated with your delivery stream using the AWS console.

Conclusion

Although our walkthrough uses VPC Flow Logs, the pattern can be used in many other scenarios. These include ingesting data from AWS IoT, other CloudWatch logs and events, Kinesis Streams or other data sources using the Kinesis Agent or Kinesis Producer Library. We also used Lambda blueprint Kinesis Data Firehose CloudWatch Logs Processor to transform streaming records from Kinesis Data Firehose. However, you might need to use a different Lambda blueprint or disable record transformation entirely depending on your use case. For an additional use case using Kinesis Data Firehose, check out This is My Architecture Video, which discusses how to securely centralize cross-account data analytics using Kinesis and Splunk.

 


Additional Reading

If you found this post useful, be sure to check out Integrating Splunk with Amazon Kinesis Streams and Using Amazon EMR and Hunk for Rapid Response Log Analysis and Review.


About the Authors

Tarik Makota is a solutions architect with the Amazon Web Services Partner Network. He provides technical guidance, design advice and thought leadership to AWS’ most strategic software partners. His career includes an extremely broad range of software development and architecture roles across ERP, financial printing, benefit delivery and administration, and financial services. He holds an M.S. in Software Development and Management from Rochester Institute of Technology.

 

 

 

Roy Arsan is a solutions architect in the Splunk Partner Integrations team. He has a background in product development, cloud architecture, and building consumer and enterprise cloud applications. More recently, he has architected Splunk solutions on major cloud providers, including an AWS Quick Start for Splunk that enables AWS users to easily deploy distributed Splunk Enterprise straight from their AWS console. He’s also the co-author of the AWS Lambda blueprints for Splunk. He holds an M.S. in Computer Science Engineering from the University of Michigan.

 

 

 

Lorelei Joins The Operations Crew

Post Syndicated from Yev original https://www.backblaze.com/blog/lorelei-joins-operations-crew/

We’ve eclipsed the 400 Petabyte mark and our data center continues to grow. What does that mean? It means we need more great people working in our data centers making sure that the hard drives keep spinning and that sputtering drives are promptly dealt with. Lorelei is the newest Data Center Technician to join our ranks. Let’s learn a bit more about Lorelei, shall we?

What is your Backblaze Title?
DC Tech!! I’m the saucy one.

Where are you originally from?
San Francisco/Bowling Green, Ohio. Just moved up to Sacramento this year, and it’s so nice to have four seasons again. I’m drowning in leaves but I’m totally OK with it.

What attracted you to Backblaze?
I was a librarian in my previous life, mainly because I believe that information should be open to everyone. I was familiar with Backblaze prior to joining the team, and I’m a huge fan of their fresh approach to sharing information and openness. The interview process was also the coolest one I’ll ever have!

What do you expect to learn while being at Backblaze?
A lot about Linux!

Where else have you worked?
A chocolate factory and a popular culture library.

Where did you go to school?
CSU East Bay, Bowling Green State University (go Falcons), and Clarion.

Favorite place you’ve traveled?
Stockholm & Tokyo! I hope to travel more in Asia and Europe.

Favorite hobby?
Music is not magic, but music is…
Come sing with me @ karaoke!

Favorite food?
I love trying new food. I love anything that’s acidic, sweet, fresh, salty, flavorful. Fruit is the best food, but everything else is good too. I’m one of those Yelp people: always seeking & giving food recs!

Why do you like certain things?
I like things that make me happy and that make other people happy. Have fun & enjoy life. Yeeeeehaw.

Welcome to the team Lorelei. And thank you very much for leaving Yelp reviews. It’s nice to give back to the community!

The post Lorelei Joins The Operations Crew appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.