Tag Archives: office

Game night 2: Detention, Viatoree, Paletta

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/16/game-night-2-detention-viatoree-paletta/

Game night continues with:

  • Detention
  • Viatoree
  • Paletta

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Detention

longish · inventory horror · jan 2017 · lin/mac/win · $12 on steam · website

“Inventory horror” is a hell of a genre.

I think this one came from a Twitter thread where glip asked for indie horror recommendations. It’s apparently well-known enough to have a Wikipedia article, but I hadn’t heard of it before.

I love love love the aesthetic here. It’s obviously 2Dish from a side view (though there’s plenty of parallax in a lot of places), and it’s all done with… papercraft? I think of it as papercraft. Everything is built out of painted chunks that look like they were cut out of paper. It’s most obvious when watching the protagonist move around; her legs and skirt swivel as she walks.

Less obvious are the occasional places where tiny details repeat in the background because a paper cutout was reused. I don’t bring that up as a dig on the art; on the contrary, I really liked noticing that once or twice. It made the world feel like it was made with a tileset (albeit with very large chunky tiles), like it’s slightly artificial. I’m used to seeing sidescrollers made from tiles, of course, but the tiles are usually colorful and cartoony pixel art; big gritty full-color tiles are unusual and eerie.

And that’s a good thing in a horror game! Detention’s setting is already slightly unreal, and it’s made all the more so by my Western perspective: it takes place in a Taiwanese school in the ’60s, a time when Taiwan was apparently under martial law. The Steam page tells you this, but I didn’t even know that much when we started playing, so I’d effectively been dropped somewhere on the globe and left to collect the details myself. Even figuring out we were in Taiwan (rather than mainland China) felt like an insight.

Thinking back, it was kind of a breath of fresh air. Games can be pretty heavy-handed about explaining the setting, but I never got that feeling from Detention. There’s more than enough context to get what’s going on, but there are no “stop and look at the camera while monologuing some exposition” moments. The developers are based in Taiwan, so it’s possible the setting is plenty familiar to them, and my perception of it is a complete accident. Either way, it certainly made an impact. Death of the author and whatnot, I suppose.

One thing in particular that stood out: none of the Chinese text in the environment is directly translated. The protagonist’s thoughts still give away what it says — “this is the nurse’s office” and the like — but that struck me as pretty different from simply repeating the text in English as though I were reading a sign in an RPG. The text is there, perfectly legible, but I can’t read it; I can only ask the protagonist to read it and offer her thoughts. It drives home that I’m experiencing the world through the eyes of the protagonist, who is their own person with their own impression of everything. Again, this is largely an emergent property of the game’s being designed in a culture that is not mine, but I’m left wondering how much thought went into this style of localization.

The game itself sees you wandering through a dark and twisted version of the protagonist’s school, collecting items and solving puzzles with them. There’s no direct combat, though some places feature a couple of varieties of spirits called “lingered,” which you have to carefully avoid. As the game progresses, the world starts to break down, alternating between increasingly abstract and increasingly concrete as we find out who the protagonist is and why she’s here.

The payoff is very personal and left a lasting impression… though as I look at the Wikipedia page now, it looks like the ending we got was the non-canon bad ending?! Well, hell. The bad ending is still great, then.

The whole game has a huge Silent Hill vibe, only without the combat and fog. Frankly, the genre might work better without combat; personal demons are more intimidating and meaningful when you can’t literally shoot them with a gun until they’re dead.

FINAL SCORE: 拾

Viatoree

short · platformer · sep 2013 · win · free on itch

I found this because @itchio tweeted about it, and the phrase “atmospheric platform exploration game” is the second most beautiful sequence of words in the English language.

The first paragraph on the itch.io page tells you the setup. That paragraph also contains more text than the entire game. In short: there are five things, and you need to find them. You can walk, jump, and extend your arms straight up to lift yourself to the ceiling. That’s it. No enemies, no shooting, no NPCs (more or less).

The result is, indeed, an atmospheric platform exploration game. The foreground is entirely 1-bit pixel art, save for the occasional white pixel to indicate someone’s eyes, and the background is only a few shades of the same purple hue. The game becomes less about playing and more about just looking at the environmental detail, appreciating how much texture the game manages to squeeze out of chunky colorless pixels. The world is still alive, too, much more so than in most platformers; tiny critters appear here and there, doing some wandering of their own, completely oblivious to you.

The game is really short, but it… just… makes me happy. I’m happy that this can exist, that not only is it okay for someone to make a very compact and short game, but that the result can still resonate with me. Not everything needs to be a sprawling epic or ask me to dedicate hours of time. It takes a few tiny ideas, runs with them, does what it came to do, and ends there. I love games like this.

That sounds silly to write out, but it’s been hard to get into my head! I do like experimenting, but I also feel compelled to reach for the grandiose, and grandiose experiment sounds more like mad science than creative exploration. For whatever reason, Viatoree convinced me that it’s okay to do a small thing, in a way that no other jam game has. It was probably the catalyst that led me to make Roguelike Simulator, and I thank it for that.

Unfortunately, we collected four of the five macguffins before hitting upon a puzzle we couldn’t make heads or tails of. After about ten minutes of fruitless searching, I decided to abandon this one unfinished, rather than bore my couch partner to tears. Maybe I’ll go take another stab at it after I post this.

FINAL SCORE: ●●●●○

Paletta

medium · puzzle story · nov 2017 · win · free on itch

Paletta, another RPG Maker work, won second place in the month-long Indie Game Maker Contest 2017. Nice! Apparently MOOP came in fourth in the same jam; also nice! I guess that’s why both of them ended up on the itch front page.

The game is set in a world drained of color, and you have to go restore it. Each land contains one lost color, and each color gives you a corresponding spell, which is generally used for some light puzzle-solving in further lands. It’s a very cute and light-hearted game, and it actually does an impressive job of obscuring its RPG Maker roots.

The world feels a little small to me, despite having fairly spacious maps. The progression is pretty linear: you enter one land, talk to a small handful of NPCs, solve the one puzzle, get the color, and move on. I think all the areas were continuously connected, too, which may have thrown me off a bit — these areas are described as though they were vast regions, but they’re all a hundred feet wide and nestled right next to each other.

I love playing with color as a concept, and I wish the game had run further with it somehow. Rescuing a color does add some color back to the world, but at times it seemed like the color that reappeared was somewhat arbitrary? It’s not like you rescue green and now all the green is back. Thinking back on it now, I wonder if each rescued color actually changed a fixed set of sprites from gray to colorized? But it’s been a month (oops) and now I’m not sure.

I’m not trying to pick on the authors for the brevity of their jam game, which is also the first game they’ve ever finished. I enjoyed playing it and found it plenty charming! It just happens that this time, what left the biggest impression on me was a nebulous feeling that something was missing. I think that’s still plenty important to ponder.

FINAL SCORE: ❤️💛💚💙💜

Early Challenges: Managing Cash Flow

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/managing-cash-flow/

Cash flow projection charts

This post by Backblaze’s CEO and co-founder Gleb Budman is the eighth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency
  8. Early Challenges: Managing Cash Flow


Running out of cash is one of the quickest ways for a startup to go out of business. When you are starting a company the question of where to get cash is usually the top priority, but managing cash flow is critical for every stage in the lifecycle of a company. As a primarily bootstrapped but capital-intensive business, managing cash flow at Backblaze was and still is a key element of our success and requires continued focus. Let’s look at what we learned over the years.

Raising Your Initial Funding

When starting a tech business in Silicon Valley, the default assumption is that you will immediately try to raise venture funding. There are certainly many advantages to raising funding — not the least of which is that you don’t need to be cash-flow positive since you have cash in the bank and the expectation is that you will have a “burn rate,” i.e. you’ll be spending more than you make.

Note: While you’re not expected to be cash-flow positive, that doesn’t mean you don’t have to worry about cash. Cash-flow management will determine your burn rate. Whether you can get to cash-flow breakeven or need to raise another round of funding is a direct byproduct of your cash flow management.

Also, raising funding takes time (most successful fundraising cycles take 3-6 months start-to-finish), and time at a startup is in short supply. Constantly trying to raise funding can take away from product development and pursuing growth opportunities. If you’re not successful in raising funding, you then have to either shut down or find an alternate method of funding the business.

Sources of Funding

Depending on the stage of the company, type of company, and other factors, you may have access to different sources of funding. Let’s list a number of them:

Customers

Sales — the best kind of funding. It is non-dilutive, doesn’t have to be paid back, and is a direct metric of the success of your company.

Pre-Sales — some customers may be willing to pay you for a product in beta, a test, or pre-pay for a product they’ll receive when finished. Pre-Sales income also is great because it shares the characteristics of cash from sales, but you get the cash early. It also can be a good sign that the product you’re building fills a market need. We started charging for Backblaze computer backup while it was still in private beta, which allowed us to not only collect cash from customers, but also test the billing experience and users’ real desire for the service.

Services — if you’re a service company and customers are paying you for that work, great. Your capacity is effectively capped by the number of billable hours in a day, but as demand grows, you can add more employees to increase the total number of billable hours.

Note: If you’re a product company and customers are paying you to consult, that can provide much-needed cash, and could provide feedback toward the right product. However, it can also distract from your core business, send you down a path where you’re building a product for a single customer, and lock you into a path that prevents you from building a scalable business.

Investors

Yourself — you likely are putting your time into the business, and deferring salary in the process. You may also put your own cash into the business either as an investment or a loan.

Angels — angels are ideal as early investors since they are used to investing in businesses with little to no traction. AngelList is a good place to find them, though an introduction through someone who knows you well is best.

Crowdfunding — a component of the JOBS Act permitted entrepreneurs to raise money from nearly anyone since May 2016. The SEC imposes limits on both investors and the companies. This article goes into some depth on the options and sites available.

VCs — VCs are ideal for companies that need to raise at least a few million dollars and intend to build a business that will be worth over $1 billion.

Debt

Friends & Family — F&F are often the first people to give you money because they are investing in you. It’s great to have some early supporters, but it also can be risky to take money from people who aren’t used to the risks. The key advice here is to only take money from people who won’t mind losing it. If someone is talking about using their children’s college funds or borrowing from their 401k, say ‘no thank you’ — even if they’re sure they want to loan you money.

Bank Loans — a variety of loan types exist, but most either require the company to have been operational for a couple years, be able to borrow against money the company has or is making, or be able to get a personal guarantee from the founders whereby their own credit is on the line. Fundera provides a good overview of loan options and can help secure some, but most will not be an option for a brand new startup.

Grants

Government — in some areas there is the potential for government grants to facilitate research. The SBIR program facilitates some such grants.

At Backblaze, we used a number of these options:

• Investors/Yourself
We loaned a cumulative total of a couple hundred thousand dollars to the company and invested our time by going without a salary for a year and a half.
• Customers/Pre-Sales
We started selling the Backblaze service while it was still in beta.
• Customers/Sales
We launched v1.0 and kept selling.
• Investors/Angels
After a year and a half, we raised $370k from 11 angels. All of them were either people whom we knew personally or were a strong recommendation from a mutual friend.
• Debt/Loans
After a couple years we were able to get equipment leases whereby the Storage Pods and hard drives were used as collateral to secure the lease on them.
• Investors/VCs
After five years we raised $5m from TMT Investments to add to the balance sheet and invest in growth.

The variety and quantity of sources we used is by no means uncommon.

GAAP vs. Cash

Most companies start tracking financials based on cash, and as they scale they switch to GAAP (Generally Accepted Accounting Principles). Cash is easier to track — we got paid $XXXX and spent $YYY — and as often mentioned, is required for the business to stay alive. GAAP has more subtlety and complexity, but provides a clearer picture of how the business is really doing. Backblaze was on a ‘cash’ system for the first few years, then switched to GAAP. For this post, I’m going to focus on things that help cash flow, not GAAP profitability.

Stages of Cash Flow Management

All-spend

In a pure service business (e.g. a sole-proprietor law firm), you may have no expenses other than your time, so this stage doesn’t exist. However, in a product business there is a period of time when you are building the product and have nothing to sell. You have zero cash coming in, but have cash going out. Your cash flow is completely negative and you need funds to cover that.

Sales-generating

Starting to see cash come in from customers is thrilling. I initially had our system set up to email me with every $5 payment we received. You’re making sales, but not covering expenses.

Ramen-profitable

But it takes a lot of $5 payments to pay for servers and salaries, so for a while expenses are likely to outstrip sales. Getting to ramen-profitable is a critical stage where sales cover the business expenses and are “paying enough for the founders to eat ramen.” This extends the runway for a business, but is not completely sustainable, since presumably the founders can’t (or won’t) live forever on a subsistence salary.

Business-profitable

This is the ultimate stage whereby the business is truly profitable, including paying everyone market-rate salaries. A business at this stage is self-sustaining. (Of course, market shifts and plenty of other challenges can kill the business, but cash-flow issues alone will not.)

Note: I’m using the word ‘profitable’ here to mean profitable on a cash basis.

Backblaze was in the all-spend stage for just over a year, during which time we built the service and hadn’t yet made the service available to customers. Backblaze was in the sales-generating stage for nearly another year before the company was barely ramen-profitable where sales were covering the company expenses and paying the founders minimum wage. (I say ‘barely’ since minimum wage in the SF Bay Area is arguably never subsistence.) It took almost three more years before the company was business-profitable, paying everyone including the founders market-rate.

Cash Flow Forecasting

When raising funding, it’s helpful to think in terms of milestones. You don’t necessarily need enough cash on day one to last for the next 100 years of the company. Some good milestones to consider: how much cash do you need to prove there is a market need, to prove you can build a product to meet that need, or to get to ramen-profitable?

Two things to consider:

1) Unit Economics (COGS)

If your product is 100% software, this may not be relevant. Once software is built it costs effectively nothing to deliver the product to one customer or one million customers. However, in most businesses there is some incremental cost to provide the product. If you’re selling a hardware device, perhaps you sell it for $100 but it costs you $50 to make it. This is called “COGS” (Cost of Goods Sold).

Many products rely on cloud services where the costs scale with growth. That model works great, but it’s still important to understand what the costs are for the cloud service you use per unit of product you sell.

Support is often done by the founders early on in a business, but that is another real cost to factor in and estimate on a per-user basis. Taking all of the per-unit costs combined, you may charge $10/month/user for your service, but if it costs you $7/month/user in cloud services, you’re only netting $3/month/user.

2) Operating Expenses (OpEx)

These are expenses that don’t scale with the number of product units you sell. Typically this includes research & development, sales & marketing, and general & administrative expenses. Presumably there is a certain level of these functions required to build the product, market it, sell it, and run the organization. You can choose to invest or cut back on these, but you’ll still make the same amount per product unit.

Incremental Net Profit Per Unit

If you’ve calculated your COGS and your unit economics are “upside down,” where the amount you charge is less than what it costs you to provide your service, it’s worth thinking hard about how that’s going to change over time. If it will not change, there is no scale that will make the business work. Presuming you do make money on each unit of product you sell — what is sometimes referred to as “Contribution Margin” — consider how many of those product units you need to sell to cover your operating expenses as described above.

Calculating Your Profit

The math on getting to ramen-profitable is simple:

(Number of Product Units Sold x Contribution Margin) - Operating Expenses = Profit

If your operating expenses include subsistence salaries for the founders and profit > $0, you’re ramen-profitable.
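To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The per-user price and cost echo the illustrative $10/$7 figures from the unit economics section above; the unit count and operating expense figures are made-up placeholders, not Backblaze numbers.

# Illustrative numbers only -- plug in your own.
price_per_user = 10.00         # what you charge, per user per month
cogs_per_user = 7.00           # cloud services + support cost, per user per month
contribution_margin = price_per_user - cogs_per_user   # $3/user/month

units_sold = 6000              # paying users (placeholder)
operating_expenses = 15000.00  # monthly OpEx, incl. subsistence founder salaries (placeholder)

profit = units_sold * contribution_margin - operating_expenses
print("Monthly profit: $%.2f" % profit)   # $3000.00 with these numbers
if profit > 0:
    print("Ramen-profitable!")
else:
    units_needed = operating_expenses / contribution_margin
    print("Need %.0f units to break even" % units_needed)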

Improving Cash Flow

Having access to sources of cash, whether from selling to customers or other methods, is excellent. But needing less cash gives you more choices and allows you to either dilute less, owe less, or invest more.

There are two ways to improve cash flow:

1) Collect More Cash

The best way to collect more cash is to provide more value to your customers and as a result have them pay you more. Additional features/products/services can allow this. However, you can also collect more cash by changing how you charge for your product. If you have a subscription, changing from charging monthly to yearly dramatically improves your cash flow. If you have a product that customers use up, selling a year’s supply instead of selling them one-by-one can help.
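As a toy illustration of the billing-cadence point (the subscriber count and price are made up, and any discount for annual plans is ignored):

# Cash collected in month one from 1,000 subscribers at $10/month.
subscribers = 1000
monthly_price = 10.00

cash_monthly_billing = subscribers * monthly_price       # $10,000 in the door now
cash_annual_billing = subscribers * monthly_price * 12   # $120,000 in the door now

print("Monthly billing collects $%.0f this month" % cash_monthly_billing)
print("Annual billing collects $%.0f up front" % cash_annual_billing)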

2) Spend Less Cash

Reducing COGS is a fantastic way to spend less cash in a scalable way. If you can do this without harming the product or customer experience, you win. There are also myriad ways to reduce operating expenses, including taking sub-market salaries, using your home instead of renting office space, staying focused on your core product, etc.

Ultimately, collecting more and spending less cash dramatically simplifies the process of getting to ramen-profitable and later to business-profitable.

Be Careful (Why GAAP Matters)

A word of caution: while running out of cash will put you out of business immediately, overextending yourself will likely put you out of business not much later. GAAP shows how a business is really doing; cash doesn’t. If you only focus on cash, it is possible to commit yourself to both delivering products and repaying loans in the future in an unsustainable fashion. If you’re taking out loans, watch the total balance and monthly payments you’re committing to. If you’re asking customers for pre-payment, make sure you believe you can deliver on what they’ve paid for.

Summary

There are numerous challenges to building a business, and ensuring you have enough cash is amongst the most important. Having the cash to keep going lets you keep working on all of the other challenges. The frameworks above were critical for maintaining Backblaze’s cash flow and cash balance. Hopefully you can take some of the lessons we learned and apply them to your business. Let us know what works for you in the comments below.

The post Early Challenges: Managing Cash Flow appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Now Open – Third AWS Availability Zone in London

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-third-aws-availability-zone-in-london/

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on their infrastructure costs. Clearscore, with plans to disrupt the credit score industry, has also chosen to host their entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to their customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than half a million users. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;

US Govt Brands Torrent, Streaming & Cyberlocker Sites As Notorious Markets

Post Syndicated from Andy original https://torrentfreak.com/us-govt-brands-torrent-streaming-cyberlocker-sites-as-notorious-markets-180115/

In its annual “Out-of-Cycle Review of Notorious Markets,” the office of the United States Trade Representative (USTR) has published a long list of websites said to be involved in online piracy.

The list is compiled with high-level input from various trade groups, including the MPAA and RIAA who both submitted their recommendations (1,2) during early October last year.

With the word “allegedly” used more than two dozen times in the report, the US government notes that its report does not constitute cast-iron proof of illegal activity. However, it urges the countries from where the so-called “notorious markets” operate to take action where they can, while putting owners and facilitators on notice that their activities are under the spotlight.

“A goal of the List is to motivate appropriate action by owners, operators, and service providers in the private sector of these and similar markets, as well as governments, to reduce piracy and counterfeiting,” the report reads.

“USTR highlights the following marketplaces because they exemplify global counterfeiting and piracy concerns and because the scale of infringing activity in these marketplaces can cause significant harm to U.S. intellectual property (IP) owners, consumers, legitimate online platforms, and the economy.”

The report begins with a page titled “Issue Focus: Illicit Streaming Devices”. Unsurprisingly, particularly given their place in dozens of headlines last year, the segment focuses on the set-top box phenomenon. The piece doesn’t list any apps or software tools as such but highlights the general position, claiming a cost to the US entertainment industry of $4-5 billion a year.

Torrent Sites

In common with previous years, the USTR goes on to list several of the world’s top torrent sites, but due to changes in circumstances, others have been delisted. ExtraTorrent, which shut down in May 2017, is one such example.

As the world’s most famous torrent site, The Pirate Bay gets a prominent mention, with the USTR noting that the site is of “symbolic importance as one of the longest-running and most vocal torrent sites.” The USTR underlines the site’s resilience by noting its hydra-like form while revealing an apparent secret concerning its hosting arrangements.

“The Pirate Bay has allegedly had more than a dozen domains hosted in various countries around the world, applies a reverse proxy service, and uses a hosting provider in Vietnam to evade further enforcement action,” the USTR notes.

Other torrent sites singled out for criticism include RARBG, which was nominated for the listing by the movie industry. According to the USTR, the site is hosted in Bosnia and Herzegovina and has changed hosting services to prevent shutdowns in recent years.

1337x.to and the meta-search engine Torrentz2 are also given a prime mention, with the USTR noting that they are “two of the most popular torrent sites that allegedly infringe U.S. content industry’s copyrights.” Russia’s RuTracker is also targeted for criticism, with the government noting that it’s now one of the most popular torrent sites in the world.

Streaming & Cyberlockers

While torrent sites are still important, the USTR reserves considerable space in its report for streaming portals and cyberlocker-type services.

4Shared.com, a file-hosting site that has been targeted by tens of millions of copyright notices, is reportedly no longer able to use major US payment providers. Nevertheless, the British Virgin Islands company still collects significant sums from premium accounts, advertising, and offshore payment processors, USTR notes.

Cyberlocker Rapidgator gets another prominent mention in 2017, with the USTR noting that the Russian-hosted platform generates millions of dollars every year through premium memberships while employing rewards and affiliate schemes.

Due to its increasing popularity as a hosting and streaming operation, Openload.co (Romania) is now a big target for the USTR. “The site is used frequently in combination with add-ons in illicit streaming devices. In November 2017, users visited Openload.co a staggering 270 million times,” the USTR writes.

Owned by a Swiss company and hosted in the Netherlands, the popular site Uploaded is also criticized by the US alongside France’s 1Fichier.com, which allegedly hosts pirate games while being largely unresponsive to takedown notices. Dopefile.pk, a Pakistan-based storage outfit, is also highlighted.

On the video streaming front, it’s perhaps no surprise that the USTR focuses on sites like FMovies (Sweden), GoStream (Vietnam), Movie4K.tv (Russia) and PrimeWire. An organization collectively known as the MovShare group, which encompasses Nowvideo.sx, WholeCloud.net, NowDownload.cd, MeWatchSeries.to and WatchSeries.ac, among others, is also listed.

Unauthorized music / research papers

While most of the above are either focused on video or feature it as part of their repertoire, other sites are listed for their attention to music. Convert2MP3.net is named as one of the most popular stream-ripping sites in the world and is highlighted due to the prevalence of YouTube-downloader sites and the 2017 demise of YouTube-MP3.

“Convert2MP3.net does not appear to have permission from YouTube or other sites and does not have permission from right holders for a wide variety of music represented by major U.S. labels,” the USTR notes.

Given the amount of attention the site has received in 2017 as ‘The Pirate Bay of Research’, Libgen.io and Sci-Hub.io (not to mention the endless proxy and mirror sites that facilitate access) are given a detailed mention in this year’s report.

“Together these sites make it possible to download — all without permission and without remunerating authors, publishers or researchers — millions of copyrighted books by commercial publishers and university presses; scientific, technical and medical journal articles; and publications of technological standards,” the USTR writes.

Service providers

But it’s not only sites that are being put under pressure. Following a growing list of nominations in previous years, Swiss service provider Private Layer is again singled out as a rogue player in the market for hosting 1337x.to and Torrentz2.eu, among others.

“While the exact configuration of websites changes from year to year, this is the fourth consecutive year that the List has stressed the significant international trade impact of Private Layer’s hosting services and the allegedly infringing sites it hosts,” the USTR notes.

“Other listed and nominated sites may also be hosted by Private Layer but are using reverse proxy services to obfuscate the true host from the public and from law enforcement.”

The USTR notes Switzerland’s efforts to close a legal loophole that restricts enforcement and looks forward to a positive outcome when the draft amendment is considered by parliament.

Perhaps a little surprisingly given its recent anti-piracy efforts and overtures to the US, Russia’s leading social network VK.com again gets a place on the new list. The USTR recognizes VK’s efforts but insists that more needs to be done.

Social networking and e-commerce

“In 2016, VK reached licensing agreements with major record companies, took steps to limit third-party applications dedicated to downloading infringing content from the site, and experimented with content recognition technologies,” the USTR writes.

“Despite these positive signals, VK reportedly continues to be a hub of infringing activity and the U.S. motion picture industry reports that they find thousands of infringing files on the site each month.”

Finally, in addition to traditional pirate sites, the US also lists online marketplaces that allegedly fail to meet appropriate standards. Re-added to the list in 2016 after a brief hiatus in 2015, China’s Alibaba is listed again in 2017. The development provoked an angry response from the company.

Describing his company as a “scapegoat”, Alibaba Group President Michael Evans said that his platform had achieved a 25% drop in takedown requests and has even been removing infringing listings before they make it online.

“In light of all this, it’s clear that no matter how much action we take and progress we make, the USTR is not actually interested in seeing tangible results,” Evans said in a statement.

The full list of sites in the Notorious Markets Report 2017 (pdf) can be found below.

– 1fichier.com – (cyberlocker)
– 4shared.com – (cyberlocker)
– convert2mp3.net – (stream-ripper)
– Dhgate.com (e-commerce)
– Dopefile.pk – (cyberlocker)
– Firestorm-servers.com (pirate gaming service)
– Fmovies.is, Fmovies.se, Fmovies.to – (streaming)
– Gostream.is, Gomovies.to, 123movieshd.to (streaming)
– Indiamart.com (e-commerce)
– Kinogo.club, kinogo.co (streaming host, platform)
– Libgen.io, sci-hub.io, libgen.pw, sci-hub.cc, sci-hub.bz, libgen.info, lib.rus.ec, bookfi.org, bookzz.org, booker.org, booksc.org, book4you.org, bookos-z1.org, booksee.org, b-ok.org (research downloads)
– Movshare Group – Nowvideo.sx, wholecloud.net, auroravid.to, bitvid.sx, nowdownload.ch, cloudtime.to, mewatchseries.to, watchseries.ac (streaming)
– Movie4k.tv (streaming)
– MP3VA.com (music)
– Openload.co (cyberlocker / streaming)
– 1337x.to (torrent site)
– Primewire.ag (streaming)
– Torrentz2, Torrentz2.me, Torrentz2.is (torrent site)
– Rarbg.to (torrent site)
– Rebel (domain company)
– Repelis.tv (movie and TV linking)
– RuTracker.org (torrent site)
– Rapidgator.net (cyberlocker)
– Taobao.com (e-commerce)
– The Pirate Bay (torrent site)
– TVPlus, TVBrowser, Kuaikan (streaming apps and addons, China)
– Uploaded.net (cyberlocker)
– VK.com (social networking)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Wanted: Junior Support Technician

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-junior-support-technician/

Backblaze is growing and as we grow we want to make sure that our customers are very well taken care of! One of the departments that grows along with our customer base is the support department, which is located in our San Mateo, California headquarters. Want to jump start your career? Take a look below and if this sounds like you, apply to join our team!

Responsibilities:

  • Answer questions in the support queue that are commonly covered in the FAQ.
  • Install and uninstall programs on Mac and PC.
  • Communicate clearly via email.
  • Learn and expand your knowledge base to become a Tech Support Agent.
  • Learn how to navigate the Zendesk support tool and create helpful macros and workflow views.
  • Create receipts for users that ask for them, via template.
  • Respond to any tickets that you get a reply to.
  • Ask questions to facilitate learning.
  • Obtain skills and knowledge to move into a Tier 2 position.

Requirements:

  • Excellent communication, time management, problem solving, and organizational skills.
  • Ability to learn quickly.
  • Position based in San Mateo, California.

Backblaze Employees Have:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small, fast-paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Comfortable with well-behaved pets in the office.

Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If This Sounds Like You:
Send an email to jobscontact@backblaze.com with:

  1. The position title in the subject line
  2. Your resume attached
  3. An overview of your relevant experience

The post Wanted: Junior Support Technician appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice; others want to solve a specific problem using ready-made software already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today: diverse technologies that customers are using day in and day out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • Create an automatically “resetting” demo environment for the sales team.
  • Set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • Learn the basics of OAuth and web single sign-on (SSO).
  • Archive video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and effective Google searching to fix problems excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language. Have basic understanding of data structures and program flow.
Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:
There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful for us are below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?
The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we.” We are truly a team.

We are honest with each other and our customers and communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company and we care deeply about their success.

If this all sounds like you:

  1. Send an email to [email protected] with the position in the subject line.
  2. Tell us a bit about your Sales Engineering experience.
  3. Include your resume.

The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Datacenter Technician

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-datacenter-technician/

As we shoot way past 400 petabytes of data under management, we need some help scaling up our datacenters! We’re on the lookout for datacenter technicians who can help us. This role is located near the Sacramento, California area. If you want to join a dynamic team that helps keep our almost 90,000 hard drives spinning, this might be the job for you!

Responsibilities

  • Work as Backblaze’s physical presence in Sacramento area datacenter(s).
  • Help maintain physical infrastructure, including racking equipment and replacing hard drives and other system components.
  • Repair and troubleshoot defective equipment with minimal supervision.
  • Support the datacenter’s 24×7 staff to install new equipment, handle after-hours emergencies, and perform other tasks.
  • Help manage onsite inventory of hard drives, cables, rails and other spare parts.
  • RMA defective components.
  • Set up, test and activate new equipment via the Linux command line.
  • Help train new Datacenter Technicians as needed.
  • Help with projects to install new systems and services as time allows.
  • Follow and improve Datacenter best practices and documentation.
  • Maintain a clean and well-organized work environment.
  • On-call responsibilities require being within an hour of SunGard’s Rancho Cordova/Roseville facility, with occasional 24×7 trips onsite to resolve issues that can’t be handled remotely.
  • Work days may include Saturday and/or Sunday (e.g. working Tuesday – Saturday).

Requirements

  • Excellent communication, time management, problem solving and organizational skills.
  • Ability to learn quickly.
  • Ability to lift/move 50-75 lbs and work down near the floor on a daily basis.
  • Position based near Sacramento, California and may require periodic visits to the corporate office in San Mateo.
  • May require travel to other Datacenters to provide coverage and/or to assist with new site set-up.

Backblaze Employees Have:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small, fast-paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Comfortable with well-behaved pets in the office.

This position is located near Sacramento, California. Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If This Sounds Like You:
Send an email to [email protected] with:

  1. Datacenter Tech in the subject line
  2. Your resume attached
  3. An overview of your relevant experience

The post Wanted: Datacenter Technician appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Fixed Assets Accountant

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-fixed-assets-accountant/

As Backblaze continues to grow, we’re expanding our accounting team! We’re looking for a seasoned Fixed Asset Accountant to help us with fixed assets and equipment leases.

Job Duties:

  • Maintain and review fixed assets.
  • Record fixed asset acquisitions and dispositions.
  • Review and update the detailed schedule of fixed assets and accumulated depreciation.
  • Calculate depreciation for all fixed assets.
  • Investigate the potential obsolescence of fixed assets.
  • Coordinate data center asset dispositions with the Operations team.
  • Conduct periodic physical inventory counts of fixed assets. Work with the Operations team on cycle counts.
  • Reconcile the balance in the fixed asset subsidiary ledger to the summary-level account in the general ledger.
  • Track company expenditures for fixed assets in comparison to the capital budget and management authorizations.
  • Prepare audit schedules relating to fixed assets, and assist the auditors in their inquiries.
  • Recommend to management any updates to accounting policies related to fixed assets.
  • Manage equipment leases.
  • Engage and negotiate acquisition of new equipment lease lines.
  • Maintain overall control of original lease documentation and master lease files.
  • Facilitate and track the routing and execution of various lease-related agreements, documents, and forms.
  • Establish and maintain proper controls to track expirations, renewal options, and all other critical dates.
  • Perform other duties and special projects as assigned.

Qualifications:

  • 5-6 years of relevant accounting experience.
  • Knowledge of inventory and cycle counting preferred.
  • QuickBooks, Excel, and Word experience desired.
  • Organized, with excellent attention to detail; meticulous; a quick learner.
  • Good interpersonal skills and a team player.
  • Flexibility and ability to adapt and wear different hats.

Backblaze Employees Have:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small, fast-paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Comfortable with well-behaved pets in the office.

This position is located in San Mateo, California. Regular attendance in the office is expected. Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If This Sounds Like You:
Send an email to [email protected] with:

  1. Fixed Asset Accountant in the subject line
  2. Your resume attached
  3. An overview of your relevant experience

The post Wanted: Fixed Assets Accountant appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Sky Hits Man With £5k ‘Fine’ For Pirating Boxing on Facebook

Post Syndicated from Andy original https://torrentfreak.com/sky-hits-man-with-5k-fine-for-pirating-boxing-on-facebook-180108/

When people download content online using BitTorrent, they also distribute that content to others. This unlawful distribution attracts negative attention from rightsholders, who have sued hundreds of thousands of individuals worldwide.

Streaming is considered a much safer method to obtain content, since it’s difficult for content owners to track downloaders. However, the same can’t be said about those who stream content to the web for the benefit of others, as an interesting case in the UK has just revealed.

It involves 34-year-old Craig Foster, who received several scary letters from lawyers representing broadcaster Sky. The company alleged that during last April’s bout between Anthony Joshua and Wladimir Klitschko, Foster live-streamed the multiple world title fight on Facebook Live.

Financially, this was a major problem for Sky, law firm Foot Anstey LLP told Foster. According to their calculations, at least 4,250 people watched the stream without paying Sky Box Office the going rate of £19.95 each. Based on data from its own systems, the broadcaster concluded that Foster owed the company £85,000.

But according to The Mirror, father-of-one Foster wasn’t actually to blame.

“I’d paid for the boxing, it wasn’t like I was making any money. My iPad was signed in to my Facebook account and my friend just started streaming the fight. I didn’t think anything of it, then a few days later they cut my subscription,” Foster said.

“They’re demanding the names and addresses of all my mates who were round that night but I’m not going to give them up. I said I’d take the rap.”

While Foster says he won’t turn in the culprit, there’s no doubt that the fight stream originated from his Sky account. The TV giant embeds watermarks in its broadcasts which enables it to see who paid for an event, should a copy of one turn up on the Internet.

As we reported last year following the Mayweather v McGregor super-fight, the codes are clearly visible with the naked eye.

Sky watermarks, as seen in the Mayweather v McGregor fight

While taking the rap for someone else’s infringing behavior isn’t something anyone should do lightly, it appears that Scarborough-based Foster did just that.

According to Neil Parkes, who specializes in media litigation, content protection and contentious IP at Foot Anstey, Foster accepted responsibility and agreed to pay a settlement.

“Mr Foster broke the law,” Parkes said. “He has acknowledged his wrongdoing, apologised and signed a legally binding agreement to pay a sum of £5,000 to Sky.”

The Mirror, however, has Foster backtracking. He says he wasn’t given enough time to consider his position and now wants to fight Sky in court.

“It’s heavy-handed. I’ve apologized and told them we were drunk,” Foster said.

“I know streaming the fight was wrong. I didn’t stop my friend but I was watching the boxing. I’m just a bloke who had a few drinks with his friends.”

Unless he can find a law firm willing to fight his corner at a hugely cut-down rate, Foster will find this kind of legal fisticuffs to be a massively expensive proposition, one in which he will start out as the clear underdog.

Not only was Foster’s Sky account the originating source, both his iPad and his Facebook account were used to stream the fight. On top of what appears to be a signed confession, he also promised not to do anything else like this in future. Furthermore, he even agreed to issue an apology that Sky can use in future anti-piracy messages.

Of course, Foster might indeed be a noble gentleman but he should be aware that as a civil matter, this fight would be decided on the balance of probabilities, not beyond reasonable doubt. If the judge decides 51% in Sky’s favor, he suffers a knockout along with a huge financial headache.

No one wants a £5,000 bill but that’s a drop in the ocean compared to the cost implications of losing this case.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

• Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
• Save the data in an Amazon S3 bucket.
• Query the data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, and Amazon Redshift (via Redshift Spectrum).

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes in data in an Aurora database table and save it in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

    Creating an Aurora database

    First, create a database by following these steps in the Amazon RDS console:

    1. Sign in to the AWS Management Console, and open the Amazon RDS console.
    2. Choose Launch a DB instance, and choose Next.
    3. For Engine, choose Amazon Aurora.
    4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
    5. In Multi-AZ deployment, choose No.
    6. Configure DB instance identifier, Master username, and Master password.
    7. Launch the DB instance.

    After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

    The following screenshot shows the MySQL Workbench configuration:

    Next, create a table in the database by running the following SQL statement:

    CREATE TABLE Sales (
    InvoiceID int NOT NULL AUTO_INCREMENT,
    ItemID int NOT NULL,
    Category varchar(255),
    Price double(10,2),
    Quantity int NOT NULL,
    OrderDate timestamp,
    DestinationState varchar(2),
    ShippingType varchar(255),
    Referral varchar(255),
    PRIMARY KEY (InvoiceID)
    );

    You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the placeholder values (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with appropriate values.

    #!/usr/bin/python
    import MySQLdb
    import random
    import datetime

    # Replace the placeholder connection values with your Aurora endpoint and credentials
    db = MySQLdb.connect(host="AURORA_CNAME",
                         user="DBUSER",
                         passwd="DBPASSWORD",
                         db="DB")

    states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
    "IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
    "NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
    "WA","WV","WI","WY")

    shipping_types = ("Free", "3-Day", "2-Day")

    product_categories = ("Garden", "Kitchen", "Office", "Household")
    referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

    cursor = db.cursor()

    for i in range(0, 10):
        item_id = random.randint(1, 100)
        state = random.choice(states)
        shipping_type = random.choice(shipping_types)
        product_category = random.choice(product_categories)
        quantity = random.randint(1, 4)
        referral = random.choice(referrals)
        price = random.randint(1, 100)
        # Cap the day at 28 so every month, including February, yields a valid date
        order_date = datetime.date(2016, random.randint(1, 12),
                                   random.randint(1, 28)).isoformat()

        data_order = (item_id, product_category, price, quantity, order_date,
                      state, shipping_type, referral)

        add_order = ("INSERT INTO Sales "
                     "(ItemID, Category, Price, Quantity, OrderDate, "
                     "DestinationState, ShippingType, Referral) "
                     "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

        cursor.execute(add_order, data_order)
        db.commit()

    cursor.close()
    db.close()

    The following screenshot shows how the table appears with the sample data:

    Sending data from Amazon Aurora to Amazon S3

    There are two methods available to send data from Amazon Aurora to Amazon S3:

    • Using a Lambda function
    • Using SELECT INTO OUTFILE S3

    To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

    Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
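
    For reference, the export statement itself is short. Here is a minimal sketch, run through the same MySQLdb connection as the sample-data script above; the bucket name is a placeholder, the exact S3 URI format depends on your Aurora version, and the cluster must have an IAM role (for example, via the aurora_select_into_s3_role cluster parameter) that allows writing to the bucket.

    import MySQLdb

    db = MySQLdb.connect(host="AURORA_CNAME", user="DBUSER",
                         passwd="DBPASSWORD", db="DB")

    # Sketch only: exports the Sales table directly to S3 from Aurora
    export_stmt = (
        "SELECT * FROM Sales "
        "INTO OUTFILE S3 's3-us-east-1://BUCKET_NAME/sales-export/sales' "
        "FIELDS TERMINATED BY ',' "
        "LINES TERMINATED BY '\\n'"
    )

    cursor = db.cursor()
    cursor.execute(export_stmt)
    cursor.close()
    db.close()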

    Creating a Kinesis data delivery stream

    The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

    To create a delivery stream:

    1. Open the Kinesis Data Firehose console
    2. Choose Create delivery stream.
    3. For Delivery stream name, type AuroraChangesToS3.
    4. For Source, choose Direct PUT.
    5. For Record transformation, choose Disabled.
    6. For Destination, choose Amazon S3.
    7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
    8. Enter a prefix if needed, and choose Next.
    9. For Data compression, choose GZIP.
    10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
    11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
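
    If you prefer to script this step, the same delivery stream can be created with boto3. This is a minimal sketch, assuming the S3 bucket and a Firehose-writable IAM role already exist; both ARNs are placeholders.

    import boto3

    firehose = boto3.client('firehose')

    # Placeholders: replace with your bucket ARN and an IAM role that can write to it
    firehose.create_delivery_stream(
        DeliveryStreamName='AuroraChangesToS3',
        DeliveryStreamType='DirectPut',
        ExtendedS3DestinationConfiguration={
            'RoleARN': 'arn:aws:iam::XXXXXXXXXXXX:role/firehoseS3Role',
            'BucketARN': 'arn:aws:s3:::my-cdc-bucket',
            'Prefix': 'CDC/',
            'CompressionFormat': 'GZIP',  # matches the GZIP setting chosen above
        }
    )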


    Creating a Lambda function

    Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

    To create the Lambda function:

    1. Open the AWS Lambda console.
    2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
    3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
    4. Choose Author from scratch.
    5. Give your function a name, and select Python 3.6 for Runtime.
    6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
    7. Choose Next on the trigger selection screen.
    8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
    9. Choose File -> Save in the code editor and then choose Save.
    import boto3

    firehose = boto3.client('firehose')
    stream_name = 'AuroraChangesToS3'


    def Kinesis_publish_message(event, context):

        # Build a CSV line from the fields forwarded by the Aurora trigger
        firehose_data = ("%s,%s,%s,%s,%s,%s,%s,%s\n" % (event['ItemID'],
            event['Category'], event['Price'], event['Quantity'],
            event['OrderDate'], event['DestinationState'], event['ShippingType'],
            event['Referral']))

        firehose_data = {'Data': str(firehose_data)}
        print(firehose_data)

        # Forward the record to the Kinesis Data Firehose delivery stream
        firehose.put_record(DeliveryStreamName=stream_name,
                            Record=firehose_data)

    Note the Amazon Resource Name (ARN) of this Lambda function.

    Giving Aurora permissions to invoke a Lambda function

    To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

    Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
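
    As a rough sketch, the same wiring can also be done with boto3. The cluster, parameter group, and role identifiers below are placeholders, and aws_default_lambda_role is the Aurora MySQL cluster parameter described in the documentation linked above; verify the parameter name for your engine version.

    import boto3

    rds = boto3.client('rds')

    # Attach the IAM role (placeholder ARN) to the Aurora cluster
    rds.add_role_to_db_cluster(
        DBClusterIdentifier='my-aurora-cluster',
        RoleArn='arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambdaRole'
    )

    # Point the cluster at that role for Lambda calls via its parameter group
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName='my-aurora-cluster-params',
        Parameters=[{
            'ParameterName': 'aws_default_lambda_role',
            'ParameterValue': 'arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambdaRole',
            'ApplyMethod': 'immediate',
        }]
    )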

    Creating a stored procedure and a trigger in Amazon Aurora

    Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

    DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
    DELIMITER ;;
    CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255),
                                      IN Category varchar(255),
                                      IN Price double(10,2),
                                      IN Quantity int(11),
                                      IN OrderDate timestamp,
                                      IN DestinationState varchar(2),
                                      IN ShippingType varchar(255),
                                      IN Referral varchar(255)) LANGUAGE SQL
    BEGIN
      CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis',
         CONCAT('{ "ItemID" : "', ItemID,
                '", "Category" : "', Category,
                '", "Price" : "', Price,
                '", "Quantity" : "', Quantity,
                '", "OrderDate" : "', OrderDate,
                '", "DestinationState" : "', DestinationState,
                '", "ShippingType" : "', ShippingType,
                '", "Referral" : "', Referral, '"}')
         );
    END
    ;;
    DELIMITER ;

    Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

    DROP TRIGGER IF EXISTS TR_Sales_CDC;

    DELIMITER ;;
    CREATE TRIGGER TR_Sales_CDC
      AFTER INSERT ON Sales
      FOR EACH ROW
    BEGIN
      SELECT NEW.ItemID, NEW.Category, NEW.Price, NEW.Quantity, NEW.OrderDate,
             NEW.DestinationState, NEW.ShippingType, NEW.Referral
      INTO @ItemID, @Category, @Price, @Quantity, @OrderDate,
           @DestinationState, @ShippingType, @Referral;
      CALL CDC_TO_FIREHOSE(@ItemID, @Category, @Price, @Quantity, @OrderDate,
                           @DestinationState, @ShippingType, @Referral);
    END
    ;;
    DELIMITER ;

    If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

    Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.
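
    A quick way to confirm delivery is to list the objects that Firehose has written under the stream’s prefix, as in this minimal sketch (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client('s3')

    # List the objects Firehose has delivered under the CDC/ prefix
    response = s3.list_objects_v2(Bucket='my-cdc-bucket', Prefix='CDC/')
    for obj in response.get('Contents', []):
        print(obj['Key'], obj['Size'])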

    Every time a new record is inserted in the Sales table, the stored procedure is invoked and the new row’s data is delivered to Amazon S3.

    Querying data in Amazon Redshift

    In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

    Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

    Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other formats and compression methods in the works.

    First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

    Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

    In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

    Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.

    Next, connect to the Amazon Redshift cluster, and create an external schema and database:

    create external schema if not exists spectrum_schema
    from data catalog 
    database 'spectrum_db' 
    region 'us-east-1'
    IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
    create external database if not exists;

    Don’t forget to replace the IAM role in the statement.

    Then create an external table within the database:

     CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
      ItemID int,
      Category varchar,
      Price DOUBLE PRECISION,
      Quantity int,
      OrderDate TIMESTAMP,
      DestinationState varchar,
      ShippingType varchar,
      Referral varchar)
    ROW FORMAT DELIMITED
          FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    LOCATION 's3://{BUCKET_NAME}/CDC/'

    Query the table, and it should contain data. This is a fact table.

    select top 10 * from spectrum_schema.ecommerce_sales


    Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

    CREATE TABLE date_dimension (
      d_datekey           integer       not null sortkey,
      d_dayofmonth        integer       not null,
      d_monthnum          integer       not null,
      d_dayofweek         varchar(10)   not null,
      d_prettydate        date          not null,
      d_quarter           integer       not null,
      d_half              integer       not null,
      d_year              integer       not null,
      d_season            varchar(10)   not null,
      d_fiscalyear        integer       not null)
    diststyle all;

    Populate the table with data:

    copy date_dimension from 's3://reparmar-lab/2016dates' 
    iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
    DELIMITER ','
    dateformat 'auto';

    The date dimension table should look like the following:

    Querying data in local and external tables using Amazon Redshift

    Now that you have the fact and dimension table populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

    select sum(quantity*price) as total_sales, date_dimension.d_season
    from spectrum_schema.ecommerce_sales 
    join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
    group by date_dimension.d_season

    You get the following results:

    Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

    With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

    Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.
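
    To make that concrete, here is a hedged sketch of what a partitioned variant of the external table could look like. It is not part of this tutorial’s pipeline: the table name and partition layout are illustrative, and psycopg2 stands in for whatever SQL client you normally use with Amazon Redshift.

    import psycopg2  # assumption: any Redshift-compatible SQL client works here

    # External-table DDL cannot run inside a transaction, so enable autocommit
    conn = psycopg2.connect(host='REDSHIFT_ENDPOINT', port=5439,
                            dbname='DB', user='USER', password='PASSWORD')
    conn.autocommit = True
    cur = conn.cursor()

    # Hypothetical partitioned twin of ecommerce_sales; the partition column
    # replaces OrderDate in the regular column list
    cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales_partitioned(
      ItemID int,
      Category varchar,
      Price DOUBLE PRECISION,
      Quantity int,
      DestinationState varchar,
      ShippingType varchar,
      Referral varchar)
    PARTITIONED BY (OrderDate date)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 's3://{BUCKET_NAME}/CDC-partitioned/'
    """)

    # Register one partition; queries filtering on OrderDate then skip the rest
    cur.execute("""
    ALTER TABLE spectrum_schema.ecommerce_sales_partitioned
    ADD PARTITION (OrderDate='2016-01-01')
    LOCATION 's3://{BUCKET_NAME}/CDC-partitioned/2016-01-01/'
    """)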

    Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

    Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

    After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

    Enter the database connection details, validate the connection, and create the data source.

    Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

    Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

    On the next screen, choose Edit.

    In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.

    Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.

    After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

    Final notes

    Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

    In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements.

    Keep the following in mind:

    • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic. This would result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
    • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
    • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

    In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

    For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

    If you have questions or suggestions, please comment below.


    Additional Reading

    If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


    About the Authors

    Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.


    A hedgehog cam or two

    Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/a-hedgehog-cam-or-two/

    Here we are, hauling ourselves out of the Christmas and New Year holidays and into January proper. It’s dawning on me that I have to go back to work, even though it’s still very cold and gloomy in northern Europe, and even though my duvet is lovely and warm. I found myself envying beings that hibernate, and thinking about beings that hibernate, and searching for things to do with hedgehogs. And, well, the long and the short of it is, today’s blog post is a short meditation on the hedgehog cam.

    A hedgehog in a garden, photographed in infrared light by a hedgehog cam

    Success! It’s a hedgehog!
    Photo by Andrew Wedgbury

    Hedgehog watching

    Someone called Barker has installed a Raspberry Pi–based hedgehog cam in a location with a distant view of a famous Alp, and as well as providing live views by visible and infrared light for the dedicated and the insomniac, they also make a sped-up version of the previous night’s activity available. With hedgehogs usually being in hibernation during January, you mightn’t see them in any current feed — but don’t worry! You’re guaranteed a few hedgehogs on Barker’s website, because they have also thrown in some lovely GIFs of hoggy (and foxy) divas that their camera captured in the past.

    A Hedgehog eating from a bowl on a patio, captured by a hedgehog cam

    Nom nom nom!
    GIF by Barker’s Site

    Build your own hedgehog cam

    For pointers on how to replicate this kind of setup, you could do worse than turn to Andrew Wedgbury’s hedgehog cam write-up. Andrew’s Twitter feed reveals that he’s a Cambridge local, and there are hints that he was behind RealVNC’s hoggy mascot for Pi Wars 2017.

    RealVNC on Twitter

    Another day at the office: testing our #PiWars mascot using a @Raspberry_Pi 3, #VNC Connect and @4tronix_uk Picon Zero. Name suggestions? https://t.co/iYY3xAX9Bk

    Our infrared bird box and time-lapse camera resources will also set you well on the way towards your own custom wildlife camera. For a kit that wraps everything up in a weatherproof enclosure made with love, time, and serious amounts of design and testing, take a look at Naturebytes’ wildlife cam kit.

    Or, if you’re thinking that a robot mascot is more dependable than real animals for the fluffiness you need in order to start your January with something like productivity and with your soul intact, you might like to put your own spin on our robot buggy.

    Happy 2018

    While we’re on the subject of getting to grips with the new year, do take a look at yesterday’s blog post, in which we suggest a New Year’s project that’s different from the usual resolutions. However you tackle 2018, we wish you an excellent year of creative computing.

    The post A hedgehog cam or two appeared first on Raspberry Pi.

    Filmmakers Want The Right to Break DRM and Rip Blu-Rays

    Post Syndicated from Ernesto original https://torrentfreak.com/filmmakers-want-the-right-to-break-drm-and-rip-blu-rays-171228/

    The major movie studios are doing everything in their power to stop the public from copying films.

    While nearly every movie and TV-show leaks on the Internet, these companies still see DRM as a vital tool to prevent piracy from spiraling out of control.

    Technically speaking it’s not hard to rip a DVD or Blu-Ray disc nowadays, and the same is true for ripping content from Netflix or YouTube. However, people who do this are breaking the law.

    The DMCA’s anti-circumvention provisions specifically forbid it. There are some exemptions, for educational use for example, and to allow for other types of fair use, but the line between legal and illegal is not always clear.

    Interestingly, filmmakers are not happy with the current law either. They often want to use small pieces of other videos in their films, but under the current exemptions, this is only permitted for documentaries.

    The International Documentary Association, Kartemquin Films, Independent Filmmaker Project, University of Film and Video Association and several other organizations hope this will change.

    In a comment to the Copyright Office, which is currently considering updates to the exemptions, they argue that all filmmakers should be allowed to break DRM and rip Blu-Rays.

    According to the filmmakers, the documentary genre is vaguely defined. This leads to a lot of confusion about whether or not the exemptions apply. They, therefore, suggest applying the exemption to all filmmakers, instead of criminalizing those who don’t identify themselves as documentarians.

    “Since 2010, exemptions applicable to documentary filmmaking have been in effect. This exemption has helped many filmmakers, and there has been neither evidence nor any allegation that this exemption has harmed rightsholders in any way.

    “There is no reason this would change if the ‘documentary’ limitation were removed. All filmmakers regularly need access to footage on DVDs and without an exemption to DVDs, many non-infringing uses simply cannot be made,” the groups add.

    The submission includes letters from several filmmakers who explain why an exemption would be crucial to them.

    Filmmakers Steve Boettcher and Mike Trinklein explain that they refrained from making a film how they wanted it to be, fearing legal trouble. Their film included a lot of drama elements and was not a typical documentary.

    “Given the significant amount of drama in the film [we are working on], we decided early on that our storytelling toolbox could not include fair use of materials from DVD or Blu-ray, because the exemption did not cover accessing that material for use in a drama,” they write.

    “Already, we were hindered in our ability to tell these stories. So, there is already a chilling effect in that a drama-heavy documentary might be seen as a drama outright, and thus under a different set of rules.”

    Another filmmaker, who wants to remain anonymous, plans on making a hybrid documentary/narrative feature about a famous film duo. Without ripping the clips he needs, this movie is never going to be made.

    “I am unsure of whether my project would fall under the exemption because it is a combination of documentary and narrative, and my fear of a lawsuit once my project is publicly viewed and distributed stops me from ripping from these sources.”

    These are just two of many examples where filmmakers show that they need to break DRM and rip content to make the work they want.

    The MPAA and others have previously argued that these changes are not required. Instead, they pointed out that people could point their cameras or phones at the screen to record something, or use screen capture software.

    However, these are not viable alternatives according to the filmmakers, as the quality is inferior. They, therefore, call on the Copyright Office to expand the exemption to cover all films and filmmakers.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Gamers Want DMCA Exemption for ‘Abandoned’ Online Games

    Post Syndicated from Ernesto original https://torrentfreak.com/gamers-want-dmca-exemption-for-abandoned-online-games-171221/

    The U.S. Copyright Office is considering whether or not to update the DMCA’s anti-circumvention provisions, which prevent the public from tinkering with DRM-protected content and devices.

    These provisions are renewed every three years. To allow individuals and organizations to chime in, the Office traditionally launches a public consultation, before it makes any decisions.

    This week a series of new responses were received and many of these focused on abandoned games. As is true for most software, games have a limited lifespan, so after a few years they are no longer supported by manufacturers.

    To preserve these games for future generations and nostalgic gamers, the Copyright Office previously included game preservation exemptions. This means that libraries, archives and museums can use emulators and other circumvention tools to make old classics playable.

    However, these exemptions are limited and do not apply to games that require a connection to an online server, which includes most recent games. When the online servers are taken down, the game simply disappears forever.

    This should be prevented, according to The Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California.

    “Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE writes in its comment to the Copyright Office.

    “Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

    During the previous review, similar calls for an online exemption were made but, at the time, the Register of Copyrights noted that multiplayer games could still be played on local area networks.

    “Today, however, local multiplayer options are increasingly rare, and many games no longer support LAN connected multiplayer capability,” MADE counters, adding that nowadays even some single-player games require an online connection.

    “More troubling still to archivists, many video games rely on server connectivity to function in single-player mode and become unplayable when servers shut down.”

    MADE asks the Copyright Office to extend the current exemptions and include games with an online connection as well. This would allow libraries, archives, and museums to operate servers for these abandoned games and keep them alive.

    The nonprofit museum is not alone in its call, with digital rights group Public Knowledge submitting a similar comment. They also highlight the need to preserve online games. Not just for nostalgic gamers, but also for researchers and scholars.

    This issue is more relevant than ever before, as hundreds of online multiplayer games have been abandoned already.

    “It is difficult to quantify the number of multiplayer servers that have been shut down in recent years. However, Electronic Arts’ ‘Online Services Shutdown’ list is one illustrative example,” Public Knowledge writes.

    “The list — which is littered with popular franchises such as FIFA World Cup, Nascar, and The Sims — currently stands at 319 games and servers discontinued since 2013, or just over one game per week since 2012.”

    Finally, several ‘regular’ gaming fans have also made their feelings known. While their arguments are usually not as elaborate, the personal pleasure people still get out of older games can’t be overstated.

    “I have been playing video games since the Atari 2600, for 35 years. Nowadays, game ‘museums’ — getting the opportunity to replay games from my youth, and share them with my child — are a source of joy for me,” one individual commenter wrote.

    “I would love the opportunity to explore some of the early online / MMO games that I spent so much time on in the past!”

    Game on?

    Header image via MMOs.com

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Students and Youths Offered $10 to Pirate Latest Movies in Cinemas

    Post Syndicated from Andy original https://torrentfreak.com/students-and-youths-offered-10-to-pirate-latest-movies-in-cinemas-171219/

    In common with most other countries, demand for movies is absolutely huge in India. According to a 2015 report, the country produces between 1,500 and 2,000 movies each year, more than any other country in the world.

    But India also has a huge piracy problem. If a movie is worth watching, it’s pirated extremely quickly, mostly within a couple of days of release, often much sooner. These early copies ordinarily come from “cams” – recordings made in cinemas – which are sold on the streets for next to nothing and eagerly snapped up by citizens who, incidentally, are served by ten times fewer cinema screens than their US counterparts.

    These cam copies have to come from somewhere and according to representatives from the local Anti-Video Piracy Committee, piracy groups have begun to divert “camming” duties to outsiders, effectively decentralizing their operations.

    Their targets are said to be young people with decent mobile phones, students in particular. Along with China, India now has more than a billion phone users, so there’s no shortage of candidates.

    “The offer to youngsters is that they would get 10 US dollars into their bank accounts, if they videographed and sent it on the first day of release of the film,” says Raj Kumar, Telugu Film Chamber of Commerce representative and Anti-Video Piracy Committee chairman.

    “The minors and youngsters are getting attracted to the money, not knowing that piracy is a crime,” he adds.

    Although US$10 sounds like a meager amount, for many locals the offer is significant. According to figures from 2014, the average daily wage in India is just 272 Indian Rupees (US$4.24) so, for an hour or two’s ‘work’ sitting in a cinema with a phone, a student can, in theory, earn more than he can in two days of employment.

    The issue of youth “camming” came up yesterday during a meeting of film producers, Internet service providers and cybercrime officials convened by IT and Industries Secretary Jayesh Ranjan.

    The meeting heard that the Telangana State government will soon have its own special police officers and cybercrime experts to tackle the growing problem of pirate sites, who will take them down if necessary.

    “The State government has adopted a no-tolerance policy towards online piracy of films and will soon have a plan in place to tackle and effectively curb piracy. We need to adopt strong measures and countermeasures to weed out all kinds of piracy,” Ranjan said.

    The State already has its own Intellectual Property Crimes Unit (IPCU) but local officials have complained that not enough is being done to curb huge losses faced by the industry. There have been successes, however.

    Cybercrime officials previously tracked down individuals said to have been involved in the piracy of the spectacular movie Baahubali 2 – The Conclusion which became the highest grossing Indian film ever just six days after its release earlier this year. But despite the efforts and successes, the basics appear to elude Indian anti-piracy forces.

    During October 2017, a 4K copy of Baahubali 2 was uploaded to YouTube and has since racked up an astonishing 54.7m views to the delight of a worldwide audience, many of them enjoying the best of Indian cinema for the first time – for free.

    Still, the meeting Monday found that sites offering pirated Indian movies should be targeted and brought to their knees.

    “In the meeting, the ISPs too were asked to designate a nodal officer who can keep a watch over websites which upload such data onto their websites and bring them down,” a cybercrime police officer said.

    Next stop, YouTube?

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    VPN Server Seized to Investigate Russian Ambassador’s Assassination

    Post Syndicated from Ernesto original https://torrentfreak.com/vpn-server-seized-to-investigate-russian-ambassadors-assassination-1171219/

    VPNs are valuable tools for people who want to use the Internet securely and maintain their anonymity. They are vital for whistleblowers and people who rebel against Government oppression.

    As with any online service, they can also be used for criminal purposes. According to Turkish news sources, this is also what happened following the assassination of Andrei Karlov, the Russian Ambassador to Turkey, exactly one year ago.

    Karlov was shot dead in Ankara by Mevlüt Mert Altıntaş, an off-duty Turkish police officer. While that much is clear, the investigation into the assassination is not closed yet.

    When the authorities tried to find links to other people that may have been involved, they found out that the policeman’s Gmail and Facebook had been deleted. This happened remotely over a VPN connection, operated by ExpressVPN.

    To find out more, the authorities raided the datacenter and seized the server through which the connection went. This all happened last January, but the information just came out today.

    Like many other VPN services nowadays, ExpressVPN doesn’t store any logs, and this is what the investigators soon found out as well. An inspection of the server in question yielded no useful information.

    Following the seizure, an investigator also reached out to ExpressVPN directly, asking for logs. The VPN provider is incorporated in the British Virgin Islands and only responds to local court orders, but the investigator was informed that they don’t store connection or activity logs.

    “As we stated to Turkish authorities in January 2017, ExpressVPN does not and has never possessed any customer connection logs that would enable us to know which customer was using the specific IPs cited by the investigators,” ExpressVPN writes in a statement.

    “Furthermore, we were unable to see which customers accessed Gmail or Facebook during the time in question, as we do not keep activity logs. We believe that the investigators’ seizure and inspection of the VPN server in question confirmed these points.”

    Speaking with TorrentFreak, the VPN provider mentions that they’ve had physical server seizures in the past, but generally not more than a few times per year.

    These seizures are not announced in public, but the company stresses that user anonymity is their highest priority.

    “While we don’t have a policy of announcing such incidents, we’ve designed our technology to ensure that VPN servers do not possess logs which would enable a third party to determine sensitive information about our users, such as their VPN activity or connections.

    “A physical server seizure is therefore highly unlikely to provide relevant information to someone trying to determine data about specific usage,” ExpressVPN tells us.

    Incidents like these show that decent VPNs do what they set out to do. They safeguard the privacy of their users, something that, like the Internet in general, can be used for good and bad.

    It also highlights the importance of the server location. When servers are operated by third-party companies in foreign jurisdictions, they can be easily targeted, or perhaps even worse, monitored.

    ExpressVPN told TorrentFreak that after the seizure incident in Turkey, the company decided to no longer use physical servers in Turkey. Instead, they provide a virtual location with Turkey-registered IP addresses pointing to VPN servers hosted in the Netherlands.

    The VPN provider regrets that its services were used for unlawful purposes but says that its policies will remain the same.

    “While it’s unfortunate that security tools like VPNs can be abused for illicit purposes, they are critical for our safety and the preservation of our right to privacy online. ExpressVPN is fundamentally opposed to any efforts to install ‘backdoors’ or attempts by governments to otherwise undermine such technologies,” the company concludes.

    Disclosure: ExpressVPN is a TorrentFreak sponsor

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Rosie the Countdown champion

    Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rosie-the-countdown-champion/

    Beating the contestants at Countdown: is it cheating if you happen to know every word in the English dictionary?

    Rosie plays Countdown

    Allow your robots to join in the fun this Christmas with a round of Channel 4’s Countdown. https://www.rosietheredrobot.com/2017/12/tea-minus-30.html

    Rosie the Red Robot

    First, a little bit of backstory. Challenged by his eldest daughter to build a robot, technology-loving Alan got to work building Rosie.

    I became (unusually) determined. I wanted to show her what can be done… and the how can be learnt later. After all, there is nothing more exciting and encouraging than seeing technology come alive. Move. Groove. Quite literally.

    Originally, Rosie had a Raspberry Pi 3 brain controlling ultrasonic sensors and motors via Python. From there, she has evolved into something much grander, and Alan has documented her upgrades on the Rosie the Red Robot blog. Using GPS trackers and a Raspberry Pi camera module, she became Rosie Patrol, a rolling, walking, interactive bot; then, with further upgrades, the Tea Minus 30 project came to be. Which brings us back to Countdown.

    T(ea) minus 30

    In case it hasn’t been a big part of your life up until now, Countdown is one of the longest-running television shows in history, and occupies a special place in British culture. Contestants take turns to fill a board with nine randomly selected vowels and consonants, before battling the Countdown clock to find the longest word they can in the space of 30 seconds.

    The Countdown Clock

    I’ve had quite a few requests to show just the Countdown clock for use in school activities/own games etc., so here it is! Enjoy! It’s a brand new version too, using the 2010 Office package.

    There’s a numbers round involving arithmetic, too – but for now, we’re going to focus on letters and words, because that’s where Rosie’s skills shine.

    Using an online resource, Alan created a dataset of the ten thousand most common English words.


    Many words, listed in order of common-ness. Alan wrote a Python script to order them alphabetically and by length

    Next, Alan wrote a Python script to select nine letters at random, then search the word list to find all the words that could be spelled using only these letters. He used the randint function to select letters from a pre-loaded alphabet, and introduced a requirement to include at least two vowels among the nine letters.
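
    Alan’s exact code isn’t reproduced in the write-up, but the logic is easy to sketch. Here is a minimal, hypothetical version (the function names and the stand-in word list are mine): pick nine letters with at least two vowels, then keep every word that can be built without using any letter more times than it appears.

    import random
    from collections import Counter

    VOWELS = "AEIOU"
    CONSONANTS = "BCDFGHJKLMNPQRSTVWXYZ"

    def pick_letters(n=9, min_vowels=2):
        # Guarantee the vowels first, then fill the rest from the whole alphabet
        letters = [random.choice(VOWELS) for _ in range(min_vowels)]
        letters += [random.choice(VOWELS + CONSONANTS) for _ in range(n - min_vowels)]
        random.shuffle(letters)
        return letters

    def matching_words(letters, words):
        # A word matches if it never needs more copies of a letter than are available
        available = Counter(letters)
        return [w for w in words if not Counter(w.upper()) - available]

    words = ["ROBOT", "ROSIE", "RED", "BOARD"]  # stand-in for the 10,000-word list
    letters = pick_letters()
    print(letters)
    print(max(matching_words(letters, words), key=len, default=None))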


    Words that match the available letters are displayed on the screen.


    Putting it all together

    With the basic game-play working, it was time to bring the project to life. For this, Alan used Rosie’s camera module, along with optical character recognition (OCR) and text-to-speech capabilities.


    Alan writes, “Here’s a very amateurish drawing to brainstorm our idea. Let’s call it a design as it makes it sound like we know what we’re doing.”

    Alan’s script has Rosie take a photo of the TV screen during the Countdown letters round, then perform OCR using the Google Cloud Vision API to detect the nine letters contestants have to work with. Next, Rosie runs Alan’s code to check the letters against the ten-thousand-word dataset, converts text to speech with Python gTTS, and finally speaks her highest-scoring word via omxplayer.
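
    Condensed into a sketch, the pipeline looks roughly like this. It is hypothetical glue code rather than Alan’s actual script: it assumes the picamera, google-cloud-vision, and gTTS packages, and the exact Vision client API varies a little between library versions.

    import subprocess
    from picamera import PiCamera
    from google.cloud import vision
    from gtts import gTTS

    def photograph_screen(path='board.jpg'):
        # Snap the TV screen with the Pi camera module
        camera = PiCamera()
        camera.capture(path)
        camera.close()
        return path

    def ocr_letters(path):
        # Detect text with the Google Cloud Vision API and keep only letters
        client = vision.ImageAnnotatorClient()
        with open(path, 'rb') as f:
            image = vision.Image(content=f.read())
        response = client.text_detection(image=image)
        text = response.text_annotations[0].description if response.text_annotations else ''
        return [c for c in text.upper() if c.isalpha()][:9]

    def speak(word, path='word.mp3'):
        # Convert the winning word to speech and play it through omxplayer
        gTTS(text=word, lang='en').save(path)
        subprocess.call(['omxplayer', path])

    letters = ocr_letters(photograph_screen())
    # feed `letters` into the word-matching step sketched earlier, then speak() the result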

    You can follow the adventures of Rosie the Red Robot on her blog, or follow her on Twitter. And if you’d like to build your own Rosie, Alan has provided code and tutorials for his projects too. Thanks, Alan!

    The post Rosie the Countdown champion appeared first on Raspberry Pi.

    US Government Teaches Anti-Piracy Skills Around The Globe

    Post Syndicated from Ernesto original https://torrentfreak.com/us-government-teaches-anti-piracy-skills-around-the-globe-171217/

    Online piracy is a global issue. Pirate sites and services tend to operate in multiple jurisdictions and are purposefully set up to evade law enforcement.

    This makes it hard for police from one country to effectively crack down on a site in another. International cooperation is often required, and the US Government is one of the leaders on this front.

    The US Department of Justice (DoJ) has quite a bit of experience in tracking down pirates and they are actively sharing this knowledge with countries that can use some help. This goes far beyond the occasional seminar.

    A diplomatic cable obtained through a Freedom of Information request provides a relatively recent example of these efforts. The document gives an overview of anti-piracy training, provided and funded by the US Government, during the fall of 2015.

    “On November 24 and 25, prosecutors and investigators from Romania, Moldova, Bulgaria, and Turkey participated in a two-day, US. Department of Justice (USDOJ)-sponsored training program on combatting online piracy.

    “The program updated participants on legal issues, including data retention legislation, surrounding the investigation and prosecution of online piracy,” the cable adds.

    According to the cable, piracy has become a very significant problem in Eastern Europe, costing rightsholders and governments millions of dollars in revenues. After the training, local law enforcement officers in these countries should be better equipped to deal with the problem.

    Pirates Beware

    The event was put together with help from various embassies and among the presenters were law enforcement professionals from around the world.

    The Director of the DoJ’s CCIPS Cybercrime Laboratory was among the speakers. He gave training on computer forensics and participants were provided with various tools to put this to use.

    “Participants were given copies of forensic tools at the conclusion of the program so that they could put to use some of what they saw demonstrated during the training,” the cable reads.

    While catching pirates can be quite hard already, getting them convicted is a challenge as well. Increasingly we’ve seen criminal complaints using non-copyright claims to have site owners prosecuted.

    By using money laundering and tax offenses, pirates can receive tougher penalties. This was one of the talking points during the training as well.

    “Participants were encouraged to consider the use of statutes such as money laundering and tax evasion, in addition to those protecting copyrights and trademarks, since these offenses are often punished more severely than standalone intellectual property crimes.”

    The cable, written by the US Embassy in Bucharest, provides a lot of detail about the two-day training session. It’s also clear on the overall objective. The US wants to increase the likelihood that pirate sites are brought to justice. Not only in the homeland, but around the globe.

    “By focusing approximately forty investigators and prosecutors from four countries on how they can more effectively attack rogue sites, and by connecting rights holders and their investigators with law enforcement, the chances of pirates being caught and held accountable have increased.”

    While it’s hard to link the training to any concrete successes, Romanian law enforcement did shut down the country’s leading pirate site a few months later. As with a previous case in Romania, which involved the FBI, money laundering and tax evasion allegations were expected.

    While it’s not out of the ordinary for international law enforcers to work together, it’s notable how coordinated the US efforts are. Earlier this week we wrote about the US pressure on Sweden to raid The Pirate Bay. And these are not isolated incidents.

    While the US Department of Justice doesn’t reveal all details of its operations, it is very open about its global efforts to protect Intellectual Property.

    Around the world..

    The DoJ’s Computer Crime and Intellectual Property Section (CCIPS) has relationships with law enforcement worldwide and regularly provides training to foreign officers.

    A crucial part of the Department’s international enforcement activities is the Intellectual Property Law Enforcement Coordinator (IPLEC) program, which started in 2006.

    Through IPLECs, the department now has Attorneys stationed in Thailand, Hong Kong, Romania, Brazil, and Nigeria. These Attorneys keep an eye on local law enforcement and provide assistance and training, to protect US copyright holders.

    “Our strategically placed coordinators draw upon their subject matter expertise to help ensure that property holders’ rights are enforced across the globe, and that the American people are protected from harmful products entering the marketplace,” Acting Assistant Attorney General John Cronan of the Criminal Division said just last Friday.

    Or to end with the title of the Romanian cable: ‘Pirates beware!’

    The cable cited here was made available in response to a Freedom of Information request, which was submitted by Rachael Tackett and shared with TorrentFreak. It starts at page 47 of document 2.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    FCC Repeals U.S. Net Neutrality Rules

    Post Syndicated from Ernesto original https://torrentfreak.com/fcc-repeals-u-s-net-neutrality-rules-171214/

    In recent months, millions of people have protested the FCC’s plan to repeal U.S. net neutrality rules, which were put in place by the Obama administration.

    However, an outpouring of public outrage, critique from major tech companies, and even warnings from pioneers of the Internet had no effect.

    Today the FCC voted to repeal the old rules, effectively ending net neutrality.

    Under the net neutrality rules that have been in effect during recent years, ISPs were specifically prohibited from blocking, throttling, and paid prioritization of “lawful” traffic. In addition, Internet providers could be regulated as carriers under Title II.

    Now that these rules have been repealed, Internet providers have more freedom to experiment with paid prioritization. Under the new guidelines, they can charge customers extra for access to some online services, or throttle certain types of traffic.

    Most critics of the repeal fear that, now that the old net neutrality rules are in the trash, ‘fast lanes’ for some services, and throttling for others, will become commonplace in the U.S.

    This could also mean that BitTorrent traffic becomes a target once again. After all, it was Comcast’s ‘secretive’ BitTorrent throttling that started the broader net neutrality debate, now ten years ago.

    Comcast’s throttling history is a sensitive issue, also for the company itself.

    Before the Obama-era net neutrality rules, the ISP vowed that it would no longer discriminate against specific traffic classes. Ahead of the FCC vote yesterday, it doubled down on this promise.

    “Despite repeated distortions and biased information, as well as misguided, inaccurate attacks from detractors, our Internet service is not going to change,” writes David Cohen, Comcast’s Chief Diversity Officer.

    “We have repeatedly stated, and reiterate today, that we do not and will not block, throttle, or discriminate against lawful content.”

    It’s worth highlighting the term “lawful” in the last sentence. It is by no means a promise that pirate sites won’t be blocked.

    As we’ve highlighted in the past, blocking pirate sites was already an option under the now-repealed rules. The massive copyright loophole made sure of that. Targeting all torrent traffic is even an option, in theory.

    That said, today’s FCC vote certainly makes it easier for ISPs to block or throttle BitTorrent traffic across the entire network. For the time being, however, there are no signs that any ISPs plan to do so.

    If they do, we will know soon enough. The FCC requires all ISPs to be transparent under the new plan. They have to disclose network management practices, blocking efforts, commercial prioritization, and the like.

    And with the current focus on net neutrality, ISPs are likely to tread carefully, or else they might just face an exodus of customers.

    Finally, it’s worth highlighting that today’s vote is not the end of the road yet. Net neutrality supporters are planning to convince Congress to overturn the repeal. In addition, there is also talk of taking the matter to court.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

    Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

    HAMR drive illustration

    During Q4, Backblaze deployed 100 petabytes worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain to tell us how Seagate engineers are developing the HAMR technology and making it market ready starting in late 2018.

    What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

    Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

    Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relatively near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

    HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

    It sounds almost simple, but the science and engineering expertise required to perfect this technology, through research, experimentation, lab development and product development, has been enormous. Below is an overview of the HAMR technology, and you can dig into the details in our technical brief, which provides a point-by-point rundown describing several key advances enabling the HAMR design.

    As much time and resources as have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

    History of Increasing Storage Capacity

    For the last 50 years areal density in the hard disk drive has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping looking for a hard drive just like the one they bought two years ago. The demands of increasing data on storage capacities inevitably increase, thus the technology constantly evolves.

    According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of data that can be stored in a given area, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

    Why do we Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies do the Job?

    Why is HAMR’s increased data density so important?

    Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (that is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and critically, the evolution of data from playing a background role in business to playing a life-critical role.

    Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbits, Alexa devices and smartphones to home security systems, solar power systems and autonomous cars — is fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure like power grids, water systems, hospitals, roads and public transportation all demand, and add to, the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

    All of this entails a significant infrastructure cost behind the scenes to feed the insatiable global appetite for data storage. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60TB 3.5-inch SSD, for example), high-capacity hard drives remain the foundation of our interconnected, cloud- and IoT-based dependence on data.

    HAMR Hard Drive Technology

    Seagate has been working on heat-assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we've made many breakthroughs: making reliable near-field transducers, developing special high-capacity HAMR media, and figuring out a way to put a laser no larger than a grain of salt on each and every head.

    The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

    A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

    Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

    Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
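    To get a feel for how areal density translates into capacity, here is a rough illustration of our own. The radii and platter count below are assumed ballpark values for a 3.5-inch drive, not Seagate specifications, and formatting overhead is ignored:

    ```python
    import math

    # Assumed ballpark geometry for a 3.5-inch platter (illustrative only)
    areal_density_tbpsi = 5.0   # projected HAMR figure cited above, Tb/in^2
    outer_radius_in = 1.8       # assumed usable outer radius, inches
    inner_radius_in = 0.6       # assumed usable inner radius, inches
    platters = 9                # assumed platter count for a high-capacity drive

    # Usable recording area is an annulus
    area_sq_in = math.pi * (outer_radius_in**2 - inner_radius_in**2)

    # Two recording surfaces per platter; 8 bits per byte
    tb_per_platter = areal_density_tbpsi * area_sq_in * 2 / 8

    print(f"usable area: {area_sq_in:.1f} sq in per surface")
    print(f"~{tb_per_platter:.0f} TB per platter")           # ~11 TB
    print(f"~{platters * tb_per_platter:.0f} TB per drive")  # ~100 TB
    ```

    At 10 Tbpsi the same geometry would land around 200 TB, which is how 100 TB+ projections pencil out.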

    The major problem with packing bits so closely together is that on conventional magnetic media, the bits (and the data they represent) become thermally unstable and may flip. So, to help the grains maintain their stability — their ability to store bits over a long period of time — we need to develop recording media with higher coercivity. That means the media are magnetically more stable during storage, but harder to write: it takes a stronger field to flip a grain from a 0 to a 1 or vice versa.
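    This trade-off is often summarized with a rule of thumb from the magnetic recording literature (a standard textbook figure, not from Seagate's brief): a grain's anisotropy energy barrier must be many times the thermal energy for data to survive years of storage.

    ```latex
    % Rule-of-thumb thermal stability criterion for a magnetic grain
    \[
      \frac{K_u V}{k_B T} \gtrsim 40\text{--}60
    \]
    % K_u : anisotropy energy density of the medium
    % V   : grain volume
    % k_B : Boltzmann constant;  T : absolute temperature
    %
    % Shrinking grains reduces V, so keeping the ratio safe forces K_u up,
    % i.e. the media must become magnetically "harder."
    ```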

    That's why HAMR's first key hardware advance required developing a new recording media that keeps bits stable — using high-anisotropy (or "hard") magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

    In fact, the new media is so "hard" that conventional recording heads can't flip the bits, or write new data, at normal temperatures. But if you heat the tiny spot where you want to write data, you can make the media's coercive field lower than the magnetic field provided by the recording head — in other words, enable the write head to flip that bit.

    So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin-film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region spanning several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media's coercive field lower than the write head's magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit's magnetic orientation is frozen in place.
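    Schematically (our shorthand, not Seagate's published device physics), the write condition during the heat pulse is:

    ```latex
    % Write condition during the ~1 ns heat pulse
    \[
      H_{\mathrm{head}} > H_c(T_{\mathrm{write}})
    \]
    % Coercivity H_c falls steeply as the heated spot approaches the
    % medium's Curie temperature; once the spot cools back down, H_c
    % recovers and the newly written orientation is locked in.
    ```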

    Applying this dynamic nano-heating is where HAMR's famous "laser" comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.
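    Why is a near-field transducer needed at all? Ordinary far-field optics can't focus light below the diffraction limit, roughly half a wavelength, which is far too coarse for bits measured in tens of nanometers. (The wavelength and numerical aperture below are typical laser-diode ballpark values we're assuming, not Seagate specs.)

    ```latex
    % Diffraction-limited spot size for far-field optics
    \[
      d_{\min} \approx \frac{\lambda}{2\,\mathrm{NA}} \sim 400\ \mathrm{nm}
      \quad (\lambda \approx 800\ \mathrm{nm},\ \mathrm{NA} \approx 1)
    \]
    % A HAMR bit is tens of nanometers across, an order of magnitude
    % smaller, hence a plasmonic NFT, which confines optical energy
    % well below the wavelength of the light.
    ```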

    Moving HAMR Forward

    [Image: HAMR write head]

    As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a short, illustrated, point-by-point summary of HAMR's key advances.

    Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

    With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.
