Tag Archives: Business

Cloud Babble: The Jargon of Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/what-is-cloud-computing/

Cloud Babble

One of the things we in the technology business are good at is coming up with names, phrases, euphemisms, and acronyms for the stuff that we create. The Cloud Storage market is no different, and we’d like to help by illuminating some of the cloud storage related terms that you might come across. We know this is just a start, so please feel free to add in your favorites in the comments section below and we’ll update this post accordingly.

Clouds

The cloud is really just a collection of purpose-built servers. In a public cloud the servers are shared between multiple unrelated tenants. In a private cloud, the servers are dedicated to a single tenant or sometimes a group of related tenants. A public cloud is off-site, while a private cloud can be on-site or off-site – or on-prem or off-prem, if you prefer.

Both Sides Now: Hybrid Clouds

Speaking of on-prem and off-prem, there are Hybrid Clouds or Hybrid Data Clouds depending on what you need. Both are based on the idea that you extend your local resources (typically on-prem) to the cloud (typically off-prem) as needed. This extension is controlled by software that decides, based on rules you define, what needs to be done where.

A Hybrid Data Cloud is specific to data. For example, you can set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.

A Hybrid Cloud is similar to a Hybrid Data Cloud except it also extends compute. For example, at the end of the quarter, you can spin up order processing application instances off-prem as needed to add to your on-prem capacity. Of course, determining where the transactional data used and created by these applications resides can be an interesting systems design challenge.

Clouds in my Coffee: Fog

Typically, public and private clouds live in large buildings called data centers. Full of servers, networking equipment, and clean air, data centers need lots of power, lots of networking bandwidth, and lots of space. This often limits where data centers are located. The further away you are from a data center, the longer it generally takes to get your data to and from there. This is known as latency. That’s where “Fog” comes in.

Fog is often described as clouds close to the ground. Fog, in our cloud world, is basically having a “little” data center near you. This can make data storage and even cloud-based processing faster for everyone nearby. Data, and less so processing, can be transferred to/from the Fog to the Cloud when time is less of a factor. Data could also be aggregated in the Fog and sent to the Cloud. For example, your electric meter could report its minute-by-minute status to the Fog for diagnostic purposes. Then once a day the aggregated data could be sent to the power company’s Cloud for billing purposes.
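
As a minimal sketch of that meter example (all names and numbers below are invented for illustration), a fog node might collect minute-by-minute readings and forward only a compact daily summary across the middle-mile to the cloud:

```python
from statistics import mean

# Toy fog-node sketch: a meter reports minute-by-minute readings to a
# nearby fog node, which forwards a single daily summary to the cloud.

minute_readings_kwh = [0.012, 0.011, 0.015, 0.014]  # one reading per minute

daily_summary = {
    "meter_id": "meter-42",
    "total_kwh": round(sum(minute_readings_kwh), 6),
    "avg_kwh_per_minute": round(mean(minute_readings_kwh), 6),
    "samples": len(minute_readings_kwh),
}

# In a real deployment this summary, not the raw stream, is what the
# power company's cloud would receive once per day for billing.
print(daily_summary)
```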

Another term used in place of Fog is Edge, as in computing at the Edge. In either case, a given cloud (data center) usually has multiple Edges (little data centers) connected to it. The connection between the Edge and the Cloud is sometimes known as the middle-mile. The network in the middle-mile can be less robust than that required to support a stand-alone data center. For example, the middle-mile can use 1 Gbps lines, versus a data center, which would require multiple 10 Gbps lines.

Heavy Clouds No Rain: Data

We’re all aware that we are creating, processing, and storing data faster than ever before. All of this data is stored in either a structured or more likely an unstructured way. Databases and data warehouses are structured ways to store data, but a vast amount of data is unstructured – meaning the schema and data access requirements are not known until the data is queried. A large pool of unstructured data in a flat architecture can be referred to as a Data Lake.

A Data Lake is often created so we can perform some type of “big data” analysis. In an oversimplified example, let’s extend the lake metaphor a bit and ask the question: “How many fish are in our lake?” To get an answer, we take a sufficient sample of our lake’s water (data), count the number of fish we find, and extrapolate based on the size of the lake to get an answer within a given confidence interval.
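
For the curious, here’s the same back-of-the-envelope estimate in a few lines of Python (all numbers invented):

```python
# Sample a known fraction of the lake, count fish, and extrapolate.

lake_volume_liters = 5_000_000   # assumed size of the "lake" (our data set)
sample_volume_liters = 1_000     # the amount of water we actually inspected
fish_in_sample = 3               # fish counted in the sample

scale_factor = lake_volume_liters / sample_volume_liters
estimated_fish = fish_in_sample * scale_factor
print(f"Estimated fish in lake: {estimated_fish:,.0f}")  # 15,000
```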

A Data Lake is usually found in the cloud, an excellent place to store large amounts of non-transactional data. Watch out as this can lead to our data having too much Data Gravity or being locked in the Hotel California. This could also create a Data Silo, thereby making a potential data Lift-and-Shift impossible. Let me explain:

  • Data Gravity — Generally, the more data you collect in one spot, the harder it is to move. When you store data in a public cloud, you have to pay egress and/or network charges to download the data to another public cloud or even to your own on-premises systems. Some public cloud vendors charge a lot more than others, meaning that depending on your public cloud provider, your data could financially have a lot more gravity than you expected (see the sketch after this list).
  • Hotel California — This is like Data Gravity but on a smaller scale. Your data is in the Hotel California if, to paraphrase, “your data can check out any time you want, but it can never leave.” If the cost of downloading your data is limiting the things you want to do with that data, then your data is in the Hotel California. Data is generally most valuable when used, and with cloud storage that can include archived data. This assumes, of course, that the archived data is readily available, and affordable, to download. When considering a cloud storage project, always figure in the cost of using your own data.
  • Data Silo — Over the years, businesses have suffered from organizational silos as information is not shared between different groups, but instead needs to travel up to the top of the silo before it can be transferred to another silo. If your data is “trapped” in a given cloud by the cost it takes to share such data, then you may have a Data Silo, and that’s exactly opposite of what the cloud should do.
  • Lift-and-Shift — This term is used to define the movement of data or applications from one data center to another or from on-prem to off-prem systems. The move generally occurs all at once and once everything is moved, systems are operational and data is available at the new location with few, if any, changes. If your data has too much gravity or is locked in a hotel, a data lift-and-shift may break the bank.
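
To put a rough number on data gravity, here is a minimal sketch of the egress arithmetic. The rates below are entirely invented, not any vendor’s real pricing:

```python
# Rough data-gravity arithmetic: what it costs just to move data out.

dataset_tb = 100
egress_rate_per_gb = {"provider_a": 0.01, "provider_b": 0.09}  # $/GB, hypothetical

for provider, rate in egress_rate_per_gb.items():
    cost = dataset_tb * 1_000 * rate  # TB -> GB, then apply the egress rate
    print(f"{provider}: ${cost:,.0f} to move {dataset_tb} TB out")
```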

I Can See Clearly Now

Hopefully, the cloudy terms we’ve covered are, well, less cloudy. As we mentioned in the beginning, our compilation is just a start, so please feel free to add your favorite cloud term in the comments section below and we’ll update this post with your contributions. Keep your entries “clean,” and please no words or phrases that are really adverts for your company. Thanks.

The post Cloud Babble: The Jargon of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Pirate IPTV Mastermind Owns Raided Bulgarian ISP, Sources Say

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-mastermind-owns-raided-bulgarian-isp-sources-say-180117/

Last Tuesday a year-long investigation came to a climax when the Intellectual Property Crime Unit of the Cypriot Police teamed up with the Cybercrime Division of the Greek Police, the Dutch Fiscal Investigative and Intelligence Service (FIOD), the Cybercrime Unit of the Bulgarian Police, Europol’s Intellectual Property Crime Coordinated Coalition (IPC³), and the Audiovisual Anti-Piracy Alliance (AAPA), to raid a ‘pirate’ TV operation.

Official information didn’t become freely available until later in the week, but across Cyprus, Bulgaria, and Greece there were at least 17 house searches, and individuals aged 43, 44, and 53 were arrested in Cyprus and remanded in custody for seven days.

According to Europol, the IPTV operation was considerable, offering 1,200 channels to as many as 500,000 subscribers around the world. Although early financial estimates in cases like these are best taken with a grain of salt, latest claims suggest revenues of five million euros a month, 60 million euros per year.

Part of the IPTV operation (credit: Europol)

As previously reported, so-called ‘front servers’ (servers designed to hide the main servers’ true location) were discovered in the Netherlands. Additionally, it’s now being reported by Cypriot media that nine suspects from an unnamed Internet service provider housing the servers were arrested and taken in for questioning. But the intrigue doesn’t stop there.

Well in advance of Europol’s statement late last week, TorrentFreak was informed by a source that police in Bulgaria had targeted a specific ISP called MegaByte Internet, located in the small town of Petrich. After coming back online following a couple of days’ downtime, the ISP responded to some of our questions, detailed in our earlier interview.

“We were informed by the police that some of our clients in Petrich and Sofia were using our service for illegal streaming and actions,” a company spokesperson said.

“Of course, we were not able to know this because our services are unmanaged and root access [to servers] is given to our clients. For this reason any client and anyone that uses our services are responsible for their own actions.”

Other questions went unanswered but yesterday fresh information coming out of Cyprus certainly helped to fill in the gaps – and then some.

Philenews reports that a total of 140 servers were seized in Bulgaria – 60 from the headquarters of MegaByte Internet and four other locations, and 80 from two further locations in the Bulgarian capital, Sofia.

At least as far as locations go, this ties in with a statement provided by MegaByte to TF last week which claimed that some of its equipment was seized from Telepoint, Bulgaria’s biggest datacenter.

Viewing cards facilitating feeds…

We now know that ten employees of MegaByte were interrogated by the police but perhaps the biggest revelation is that the owner of the Internet service provider is now being openly named as the brains behind the entire operation.

Philenews reports that 47-year-old businessman Christos Apostolos Samaras from Greece, who has owned and run MegaByte since 2009, is the individual Europol reported as being arrested in Bulgaria last week.

In addition to linking him to MegaByte Internet’s domain, various searches indicate that Samaras is also connected to 1Stream, a hosting company dedicated to providing bandwidth for streaming purposes.

The investigation continues.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Early Challenges: Managing Cash Flow

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/managing-cash-flow/

Cash flow projection charts

This post by Backblaze’s CEO and co-founder Gleb Budman is the eighth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency
  8. Early Challenges: Managing Cash Flow


Running out of cash is one of the quickest ways for a startup to go out of business. When you are starting a company the question of where to get cash is usually the top priority, but managing cash flow is critical for every stage in the lifecycle of a company. As a primarily bootstrapped but capital-intensive business, managing cash flow at Backblaze was and still is a key element of our success and requires continued focus. Let’s look at what we learned over the years.

Raising Your Initial Funding

When starting a tech business in Silicon Valley, the default assumption is that you will immediately try to raise venture funding. There are certainly many advantages to raising funding — not the least of which is that you don’t need to be cash-flow positive since you have cash in the bank and the expectation is that you will have a “burn rate,” i.e. you’ll be spending more than you make.

Note: While you’re not expected to be cash-flow positive, that doesn’t mean you don’t have to worry about cash. Cash-flow management will determine your burn rate. Whether you can get to cash-flow breakeven or need to raise another round of funding is a direct byproduct of your cash flow management.

Also, raising funding takes time (most successful fundraising cycles take 3-6 months start-to-finish), and time at a startup is in short supply. Constantly trying to raise funding can take away from product development and pursuing growth opportunities. If you’re not successful in raising funding, you then have to either shut down or find an alternate method of funding the business.

Sources of Funding

Depending on the stage of the company, type of company, and other factors, you may have access to different sources of funding. Let’s list a number of them:

Customers

Sales — the best kind of funding. It is non-dilutive, doesn’t have to be paid back, and is a direct metric of the success of your company.

Pre-Sales — some customers may be willing to pay you for a product in beta, a test, or pre-pay for a product they’ll receive when finished. Pre-Sales income also is great because it shares the characteristics of cash from sales, but you get the cash early. It also can be a good sign that the product you’re building fills a market need. We started charging for Backblaze computer backup while it was still in private beta, which allowed us to not only collect cash from customers, but also test the billing experience and users’ real desire for the service.

Services — if you’re a service company and customers are paying you for that, great. Your capacity is limited by the number of billable hours in a day, but as demand grows, you can add more employees to increase the total number of billable hours.

Note: If you’re a product company and customers are paying you to consult, that can provide much needed cash, and could provide feedback toward the right product. However, it can also distract from your core business, send you down a path where you’re building a product for a single customer, and addict you to a path that prevents you from building a scalable business.

Investors

Yourself — you likely are putting your time into the business, and deferring salary in the process. You may also put your own cash into the business either as an investment or a loan.

Angels — angels are ideal as early investors since they are used to investing in businesses with little to no traction. AngelList is a good place to find them, though an introduction from someone who knows you well is best.

Crowdfunding — since May 2016, a component of the JOBS Act has permitted entrepreneurs to raise money from nearly anyone. The SEC imposes limits on both the investors and the companies. This article goes into some depth on the options and sites available.

VCs — VCs are ideal for companies that need to raise at least a few million dollars and intend to build a business that will be worth over $1 billion.

Debt

Friends & Family — F&F are often the first people to give you money because they are investing in you. It’s great to have some early supporters, but it also can be risky to take money from people who aren’t used to the risks. The key advice here is to only take money from people who won’t mind losing it. If someone is talking about using their children’s college funds or borrowing from their 401k, say ‘no thank you’ — even if they’re sure they want to loan you money.

Bank Loans — a variety of loan types exist, but most either require the company to have been operational for a couple years, be able to borrow against money the company has or is making, or be able to get a personal guarantee from the founders whereby their own credit is on the line. Fundera provides a good overview of loan options and can help secure some, but most will not be an option for a brand new startup.

Grants

Government — in some areas there is the potential for government grants to facilitate research. The SBIR program facilitates some such grants.

At Backblaze, we used a number of these options:

• Investors/Yourself
We loaned a cumulative total of a couple hundred thousand dollars to the company and invested our time by going without a salary for a year and a half.
• Customers/Pre-Sales
We started selling the Backblaze service while it was still in beta.
• Customers/Sales
We launched v1.0 and kept selling.
• Investors/Angels
After a year and a half, we raised $370k from 11 angels. All of them were either people whom we knew personally or were a strong recommendation from a mutual friend.
• Debt/Loans
After a couple years we were able to get equipment leases whereby the Storage Pods and hard drives were used as collateral to secure the lease on them.
• Investors/VCs
After five years we raised $5m from TMT Investments to add to the balance sheet and invest in growth.

The variety and quantity of sources we used is by no means uncommon.

GAAP vs. Cash

Most companies start tracking financials based on cash, and as they scale they switch to GAAP (Generally Accepted Accounting Principles). Cash is easier to track — we got paid $XXXX and spent $YYY — and as often mentioned, is required for the business to stay alive. GAAP has more subtlety and complexity, but provides a clearer picture of how the business is really doing. Backblaze was on a ‘cash’ system for the first few years, then switched to GAAP. For this post, I’m going to focus on things that help cash flow, not GAAP profitability.
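
As a toy illustration of the difference (invented numbers, not Backblaze’s books), suppose a customer pre-pays $120 for a year of service in January:

```python
# Contrast between the cash and GAAP views of a single pre-paid sale.

prepayment = 120.0
months = 12

# Cash view: all of the money lands the month it is collected.
cash_by_month = {1: prepayment}

# GAAP view: revenue is recognized evenly as the service is delivered.
gaap_by_month = {m: prepayment / months for m in range(1, months + 1)}

print(cash_by_month[1])   # 120.0 collected in January
print(gaap_by_month[1])   # 10.0 recognized in January
```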

Stages of Cash Flow Management

All-spend

In a pure service business (e.g. a sole-proprietor law firm), you may have no expenses other than your time, so this stage doesn’t exist. However, in a product business there is a period of time when you are building the product and have nothing to sell. You have zero cash coming in, but cash going out. Your cash flow is completely negative and you need funds to cover that.

Sales-generating

Starting to see cash come in from customers is thrilling. I initially had our system set up to email me with every $5 payment we received. You’re making sales, but not covering expenses.

Ramen-profitable

But it takes a lot of $5 payments to pay for servers and salaries, so for a while expenses are likely to outstrip sales. Getting to ramen-profitable is a critical stage where sales cover the business expenses and are “paying enough for the founders to eat ramen.” This extends the runway for a business, but is not completely sustainable, since presumably the founders can’t (or won’t) live forever on a subsistence salary.

Business-profitable

This is the ultimate stage whereby the business is truly profitable, including paying everyone market-rate salaries. A business at this stage is self-sustaining. (Of course, market shifts and plenty of other challenges can kill the business, but cash-flow issues alone will not.)

Note: I’m using the word ‘profitable’ here to mean profitable on a cash basis.

Backblaze was in the all-spend stage for just over a year, during which time we built the service and hadn’t yet made the service available to customers. Backblaze was in the sales-generating stage for nearly another year before the company was barely ramen-profitable where sales were covering the company expenses and paying the founders minimum wage. (I say ‘barely’ since minimum wage in the SF Bay Area is arguably never subsistence.) It took almost three more years before the company was business-profitable, paying everyone including the founders market-rate.

Cash Flow Forecasting

When raising funding it’s helpful to think of milestones reached. You don’t necessarily need enough cash on day one to last for the next 100 years of the company. Some good milestones to consider are how much cash you need to prove there is a market need, prove you can build a product to meet that need, or get to ramen-profitable.

Two things to consider:

1) Unit Economics (COGS)

If your product is 100% software, this may not be relevant. Once software is built it costs effectively nothing to deliver the product to one customer or one million customers. However, in most businesses there is some incremental cost to provide the product. If you’re selling a hardware device, perhaps you sell it for $100 but it costs you $50 to make it. This is called “COGS” (Cost of Goods Sold).

Many products rely on cloud services where the costs scale with growth. That model works great, but it’s still important to understand what the costs are for the cloud service you use per unit of product you sell.

Support is often done by the founders early-on in a business, but that is another real cost to factor in and estimate on a per-user basis. Taking all of the per unit costs combined, you may charge $10/month/user for your service, but if it costs you $7/month/user in cloud services, you’re only netting $3/month/user.

2) Operating Expenses (OpEx)

These are expenses that don’t scale with the number of product units you sell. Typically this includes research & development, sales & marketing, and general & administrative expenses. Presumably there is a certain level of these functions required to build the product, market it, sell it, and run the organization. You can choose to invest or cut back on these, but you’ll still make the same amount per product unit.

Incremental Net Profit Per Unit

If you’ve calculated your COGS and your unit economics are “upside down,” where the amount you charge is less than what it costs you to provide your service, it’s worth thinking hard about how that’s going to change over time. If it will not change, there is no scale at which the business works. Presuming you do make money on each unit of product you sell — what is sometimes referred to as “Contribution Margin” — consider how many of those product units you need to sell to cover your operating expenses as described above.

Calculating Your Profit

The math on getting to ramen-profitable is simple:

(Number of Product Units Sold x Contribution Margin) - Operating Expenses = Profit

If your operating expenses include subsistence salaries for the founders and profit > $0, you’re ramen-profitable.
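
To make that concrete, here is the same check in a few lines of Python, reusing the invented $10/$7 per-user economics from the COGS example above:

```python
# The ramen-profitable check from the formula above, with invented numbers.

units_sold = 2_000
price_per_unit = 10.0         # $/month/user charged
cogs_per_unit = 7.0           # cloud services + support cost per user
operating_expenses = 5_000.0  # includes subsistence founder salaries

contribution_margin = price_per_unit - cogs_per_unit        # $3/user
profit = units_sold * contribution_margin - operating_expenses

print(f"Monthly profit: ${profit:,.0f}")  # $1,000
print("Ramen-profitable!" if profit > 0 else "Not yet.")
```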

Improving Cash Flow

Having access to sources of cash, whether from selling to customers or other methods, is excellent. But needing less cash gives you more choices and allows you to either dilute less, owe less, or invest more.

There are two ways to improve cash flow:

1) Collect More Cash

The best way to collect more cash is to provide more value to your customers and as a result have them pay you more. Additional features/products/services can allow this. However, you can also collect more cash by changing how you charge for your product. If you have a subscription, changing from charging monthly to yearly dramatically improves your cash flow. If you have a product that customers use up, selling a year’s supply instead of selling them one-by-one can help.
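
A quick illustration with invented numbers: with 1,000 subscribers at $10/month, switching to annual billing pulls a full year of cash into month one.

```python
# Same revenue either way, but very different month-one cash.

subscribers = 1_000
monthly_price = 10.0

cash_month_one_monthly = subscribers * monthly_price        # $10,000
cash_month_one_annual = subscribers * monthly_price * 12    # $120,000

print(cash_month_one_monthly, cash_month_one_annual)
```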

2) Spend Less Cash

Reducing COGS is a fantastic way to spend less cash in a scalable way. If you can do this without harming the product or customer experience, you win. There are a myriad of ways to also reduce operating expenses, including taking sub-market salaries, using your home instead of renting office space, staying focused on your core product, etc.

Ultimately, collecting more and spending less cash dramatically simplifies the process of getting to ramen-profitable and later to business-profitable.

Be Careful (Why GAAP Matters)

A word of caution: while running out of cash will put you out of business immediately, overextending yourself will likely put you out of business not much later. GAAP shows how a business is really doing; cash doesn’t. If you only focus on cash, it is possible to commit yourself to both delivering products and repaying loans in the future in an unsustainable fashion. If you’re taking out loans, watch the total balance and monthly payments you’re committing to. If you’re asking customers for pre-payment, make sure you believe you can deliver on what they’ve paid for.

Summary

There are numerous challenges to building a business, and ensuring you have enough cash is amongst the most important. Having the cash to keep going lets you keep working on all of the other challenges. The frameworks above were critical for maintaining Backblaze’s cash flow and cash balance. Hopefully you can take some of the lessons we learned and apply them to your business. Let us know what works for you in the comments below.

The post Early Challenges: Managing Cash Flow appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Now Open – Third AWS Availability Zone in London

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-third-aws-availability-zone-in-london/

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.
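
If you want to see the new AZ from your own account, here’s a quick sketch using boto3 (it assumes the library is installed and AWS credentials are configured):

```python
import boto3

# List the Availability Zones visible in the London region.
ec2 = boto3.client("ec2", region_name="eu-west-2")

for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])  # e.g. "eu-west-2a available"
```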

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on its infrastructure costs. ClearScore, which plans to disrupt the credit score industry, has also chosen to host its entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to its customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than half a million users. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;

Pirate Streaming on Facebook is a Seriously Risky Business

Post Syndicated from Andy original https://torrentfreak.com/pirate-streaming-on-facebook-is-a-seriously-risky-business-180114/

For more than a year the British public has been warned about the supposed dangers of Kodi piracy.

Dozens of headlines have claimed consequences ranging from system-destroying malware to prison sentences. Fortunately, most of them can be filed under “tabloid nonsense.”

That being said, there is an extremely important issue that deserves much closer attention, particularly given a shift in the UK legal climate during 2017. We’re talking about live streaming copyrighted content on Facebook, which is both incredibly easy and frighteningly risky.

This week it was revealed that 34-year-old Craig Foster from the UK had been given an ultimatum from Sky to pay a £5,000 settlement fee. The media giant discovered that he’d live-streamed the Anthony Joshua v Wladimir Klitschko fight on Facebook and wanted compensation to make a potential court case disappear.

While it may seem initially odd to use the word, Foster was lucky.

Under last year’s Digital Economy Act, he could’ve been jailed for up to ten years for distributing copyright-infringing content to the public, if he had “reason to believe that communicating the work to the public [would] cause loss to the owner of the copyright, or [would] expose the owner of the copyright to a risk of loss.”

Clearly, as a purchaser of the £19.95 pay-per-view himself, he would’ve appreciated that the event costs money. With that in mind, a court would likely find that he was aware Sky would be exposed to a “risk of loss”. Sky claim that 4,250 people watched the stream, but the way the law is written, no specific level of loss is required for a breach of the law.

But it’s not just the threat of a jail sentence that’s the problem. People streaming live sports on Facebook are sitting ducks.

In Foster’s case, the fight he streamed was watermarked, which means that Sky put a tracking code into it which identified him personally as the buyer of the event. When he (or his friend, as Foster claims) streamed it on Facebook, it was trivial for Sky to capture the watermark and track it back to his Sky account.
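
To make the idea concrete, here is a toy watermarking sketch in Python (emphatically not Sky’s actual scheme; just an illustration of how a per-subscriber ID can be hidden in video frames and read back from a captured copy):

```python
import numpy as np

def embed_id(frame: np.ndarray, subscriber_id: int) -> np.ndarray:
    """Hide a 32-bit subscriber ID in the blue-channel LSBs of a frame."""
    bits = np.array([(subscriber_id >> i) & 1 for i in range(32)], dtype=np.uint8)
    marked = frame.copy()
    pixels = marked.reshape(-1, 3)                    # flat view over the pixels
    pixels[:32, 2] = (pixels[:32, 2] & 0xFE) | bits   # rewrite 32 blue LSBs
    return marked

def extract_id(frame: np.ndarray) -> int:
    """Recover the 32-bit ID from a (toy) watermarked frame."""
    bits = frame.reshape(-1, 3)[:32, 2] & 1
    return int(sum(int(b) << i for i, b in enumerate(bits)))

frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
marked = embed_id(frame, subscriber_id=123456)
assert extract_id(marked) == 123456  # the copy still identifies its buyer
```

Real forensic watermarks are far more robust than this least-significant-bit toy (they survive re-encoding and cropping), but the principle is the same: every copy carries its buyer’s identity.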

Equally, it would be simplicity itself to see that the name on the Sky account had exactly the same name and details as Foster’s Facebook account. So, to most observers, it would appear that not only had Foster purchased the event, but he was also streaming it to Facebook illegally.

It’s important to keep something else in mind. No cooperation between Sky and Facebook would’ve been necessary to obtain Foster’s details. Take the amount of information most people share on Facebook, combine that with the information Sky already had, and the company’s anti-piracy team would have had a very easy job.

Now compare this situation with an upload of the same stream to a torrent site.

While the video capture would still contain Foster’s watermark, which would indicate the source, to prove he also distributed the video Sky would’ve needed to get inside a torrent swarm. From there they would need to capture the IP address of the initial seeder and take the case to court, to force an ISP to hand over that person’s details.

Presuming they were the same person, Sky would have a case, with a broadly similar level of evidence to that presented in the current matter. However, it would’ve taken them months to get their man and cost large sums of money to get there. It’s very unlikely that £5,000 would cover the costs, meaning a much, much bigger bill for the culprit.

Or, confident that Foster was behind the leak based on the watermark alone, Sky could’ve gone straight to the police. That never ends well.

The bottom line is that while live-streaming on Facebook is simplicity itself, people who do it casually from their own account (especially with watermarked content) are asking for trouble.

Nailing Foster was the piracy equivalent of shooting fish in a barrel but the worrying part is that he probably never gave his (or his friend’s…) alleged infringement a second thought. With a click or two, the fight was live and he was staring down the barrel of a potential jail sentence, had Sky not gone the civil route.

It’s scary stuff and not enough is being done to warn people of the consequences. Forget the scare stories attempting to deter people from watching fights or movies on Kodi, thoughtlessly streaming them to the public on social media is the real danger.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

ISP: We’re Cooperating With Police Following Pirate IPTV Raid

Post Syndicated from Andy original https://torrentfreak.com/isp-were-cooperating-with-police-following-pirate-iptv-raid-180113/

This week, police forces around Europe took action against what is believed to be one of the world’s largest pirate IPTV networks.

The investigation, launched a year ago and coordinated by Europol, came to a head on Tuesday when police carried out raids in Cyprus, Bulgaria, Greece, and the Netherlands. A fresh announcement from the crime-fighting group reveals the scale of the operation.

It was led by the Cypriot Police – Intellectual Property Crime Unit, with the support of the Cybercrime Division of the Greek Police, the Dutch Fiscal Investigative and Intelligence Service (FIOD), the Cybercrime Unit of the Bulgarian Police, Europol’s Intellectual Property Crime Coordinated Coalition (IPC³), and supported by members of the Audiovisual Anti-Piracy Alliance (AAPA).

In Cyprus, Bulgaria and Greece, 17 house searches were carried out. Three individuals aged 43, 44, and 53 were arrested in Cyprus and one was arrested in Bulgaria.

All stand accused of being involved in an international operation to illegally broadcast around 1,200 channels of pirated content to an estimated 500,000 subscribers. Some of the channels offered were illegally sourced from Sky UK, Bein Sports, Sky Italia, and Sky DE. On Thursday, the three individuals in Cyprus were remanded in custody for seven days.

“The servers used to distribute the channels were shut down, and IP addresses hosted by a Dutch company were also deactivated thanks to the cooperation of the authorities of The Netherlands,” Europol reports.

“In Bulgaria, 84 servers and 70 satellite receivers were seized, with decoders, computers and accounting documents.”

TorrentFreak was previously able to establish that Megabyte-Internet Ltd, an ISP located in the small Bulgarian town of Petrich, was targeted by police. The provider went down on Tuesday but returned towards the end of the week. Responding to our earlier inquiries, the company told us more about the situation.

“We are an ISP provider located in Petrich, Bulgaria. We are selling services to around 1,500 end-clients in the Petrich area and surrounding villages,” a spokesperson explained.

“Another part of our business is internet services like dedicated unmanaged servers, hosting, email servers, storage services, and VPNs etc.”

The spokesperson added that some of Megabyte’s equipment is located at Telepoint, Bulgaria’s biggest datacenter, with connectivity to Petrich. During the raid the police seized the company’s hardware to check for evidence of illegal activity.

“We were informed by the police that some of our clients in Petrich and Sofia were using our service for illegal streaming and actions,” the company said.

“Of course, we were not able to know this because our services are unmanaged and root access [to servers] is given to our clients. For this reason any client and anyone that uses our services are responsible for their own actions.”

TorrentFreak asked many more questions, including how many police attended, what type and volume of hardware was seized, and whether anyone was arrested or taken for questioning. But, apart from noting that the police were friendly, the company declined to give us any additional information, revealing that it was not permitted to do so at this stage.

What is clear, however, is that Megabyte-Internet is offering its full cooperation to the authorities. The company says that it cannot be held responsible for the actions of its clients so their details will be handed over as part of the investigation.

“So now we will give to the police any details about these clients because we hold their full details by law. [The police] will find [out about] all the illegal actions from them,” the company concludes, adding that it’s fully operational once more and working with clients.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

timeShift(GrafanaBuzz, 1w) Issue 29

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/12/timeshiftgrafanabuzz-1w-issue-29/

Welcome to TimeShift

Welcome to another issue of timeShift! This week brings a new stable release of Grafana, a fresh batch of articles from around the community, several plugin updates, and a full calendar of upcoming events. Enjoy!


Latest Stable Release

Grafana 4.6.3 is now available. Latest bugfixes include:

  • Gzip: Fixes a bug with Gravatar images when gzip was enabled #5952
  • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #99513
  • Alerting: Fixes a bug where rules evaluated as firing when all conditions were false and the OR operator was used. #93183
  • Cloudwatch: CloudWatch no longer displays metrics’ default alias #101514, thx @mtanda

Download Grafana 4.6.3 Now


From the Blogosphere

Graphite 1.1: Teaching an Old Dog New Tricks: Grafana Labs’ own Dan Cech is a contributor to the Graphite project, and has been instrumental in the addition of some of the newest features. This article discusses five of the biggest additions, how they work, and what you can expect for the future of the project.

Instrument an Application Using Prometheus and Grafana: Chris walks us through how easy it is to get useful metrics from an application to understand bottlenecks and performance. In this article, he shares an application he built that indexes your Gmail account into Elasticsearch and sends the metrics to Prometheus. Then, he shows you how to set up Grafana to get meaningful graphs and dashboards.

Visualising Serverless Metrics With Grafana Dashboards: Part 3 in this series of blog posts on “Monitoring Serverless Applications Metrics” starts with an overview of Grafana and the UI, covers queries and templating, then dives into creating some great looking dashboards. The series plans to conclude with a post about setting up alerting.

Huawei FAT WLAN Access Points in Grafana: Huawei’s FAT firmware for their WLAN access points lacks a central management overview. To get a sense of the performance of your APs, why not quickly create a templated dashboard in Grafana? This article steps you through the process and includes a sample dashboard.


Grafana Plugins

Lots of updated plugins this week. Plugin authors add new features and fix bugs often to make your plugin perform better – so it’s important to keep your plugins up to date. We’ve made updating easy; for on-prem Grafana, use the grafana-cli tool (for example, grafana-cli plugins update-all), or update with one click if you’re using Hosted Grafana.

UPDATED PLUGIN

Clickhouse Data Source – The Clickhouse Data Source plugin has been updated a few times with small fixes during the last few weeks.

  • Fix for quantile functions
  • Allow rounding with round option for both time filters: $from and $to

Update

UPDATED PLUGIN

Zabbix App – The Zabbix App had a release with a redesign of the Triggers panel, as well as support for multiple data sources in the Triggers panel.

Update

UPDATED PLUGIN

OpenHistorian Data Source – This data source plugin received some new query builder screens and improved documentation.

Update

UPDATED PLUGIN

BT Status Dot Panel – This panel received a small bug fix.

Update

UPDATED PLUGIN

Carpet Plot Panel – A recent update for this panel fixes a D3 import bug.

Update


Upcoming Events

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

Women Who Go Berlin: Go Workshop – Monitoring and Troubleshooting using Prometheus and Grafana | Berlin, Germany – Jan 31, 2018: In this workshop we will learn about one of the most important topics in making apps production ready: Monitoring. We will learn how to use tools you’ve probably heard a lot about – Prometheus and Grafana, and using what we learn we will troubleshoot a particularly buggy Go app.

Register Now

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. There is no need to register; all are welcome.

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Carl Bergquist – Quickie: Monitoring? Not OPS Problem

Why should we monitor our systems? Why can’t we just rely on the operations team anymore? They used to be able to do that. What’s currently changing? Presentation content: why we monitor our systems, how it used to work, what’s changing, why we need to shift focus, why everyone should be on call, and why resilience is the goal (the best way to have someone care about quality is to make them responsible).

Register Now

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Leonard Gram – Presentation: DevOps Deconstructed

What’s a Site Reliability Engineer and how’s that role different from the DevOps engineer my boss wants to hire? I really don’t want to be on call, should I? Is Docker the right place for my code or am I better off just going straight to Serverless? And why should I care about any of it? I’ll try to answer some of these questions while looking at what DevOps really is about and how the commoditization of servers through “the cloud” ties into it all. This session will be an opinionated piece from a developer who’s been on-call for the past 6 years and would like to convince you to do the same, at least once.

Register Now

Stockholm Metrics and Monitoring | Stockholm, Sweden – Feb 7, 2018:
Observability 3 ways – Logging, Metrics and Distributed Tracing

Let’s talk about often confused telemetry tools: Logging, Metrics and Distributed Tracing. We’ll show how you capture latency using each of the tools and how they work differently. Through examples and discussion, we’ll note edge cases where certain tools have advantages over others. By the end of this talk, we’ll better understand how each of Logging, Metrics and Distributed Tracing aids us in different ways to understand our applications.

Register Now

OpenNMS – Introduction to “Grafana” | Webinar – Feb 21, 2018:
IT monitoring helps detect emerging hardware damage and performance bottlenecks in the enterprise network before any consequential damage or disruption to business processes occurs. The powerful open-source OpenNMS software monitors a network, including all connected devices, and provides logging of a variety of data that can be used for analysis and planning purposes. In our next OpenNMS webinar on February 21, 2018, we introduce “Grafana” – a web-based tool for creating and displaying dashboards from various data sources, which can be perfectly combined with OpenNMS.

Register Now

GrafanaCon EU | Amsterdam, Netherlands – March 1-2, 2018:
Lock in your seat for GrafanaCon EU while there are still tickets available! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem, including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

We have some exciting talks lined up from Google, CERN, Bloomberg, eBay, Red Hat, Tinder, Automattic, Prometheus, InfluxData, Percona and more! Be sure to get your ticket before they’re sold out.

Learn More


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice hack! I know I like to keep one eye on server requests when I’m dropping beats. 😉


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Thanks for reading another issue of timeShift. Let us know what you think! Submit a comment on this article below, or post something at our community forum.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Introducing Nextcloud Talk

Post Syndicated from ris original https://lwn.net/Articles/744082/rss

Nextcloud has announced Nextcloud Talk, a fully open source video meeting software that is on-premises hosted and end-to-end encrypted. “Nextcloud Talk makes it easier than ever to host a privacy-respecting audio/video communication service for home users and enterprises. Business users have optional access to the Spreed High Performance Back-end offering enterprise-class scalability, reliability, and features through a Nextcloud subscription. With the easy-to-use interface, users can engage colleagues, friends, partners or customers, working in real time through High Definition (H.265 based) audio and video in web meetings and webinars.”

Netflix, Amazon and Hollywood Sue Kodi-Powered Dragon Box Over Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-amazon-and-hollywood-sue-kodi-powered-dragon-box-over-piracy-180111/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, sellers who ship devices with unauthorized add-ons give it a bad reputation.

In recent months these boxes have become the prime target for copyright enforcers, including the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

After suing Tickbox last year a group of key ACE members have now filed a similar lawsuit against Dragon Media Inc, which sells the popular Dragon Box. The complaint, filed at a California federal court, also lists the company’s owner Paul Christoforo and reseller Jeff Williams among the defendants.

According to ACE, these types of devices are nothing more than pirate tools, allowing buyers to stream copyright-infringing content. That also applies to Dragon Box, they inform the court.

“Defendants market and sell ‘Dragon Box,’ a computer hardware device that Defendants urge their customers to use as a tool for the mass infringement of the copyrighted motion pictures and television shows,” the complaint, picked up by HWR, reads.

The movie companies note that the defendants distribute and promote the Dragon Box as a pirate tool, using phrases such as “Watch your Favourites Anytime For FREE” and “stop paying for Netflix and Hulu.”

Dragon Box

When users follow the instructions Dragon provides they get free access to copyrighted movies, TV-shows and live content, ACE alleges. The complaint further points out that the device uses the open source Kodi player paired with pirate addons.

“The Dragon Media application provides Defendants’ customers with a customized configuration of the Kodi media player and a curated selection of the most popular addons for accessing infringing content,” the movie companies write.

“These addons are designed and maintained for the overarching purpose of scouring the Internet for illegal sources of copyrighted content and returning links to that content. When Dragon Box customers click those links, those customers receive unauthorized streams of popular motion pictures and television shows.”

One of the addons that are included with the download and installation of the Dragon software is Covenant.

This addon can be accessed through a preinstalled shortcut which is linked under the “Videos” menu. Users are then able to browse through a large library of curated content, including a separate category of movies that are still in theaters.

In theaters

According to a statement from Dragon owner Christoforo, business is going well. The company claims to have “over 250,000 customers in 50 states and 4 countries and growing” as well as “374 sellers” across the world.

With this lawsuit, however, the company’s future has suddenly become uncertain.

The movie companies ask the California District for an injunction to shut down the infringing service and impound all Dragon Box devices. In addition, they’re requesting statutory damages which can go up to several million dollars.

At the time of writing, the Dragon Box website is still on air and the company has yet to comment on the allegations.

A copy of the complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice, while others want to solve a specific problem using ready-made software already integrated with B2.
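
For a sense of what a direct API integration looks like, here’s a minimal sketch in Python (the key values are placeholders, and it assumes the requests library is installed):

```python
import base64
import requests

# Authorize against the B2 API; the returned API URL and auth token
# are used for subsequent calls such as listing buckets or uploading.
KEY_ID = "your_key_id"              # placeholder
APP_KEY = "your_application_key"    # placeholder

credentials = base64.b64encode(f"{KEY_ID}:{APP_KEY}".encode()).decode()
resp = requests.get(
    "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
    headers={"Authorization": f"Basic {credentials}"},
)
resp.raise_for_status()
auth = resp.json()
print(auth["apiUrl"])
```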

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to setup complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today; diverse technologies that customers are using day in and out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • Create an automatically “resetting” demo environment for the sales team.
  • Set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • Learn the basics of OAUTH and web single sign-on (SSO).
  • Archive video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and effective Google searching to fix problems should excite you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language. Have a basic understanding of data structures and program flow.

Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:
There are literally hundreds of previous experiences you can have had that would make you perfect for this job. Some experiences that we know would be helpful for us are below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?

The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we”. We are truly a team.

We are honest with each other and our customers and communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company and we care deeply about their success.

If this all sounds like you:

1. Send an email to [email protected] with the position in the subject line.
2. Tell us a bit about your Sales Engineering experience.
3. Include your resume.

The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Media Giant Can Keep Seized Ad Revenue From Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/media-giant-can-keep-seized-ad-revenue-from-pirate-sites-180109/

For several decades the MPAA and RIAA have been the prime anti-piracy groups in the United States.

While that may be true, there’s another player making a massive impact while getting barely any press.

ABS-CBN, the largest media and entertainment company in the Philippines, has filed a series of lawsuits against pirate sites in the US, with the popular streaming portal Fmovies as the biggest target.

The company has already won several cases, with damages ranging from a few hundred thousand to millions of dollars. However, the associated injunctions in these cases are perhaps even more significant.

We previously covered how ABS-CBN managed to get court orders to seize domain names without the defendants getting actively involved. This is also the case in a recent lawsuit where a Florida federal court signed a broad injunction targeting more than two dozen sites that offered the company’s content.

The websites, including abscbn-teleserye.com, dramascools.com, tvnijuan.org, pinoydailyshows.com and weeklywarning.org, may not be known to a broad audience, but their domain names have all been suspended, linking to a takedown message instead.

What’s most interesting, however, is that the advertising revenues of these sites were previously frozen. This was done to ensure that ABS-CBN would at least get some money if the defendants failed to respond, a strategy that seems to have paid off.

After the targeted site owners failed to respond, ABS-CBN requested a default judgment with damages for trademark and copyright infringement.

U.S. District Court Judge Cecilia Altonaga has now signed the order, awarding the media company over a million dollars in statutory trademark infringement damages. In addition, several of the sites must also pay copyright infringement damages.

Damages

The default judgment also orders associated registrars and registries to hand over the domain names to ABS-CBN. Thus far, several domains have been seized already, but some foreign companies have not complied, most likely because they fall outside US jurisdiction.

The most interesting part of the order, however, is that Judge Altonaga grants ABS-CBN the previously seized advertising revenues.

“All funds currently restrained by the advertising services, networks, and/or platforms […], pursuant to the temporary restraining order and preliminary injunction in this action are to be immediately (within five business days) transferred to Plaintiffs in partial satisfaction of the monetary judgment entered herein against each Defendant,” the Judge writes.

List of sites and their ad-networks

The sites in question used advertising services from a variety of well-known networks, including Google AdSense, MGID, Popads, AdsKeeper, and Bidvertiser. None of these companies responded in court after the initial seizure order, suggesting that they did not object.

This is the first time, to our knowledge, that a copyright holder has been granted advertising revenue from pirate sites in this manner. While it’s not known how much revenue the sites were making, there is bound to be some.

This could be a common legal tactic going forward because, generally speaking, it is very hard to get money from defaulting defendants who are relatively anonymous or living in a foreign jurisdiction. By going after the advertisers, copyright holders have a good chance of securing some money, at least.

A copy of the default judgment is available here (pdf) and all affected websites are listed below.

– abscbn-teleserye.com
– astigvideos.com
– cinepinoy.lol
– cinepinoy.ag
    – pinoyflix.ag
    – pinoyflix.lol
    – cinezen.me
    – dramascools.com
    – dramasget.com
    – frugalpinoytv.org
    – lambingan.cn
    – pinoylambingan.ph
    – lambingan.io
    – lambingans.net
    – latestpinoymovies.com
    – pinasnews.net
    – pinastvreplay.com
    – pinoybay.ch
    – pinoychannel.me
    – pinoydailyshows.com
    – pinoyplayback.net
    – pinoytvshows.net
    – pinoytv-shows.net
    – rondownload.net
    – sarapmanood.com
    – tambayanshow.net
    – thelambingan.com
    – tvnijuan.org
    – tvtambayan.org
    – vianowpe.com
    – weeklywarning.org
    – weeklywarning.com

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    AWS Online Tech Talks – January 2018

    Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-january-2018/

    Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent, including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On, and more!

    January 2018 – Schedule

    Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

    Webinars featured this month are:

    Monday, January 22

    Analytics & Big Data
    11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

    Database
    01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

    Tuesday, January 23

    Artificial Intelligence
    09:00 AM – 09:45 AM PT How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

    Containers
    11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

    Serverless
    01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

    Wednesday, January 24

    DevOps
    09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

    Analytics & Big Data
    11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

    Database
    01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

    Thursday, January 25

    Artificial Intelligence
    09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

    Mobile
    11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

    IoT
    01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

    Monday, January 29

    Enterprise
    11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100: Achieving Business Value with AWS Lvl 100

    Compute
    01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

    Tuesday, January 30

    Security, Identity & Compliance
    09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

    Storage
    11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

    Compute
    01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

    Wednesday, January 31

    Networking
    09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

    Enterprise
    11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

    Compute
    01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

    Thursday, February 1

    Security, Identity & Compliance
    09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

    Storage
    11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

    TVAddons and ZemTV Ask Court to Dismiss U.S. Piracy Lawsuit

    Post Syndicated from Ernesto original https://torrentfreak.com/tvaddons-and-zemtv-ask-court-to-dismiss-u-s-piracy-lawsuit-180108/

    Last year, American satellite and broadcast provider Dish Network targeted two well-known players in the third-party Kodi add-on ecosystem.

    In a complaint filed in a federal court in Texas, add-on ZemTV and the TVAddons library were accused of copyright infringement. As a result, both are facing up to $150,000 in damages for each offense.

    While the case was filed in Texas, neither of the defendants live there, or even in the United States. The owner and operator of TVAddons is Adam Lackman, who resides in Montreal, Canada. ZemTV’s developer Shahjahan Durrani is even further away in London, UK.

    Their limited connection to Texas is reason for the case to be dismissed, according to the legal team of the two defendants. They are represented by attorneys Erin Russel and Jason Sweet, who asked the Court to drop the case late last week.

    According to their motion, the Texas District Court does not have jurisdiction over the two defendants.

    “Lackman and Durrani have never been residents or citizens of Texas; they have never owned property in Texas; they have never voted in Texas; they have never personally visited Texas; they have never directed any business activity of any kind to anyone in Texas […] and they have never earned income in Texas,” the motion reads.

    Technically, defendants can be sued in a district they have never been, as long as they “directed actions” at the state or its citizens.

    According to Dish, this is the case here since both defendants made their services available to local residents, among other things. However, the defense team argues that’s not enough to establish jurisdiction in this case.

    “Plaintiff’s conclusory allegation that Lackman and Durrani marketed, made available, and distributed ZemTV service and the ZemTV add-on to consumers in the State of Texas and the Southern District of Texas is misleading at best,” the attorneys write.

    If the case proceeds, the defense argues, it would violate the defendants’ due process rights under the US Constitution. Whether the infringement claims hold ground or not, Dish has no right to sue, according to the defense.

    “Defendants are citizens of Canada and Great Britain and have not had sufficient contacts in the State of Texas for this Court to exercise personal jurisdiction over them. To do so would violate the Due Process Clause of the United States Constitution.”

    The Court must now decide whether the case can proceed or not. TorrentFreak reached out to TVAddons but the service wishes to refrain from commenting on the proceeding at the moment.

    Previously, TVAddons made it clear that it sees the Dish lawsuit as an attempt to destroy the Kodi addon community. One of the methods of attack it mentioned, was to sue people in foreign jurisdictions.

    “Most people don’t have money lying around to hire lawyers in places they’ve never even visited. This means that if a company sues you in a foreign country and you can’t afford a lawyer, you’re screwed even if you did nothing wrong,” TVAddons wrote at the time.

    A copy of the motion to dismiss is available here (pdf).

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    No Level of Copyright Enforcement Will Ever Be Enough For Big Media

    Post Syndicated from Andy original https://torrentfreak.com/no-level-of-copyright-enforcement-will-ever-be-enough-for-big-media-180107/

    For more than ten years TorrentFreak has documented a continuous stream of piracy battles so it’s natural that, every now and then, we pause to consider when this war might stop. The answer is always “no time soon” and certainly not in 2018.

    When swapping files over the Internet first began it wasn’t a particularly widespread activity. A reasonable amount of content was available, but it was relatively inaccessible. Then peer-to-peer came along and it sparked a revolution.

    From the beginning, copyright holders felt that the law would answer their problems, whether that was by suing Napster, Kazaa, or even end users. Some industry players genuinely believed this strategy was just a few steps away from achieving its goals. Just a little bit more pressure and all would be under control.

    Then, when the landmark MGM Studios v. Grokster decision was handed down in the studios’ favor during 2005, the excitement online was palpable. As copyright holders rejoiced in this body blow for the pirating masses, file-sharing communities literally shook under the weight of the ruling. For a day, maybe two.

    For the majority of file-sharers, the ruling meant absolutely nothing. So what if some company could be held responsible for other people’s infringements? Another will come along, outside of the US if need be, people said. They were right not to be concerned – that’s exactly what happened.

    Ever since, this cycle has continued. Eager to stem the tide of content being shared without their permission, rightsholders have advocated stronger anti-piracy enforcement and lobbied for more restrictive interpretations of copyright law. Thus far, however, literally nothing has provided a solution.

    One would have thought that given the military-style raid on Kim Dotcom’s Megaupload, a huge void would’ve appeared in the sharing landscape. Instead, the file-locker business took itself apart and reinvented itself in jurisdictions outside the United States. Meanwhile, the BitTorrent scene continued in the background, somewhat obliviously.

    With the SOPA debacle still fresh in relatively recent memory, copyright holders are still doggedly pursuing their aims. Site-blocking is rampant, advertisers are being pressured into compliance, and ISPs like Cox Communications now find themselves responsible for the infringements of their users. But has any of this caused any fatal damage to the sharing landscape? Not really.

    Instead, we’re seeing a rise in the use of streaming sites, each far more accessible to the newcomer than their predecessors and vastly more difficult for copyright holders to police.

    Systems built into Kodi are transforming these platforms into a plug-and-play piracy playground, one in which sites skirt US law and users can consume both at will and in complete privacy. Meanwhile, commercial and unauthorized IPTV offerings are gathering momentum, even as rightsholders try to pull them back.

    Faced with problems like these we are now seeing calls for even tougher legislation. While groups like the RIAA dream of filtering the Internet, over in the UK a 2017 consultation had copyright holders excited that end users could be criminalized for simply consuming infringing content, let alone distributing it.

    While the introduction of both or either of these measures would cause uproar (and rightly so), history tells us that each would fail in its stated aim of stopping piracy. With that eventuality all but guaranteed, calls for even tougher legislation are being readied for later down the line.

    In short, there is no law that can stop piracy and therefore no law that will stop the entertainment industries coming back for harsher measures, pursuing the dream. This much we’ve established from close to two decades of litigation and little to no progress.

    But really, is anyone genuinely surprised that they’re still taking this route? Draconian efforts to maintain control over the distribution of content predate the file-sharing wars by a couple of hundred years, at the very least. Why would rightsholders stop now, when the prize is even more valuable?

    No one wants a minefield of copyright law. No one wants a restricted Internet. No one wants extended liability for innovators, service providers, or the public. But this is what we’ll get if this problem isn’t solved soon. Something drastic needs to happen, but who will be brave enough to admit it, let alone do something about it?

    During a discussion about piracy last year on the BBC, the interviewer challenged a caller who freely admitted to pirating sports content online. The caller’s response was clear:

    For far too long, broadcasters and rightsholders have abused their monopoly position, charging ever-increasing amounts for popular content, even while making billions. Piracy is a natural response to that, and effectively a chance for the little guy to get back some control, he argued.

    Exactly the same happened in the music market during the late 1990s and 2000s. In response to artificial restriction of the market and the unrealistic hiking of prices, people turned to peer-to-peer networks for their fix. Thanks to this pressure but after years of turmoil, services like Spotify emerged, converting millions of former pirates in the process. Netflix, it appears, is attempting to do the same thing with video.

    When people feel that they aren’t getting ripped off and that they have no further use for sub-standard piracy services in the face of stunning legal alternatives, things will change. But be under no illusion, people won’t be bullied there.

    If we end up with an Internet stifled in favor of rightsholders, one in which service providers are too scared to innovate, the next generation of consumers will never forget. This will be a major problem for two key reasons. Not only will consumers become enemies but piracy will still exist. We will have come full circle, fueled only by division and hatred.

    It’s a natural response to reject monopolistic behavior and it’s a natural response, for most, to be fair when treated with fairness. Destroying freedom is far from fair and will not create a better future – for anyone.

    Laws have their place, no sane person will argue against that, but when the entertainment industries are making billions yet still want more, they’ll have to decide whether this will go on forever with building resentment, or if making a bit less profit now makes more sense longer term.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Torrent Pioneers: isoHunt’s Gary Fung, Ten Years Later

    Post Syndicated from Ernesto original https://torrentfreak.com/torrent-pioneers-isohunts-gary-fung-ten-years-later-180106/

    Ten years ago, November 2007 to be precise, we published an article featuring the four leading torrent site admins at the time.

    Niek van der Maas of Mininova, Justin Bunnell of TorrentSpy, Pirate Bay’s Peter Sunde and isoHunt’s Gary Fung were all kind enough to share their vision of BitTorrent’s future.

    This future is the present today, and although the predictions were not all spot-on, there are a few interesting observations to make.

    For one, these four men were all known by name, despite the uncertain legal situation they were in. How different is that today, when the operators of most of the world’s largest torrent sites are unknown to the broader public.

    Another thing that stands out is that none of these pioneers are still active in the torrent space today. Niek and Justin have their own advertising businesses, Peter is a serial entrepreneur involved in various startups, while Gary works on his own projects.

    While they have all moved on, they also remain a part of Internet history, which is why we decided to reach out to them ten years on.

    Gary Fung was the first to reply. Those who’ve been following torrent news for a while know that isoHunt was shut down in 2013. The shutdown was the result of a lawsuit and came with a $110 million settlement with the MPAA, on paper.

    Today the Canadian entrepreneur has other things on his hands, which includes “leveling up” his now one-year-old daughter. While that can be a day job by itself, he is also finalizing a mobile search app which will be released in the near future.

    “The key is speed, and I can measure its speedup of the whole mobile search experience to be 10-100x that of conventional mobile web browsers,” Gary tells us, noting that after years of development, it’s almost ready.

    The new search app is not one dedicated to torrents, as isoHunt once was. However, looking back, Gary is proud of what he accomplished with isoHunt, despite the bitter end.

    “It was a humbling experience, in more ways than one. I’m proud that I participated and championed the rise of P2P content distribution through isoHunt as a search gateway,” Gary tells us.

    “But I was also humbled by the responsibility and power at play, as seen in the lawsuits from the media industry giants, as well as the even larger picture of what P2P technologies were bringing, and still bring today.”

    Decentralization has always been a key feature of BitTorrent and Gary sees this coming back in new trends. This includes the massive attention for blockchain related projects such as Bitcoin.

    “2017 was the year Bitcoin became mainstream in a big way, and it’s feeling like the Internet before 2000. Decentralization is by nature disruptive, and I can’t wait to see what decentralizing money, governance, organizations and all kinds of applications will bring in the next few years.

    “dApps [decentralized apps] made possible by platforms like Ethereum are like generalized BitTorrent for all kinds of applications, with ones we haven’t even thought of yet,” Gary adds.

    Not everything is positive in hindsight, of course. Gary tells us that if he had to do it all over again he would take legal issues and lawyers more seriously. Not doing so led to more trouble than he imagined.

    As a former torrent site admin, he has thought about the piracy issue quite a bit over the years. And unlike some sites today, he was happy to look for possible solutions to stop piracy.

    One solution Gary suggested to Hollywood in the past was a hash recognition system for infringing torrents. A system to automatically filter known infringing files and remove these from cooperating torrent sites could still work today, he thinks.

    “ContentID for all files shared on BitTorrent, similar to YouTube. I’ve proposed this to Hollywood studios before, as a better solution to suing their customers and potential P2P technology partners, but it obviously fell on deaf ears.”

    In any case, torrent sites and similar services will continue to play an important role in how the media industry evolves. These platforms are showing Hollywood what the public wants, Gary believes.

    “It has and will continue to play a role in showing the industry what consumers truly want: frictionless, convenient distribution, without borders of country or bundles. Bundles as in cable channels, but also in any way unwanted content is forced onto consumers without choice.”

    While torrents were dominant in the past, the future will be mostly streaming, isoHunt’s founder says. He said this ten years ago, and he believes that in another decade it will have completely replaced cable TV.

    Whether piracy will still be relevant then depends on how content is offered. More fragmentation will lead to more piracy, while easier access will make it less relevant.

    “The question then will be, will streaming platforms be fragmented and exclusive content bundled into a hundred pieces besides Netflix, or will consumer choice and convenience win out in a cross-platform way?

    “A piracy increase or reduction will depend on how that plays out because nobody wants to worry about ten monthly subscriptions to ten different streaming services, much less a hundred,” Gary concludes.

    Perhaps we should revisit this again next decade…


    The second post in this series, with Peter Sunde, will be published this weekend. The other two pioneers did not respond or declined to take part.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

    Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

    A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending it to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

    With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

    In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

    • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
    • Save the data in an Amazon S3 bucket.
    • Query data using Amazon Redshift Spectrum.

    We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, Amazon Redshift Spectrum, and Amazon QuickSight.

    Serverless architecture for capturing and analyzing Aurora data changes

    Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

    In this example, you take the changes in data in an Aurora database table and save it in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

    By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

    The data flows through this tutorial as follows:

    The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

    Creating an Aurora database

    First, create a database by following these steps in the Amazon RDS console:

    1. Sign in to the AWS Management Console, and open the Amazon RDS console.
    2. Choose Launch a DB instance, and choose Next.
    3. For Engine, choose Amazon Aurora.
    4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
    5. In Multi-AZ deployment, choose No.
    6. Configure DB instance identifier, Master username, and Master password.
    7. Launch the DB instance.

    After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

    Next, create a table in the database by running the following SQL statement:

    CREATE TABLE Sales (
    InvoiceID int NOT NULL AUTO_INCREMENT,
    ItemID int NOT NULL,
    Category varchar(255),
    Price double(10,2),
    Quantity int NOT NULL,
    OrderDate timestamp,
    DestinationState varchar(2),
    ShippingType varchar(255),
    Referral varchar(255),
    PRIMARY KEY (InvoiceID)
    );

    You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the connection variables (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with values appropriate for your environment.

    #!/usr/bin/python
    import MySQLdb
    import random
    import datetime

    # Replace the connection variables with values for your Aurora cluster.
    db = MySQLdb.connect(host="AURORA_CNAME",
                         user="DBUSER",
                         passwd="DBPASSWORD",
                         db="DB")
    cursor = db.cursor()

    states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
    "IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
    "NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
    "WA","WV","WI","WY")

    shipping_types = ("Free", "3-Day", "2-Day")

    product_categories = ("Garden", "Kitchen", "Office", "Household")
    referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

    for i in range(0, 10):
        item_id = random.randint(1, 100)
        state = random.choice(states)
        shipping_type = random.choice(shipping_types)
        product_category = random.choice(product_categories)
        quantity = random.randint(1, 4)
        referral = random.choice(referrals)
        price = random.randint(1, 100)
        # Cap the day at 28 so the generated date is valid in every month.
        order_date = datetime.date(2016, random.randint(1, 12),
                                   random.randint(1, 28)).isoformat()

        data_order = (item_id, product_category, price, quantity, order_date,
                      state, shipping_type, referral)

        add_order = ("INSERT INTO Sales "
                     "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, "
                     "ShippingType, Referral) "
                     "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

        cursor.execute(add_order, data_order)
        db.commit()

    cursor.close()
    db.close()

    After the script runs, the Sales table contains ten rows of sample data.

    Sending data from Amazon Aurora to Amazon S3

    There are two methods available to send data from Amazon Aurora to Amazon S3:

    • Using a Lambda function
    • Using SELECT INTO OUTFILE S3

    To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

    Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
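    As a rough sketch of this second method, the export can be issued from the same MySQLdb connection used elsewhere in this post. Treat the S3 URI scheme and the need for an S3-writable cluster role as assumptions to verify against the Aurora documentation for your engine version:

    import MySQLdb

    # Reuse the placeholder connection settings from the sample-data script.
    db = MySQLdb.connect(host="AURORA_CNAME", user="DBUSER",
                         passwd="DBPASSWORD", db="DB")
    cursor = db.cursor()

    # Export the Sales table straight to S3. The cluster needs an IAM role
    # that can write to the bucket, and the URI scheme below should be
    # checked against the Aurora MySQL docs for your engine version.
    cursor.execute("""
        SELECT * FROM Sales
        INTO OUTFILE S3 's3-us-east-1://YOUR_BUCKET/exports/sales'
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\\n'
    """)

    cursor.close()
    db.close()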

    Creating a Kinesis data delivery stream

    The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

    To create a delivery stream:

    1. Open the Kinesis Data Firehose console.
    2. Choose Create delivery stream.
    3. For Delivery stream name, type AuroraChangesToS3.
    4. For Source, choose Direct PUT.
    5. For Record transformation, choose Disabled.
    6. For Destination, choose Amazon S3.
    7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
    8. Enter a prefix if needed, and choose Next.
    9. For Data compression, choose GZIP.
    10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
    11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
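
    If you prefer to script this step, the following sketch creates an equivalent delivery stream with boto3. The role and bucket ARNs are placeholders you must supply:

    import boto3

    firehose = boto3.client('firehose')

    # Create a Direct PUT delivery stream that writes GZIP-compressed
    # objects to S3, mirroring the console settings above.
    firehose.create_delivery_stream(
        DeliveryStreamName='AuroraChangesToS3',
        DeliveryStreamType='DirectPut',
        ExtendedS3DestinationConfiguration={
            'RoleARN': 'arn:aws:iam::XXXXXXXXXXXX:role/FirehoseDeliveryRole',
            'BucketARN': 'arn:aws:s3:::YOUR_BUCKET',
            'Prefix': 'CDC/',
            'CompressionFormat': 'GZIP',
        }
    )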

     

    Creating a Lambda function

    Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

    To create the Lambda function:

    1. Open the AWS Lambda console.
    2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
    3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
    4. Choose Author from scratch.
    5. Give your function a name, and select Python 3.6 for Runtime.
    6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
    7. Choose Next on the trigger selection screen.
    8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
    9. Choose File -> Save in the code editor and then choose Save.
    import boto3
    import json

    firehose = boto3.client('firehose')
    stream_name = 'AuroraChangesToS3'


    def Kinesis_publish_message(event, context):

        # Flatten the fields sent by the Aurora trigger into one CSV line.
        firehose_data = ("%s,%s,%s,%s,%s,%s,%s,%s\n" % (event['ItemID'],
        event['Category'], event['Price'], event['Quantity'],
        event['OrderDate'], event['DestinationState'], event['ShippingType'],
        event['Referral']))

        firehose_data = {'Data': str(firehose_data)}
        print(firehose_data)

        # Forward the record to the Kinesis data delivery stream.
        firehose.put_record(DeliveryStreamName=stream_name,
                            Record=firehose_data)

    Note the Amazon Resource Name (ARN) of this Lambda function.
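
    Before wiring up the database trigger, you can sanity-check the function by invoking it directly with a hand-built event. This sketch assumes the function name you chose above (CDCFromAuroraToKinesis is used here as a placeholder):

    import boto3
    import json

    lambda_client = boto3.client('lambda')

    # A sample event shaped like the payload the stored procedure will send.
    test_event = {
        'ItemID': 1, 'Category': 'Garden', 'Price': 29.99, 'Quantity': 2,
        'OrderDate': '2016-05-01 00:00:00', 'DestinationState': 'TX',
        'ShippingType': 'Free', 'Referral': 'Online Ad',
    }

    response = lambda_client.invoke(
        FunctionName='CDCFromAuroraToKinesis',  # use your function's name
        Payload=json.dumps(test_event),
    )
    print(response['StatusCode'])  # 200 indicates a successful invocation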

    Giving Aurora permissions to invoke a Lambda function

    To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

    Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.

    Creating a stored procedure and a trigger in Amazon Aurora

    Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

    DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
    DELIMITER ;;
    CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
    									IN Category varchar(255), 
    									IN Price double(10,2),
                                        IN Quantity int(11),
                                        IN OrderDate timestamp,
                                        IN DestinationState varchar(2),
                                        IN ShippingType varchar(255),
                                        IN Referral  varchar(255)) LANGUAGE SQL 
    BEGIN
      CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
         CONCAT('{ "ItemID" : "', ItemID, 
                '", "Category" : "', Category,
                '", "Price" : "', Price,
                '", "Quantity" : "', Quantity, 
                '", "OrderDate" : "', OrderDate, 
                '", "DestinationState" : "', DestinationState, 
                '", "ShippingType" : "', ShippingType, 
                '", "Referral" : "', Referral, '"}')
         );
    END
    ;;
    DELIMITER ;

    Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

    DROP TRIGGER IF EXISTS TR_Sales_CDC;
     
    DELIMITER ;;
    CREATE TRIGGER TR_Sales_CDC
      AFTER INSERT ON Sales
      FOR EACH ROW
    BEGIN
      SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
      , New.DestinationState, New.ShippingType, New.Referral
      INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
      , @DestinationState, @ShippingType, @Referral;
      CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
      , @DestinationState, @ShippingType, @Referral);
    END
    ;;
    DELIMITER ;

    If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.
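
    To exercise the whole pipeline end to end, insert a test row using the same MySQLdb connection settings as earlier; the trigger should fire and the record should reach Amazon S3 shortly afterwards:

    import MySQLdb

    db = MySQLdb.connect(host="AURORA_CNAME", user="DBUSER",
                         passwd="DBPASSWORD", db="DB")
    cursor = db.cursor()

    # A single insert is enough to fire TR_Sales_CDC and invoke the Lambda.
    cursor.execute(
        "INSERT INTO Sales (ItemID, Category, Price, Quantity, OrderDate, "
        "DestinationState, ShippingType, Referral) "
        "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
        (42, "Office", 19.99, 1, "2016-06-01 00:00:00", "CA", "Free", "Other"))
    db.commit()

    cursor.close()
    db.close()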

    Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.
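
    A quick way to confirm that Firehose has flushed its buffer to S3 is to list the delivery prefix. The bucket name and CDC/ prefix below are assumptions matching the delivery stream configuration above:

    import boto3

    s3 = boto3.client('s3')

    # Firehose writes under the configured prefix, adding a YYYY/MM/DD/HH/
    # path, so a prefix listing catches everything delivered so far.
    resp = s3.list_objects_v2(Bucket='YOUR_BUCKET', Prefix='CDC/')
    for obj in resp.get('Contents', []):
        print(obj['Key'], obj['Size'])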

    Every time a new record is inserted in the sales table, a stored procedure is called, and it updates data in Amazon S3.

    Querying data in Amazon Redshift

    In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

    Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

    Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other formats and compression methods in the works.

    First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

    Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

    In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

    Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.
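
    The association can also be scripted. A minimal sketch with boto3, assuming your cluster identifier and the role ARN created above:

    import boto3

    redshift = boto3.client('redshift')

    # Attach the Spectrum role to the cluster; the identifier and ARN are
    # placeholders. The change takes a short while to apply.
    redshift.modify_cluster_iam_roles(
        ClusterIdentifier='my-redshift-cluster',
        AddIamRoles=['arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'],
    )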

    Next, connect to the Amazon Redshift cluster, and create an external schema and database:

    create external schema if not exists spectrum_schema
    from data catalog 
    database 'spectrum_db' 
    region 'us-east-1'
    IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
    create external database if not exists;

    Don’t forget to replace the IAM role in the statement.

    Then create an external table within the database:

     CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
      ItemID int,
      Category varchar,
      Price DOUBLE PRECISION,
      Quantity int,
      OrderDate TIMESTAMP,
      DestinationState varchar,
      ShippingType varchar,
      Referral varchar)
    ROW FORMAT DELIMITED
          FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    LOCATION 's3://{BUCKET_NAME}/CDC/'

    Query the table, and it should contain data. This is a fact table.

    select top 10 * from spectrum_schema.ecommerce_sales

     

    Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

    CREATE TABLE date_dimension (
      d_datekey           integer       not null sortkey,
      d_dayofmonth        integer       not null,
      d_monthnum          integer       not null,
      d_dayofweek                varchar(10)   not null,
      d_prettydate        date       not null,
      d_quarter           integer       not null,
      d_half              integer       not null,
      d_year              integer       not null,
      d_season            varchar(10)   not null,
      d_fiscalyear        integer       not null)
    diststyle all;

    Populate the table with data:

    copy date_dimension from 's3://reparmar-lab/2016dates' 
    iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
    DELIMITER ','
    dateformat 'auto';

    Querying data in local and external tables using Amazon Redshift

    Now that you have the fact and dimension table populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

    select sum(quantity*price) as total_sales, date_dimension.d_season
    from spectrum_schema.ecommerce_sales 
    join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
    group by date_dimension.d_season

    Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday.

    With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

    Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.
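
    To illustrate the partitioning point, here is a sketch of a date-partitioned variant of the external table, issued over a standard PostgreSQL connection to the cluster. The endpoint, credentials, and S3 prefix layout are all assumptions:

    import psycopg2

    # Redshift speaks the PostgreSQL wire protocol; fill in your endpoint.
    conn = psycopg2.connect(
        host='my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com',
        port=5439, dbname='dev', user='awsuser', password='...')
    conn.autocommit = True  # external DDL cannot run inside a transaction

    cur = conn.cursor()

    # Each order date lives under its own orderdate=YYYY-MM-DD/ prefix
    # (an assumed layout; Firehose would need to be configured accordingly).
    cur.execute("""
        CREATE EXTERNAL TABLE spectrum_schema.ecommerce_sales_part(
          ItemID int, Category varchar, Price DOUBLE PRECISION, Quantity int,
          DestinationState varchar, ShippingType varchar, Referral varchar)
        PARTITIONED BY (orderdate date)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        LOCATION 's3://YOUR_BUCKET/CDC/'
    """)

    # Partitions are registered explicitly, one per S3 prefix.
    cur.execute("""
        ALTER TABLE spectrum_schema.ecommerce_sales_part
        ADD PARTITION (orderdate='2016-01-01')
        LOCATION 's3://YOUR_BUCKET/CDC/orderdate=2016-01-01/'
    """)

    With partitions in place, a query that filters on orderdate scans only the matching prefixes.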

    Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

    Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

    After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

    Enter the database connection details, validate the connection, and create the data source.

    Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

    Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

    On the next screen, choose Edit.

    In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price*Quantity fields. Then choose Create. Save and visualize it.

    Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.

    After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

    Final notes

    Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

    In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements as well.

    Keep the following in mind:

    • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic. This would result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
    • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
    • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.
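
    One way to stay within those limits under bursty write traffic is to batch records instead of sending them one at a time; put_record_batch accepts up to 500 records per call. A minimal sketch:

    import boto3

    firehose = boto3.client('firehose')

    def send_batch(rows, stream_name='AuroraChangesToS3', batch_size=500):
        """Send pre-formatted CSV rows to Firehose in batches of up to 500."""
        for i in range(0, len(rows), batch_size):
            chunk = rows[i:i + batch_size]
            resp = firehose.put_record_batch(
                DeliveryStreamName=stream_name,
                Records=[{'Data': row} for row in chunk],
            )
            # FailedPutCount > 0 means some records were rejected and
            # should be retried, ideally with backoff.
            if resp['FailedPutCount']:
                print('retry needed for', resp['FailedPutCount'], 'records')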

    In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

    For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

    If you have questions or suggestions, please comment below.


    Additional Reading

    If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


    About the Authors

    Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.

     

     

     

    Backblaze Cloud Backup Release 5.2

    Post Syndicated from Yev original https://www.backblaze.com/blog/backblaze-cloud-backup-release-5-2/

    We’re pleased to start the year off the right way with an update to Backblaze Cloud Backup, version 5.2! This is a smaller release, but it does increase backup speeds, optimize the backup client, and address a few minor bugs that we’re excited to lay to rest.

    What’s New

    • Increased transmission speed of files between 30MB and 400MB+.
    • Optimized indexing to decrease system resource usage and lower the performance impact on computers that are backing up to Backblaze.
    • Adjusted external hard drive monitoring and increased the speed of indexing.
    • Changed copyright to 2018.

    Release Version Number:

    • Mac — 5.2.0
    • PC — 5.2.0

    Clients:
    Backblaze Personal Backup
    Backblaze Business Backup

    Availability:
    January 4, 2018

    Upgrade Methods:

    • Immediately as a download from: files.backblaze.com
    • Rolling out soon when performing a “Check for Updates” (right-click on the Backblaze icon and then select “Check for Updates”).
    • Rolling out soon as a download from: https://secure.backblaze.com/update.htm.
    • Rolling out soon as the default download from: www.backblaze.com.
    • Auto-update will begin in a couple of weeks.

    Cost:
    This is a free update for all Backblaze Cloud Backup consumer and business customers and active trial users.

    The post Backblaze Cloud Backup Release 5.2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Dish Network Files Two Lawsuits Against Pirate IPTV Providers

    Post Syndicated from Andy original https://torrentfreak.com/dish-network-files-two-lawsuits-against-pirate-iptv-providers-180103/

    In broad terms, there are two types of unauthorized online streaming of live TV. The first is via open-access websites where users can view for free. The second features premium services to which viewers are required to subscribe.

    Usually available for a few dollars, euros, or pounds per month, the latter are gaining traction all around the world. Service levels are relatively high and the majority of illicit packages offer a dazzling array of programming, often putting official providers in the shade.

    For this reason, commercial IPTV providers are considered a huge threat to broadcasters’ business models, since they offer a broadly comparable and accessible service at a much cheaper price. This is forcing companies such as US giant Dish Networks to court, seeking relief.

    Following on from a lawsuit filed last year against Kodi add-on ZemTV and TVAddons.ag, Dish has just filed two more lawsuits targeting a pair of unauthorized pirate IPTV services.

    Filed in Maryland and Texas respectively, the actions are broadly similar, with the former targeting a provider known as Spider-TV.

    The suit, filed against Dima Furniture Inc. and Mohammad Yusif (individually and collectively doing business as Spider-TV), claims that the defendants are “capturing broadcasts of television channels exclusively licensed to DISH and are unlawfully retransmitting these channels over the Internet to their customers throughout the United States, 24 hours per day, 7 days per week.”

    Dish claim that the defendants profit from the scheme by selling set-top boxes along with subscriptions, charging around $199 per device loaded with 13 months of service.

    Dima Furniture is a Maryland corporation, registered at Takoma Park, Maryland 20912, an address that is listed on the Spider-TV website. The connection between the defendants is further supported by FCC references which identify Spider devices in the market. Mohammad Yusif is claimed to be the president, executive director, general manager, and sole shareholder of Dima Furniture.

    Dish describes itself as the fourth largest pay-television provider in the United States, delivering copyrighted programming to millions of subscribers nationwide by means of satellite delivery and over-the-top services. Dish has acquired the rights to do this, the defendants have not, the broadcaster states.

    “Defendants capture live broadcast signals of the Protected Channels, transcode these signals into a format useful for streaming over the Internet, transfer the transcoded content to one or more servers provided, controlled, and maintained by Defendants, and then transmit the Protected Channels to users of the Service through OTT delivery, including users in the United States,” the lawsuit reads.

    It’s claimed that in July 2015, Yusif registered Spider-TV as a trade name of Dima Furniture with the Department of Assessments and Taxation Charter Division, describing the business as “Television Channel Installation”. Since then, the defendants have been illegally retransmitting Dish channels to customers in the United States.

    The overall offer from Spider-TV appears to be considerable, with a claimed 1,300 channels from major regions including the US, Canada, UK, Europe, Middle East, and Africa.

    Importantly, Dish state that the defendants know that their activities are illegal, since the provider sent at least 32 infringement notices since January 20, 2017 demanding an end to the unauthorized retransmission of its channels. It went on to send even more to the defendants’ ISPs.

    “DISH and Networks sent at least thirty-three additional notices requesting the removal of infringing content to Internet service providers associated with the Service from February 16, 2017 to the filing of this Complaint. Upon information and belief, at least some of these notices were forwarded to Defendants,” the lawsuit reads.

    But while Dish says that the takedowns responded to by the ISPs were initially successful, the defendants took evasive action by transmitting the targeted channels from other locations.

    Describing the defendants’ actions as “willful, malicious, intentional [and] purposeful”, Dish is suing for Direct Copyright Infringement, demanding a permanent injunction preventing the promotion and provision of the service plus statutory damages of $150,000 per registered work. The final amount isn’t specified but the numbers are potentially enormous. In addition, Dish demands attorneys’ fees, costs, and the seizure of all infringing articles.

    The second lawsuit, filed in Texas, is broadly similar. It targets Mo’ Ayad Al Zayed Trading Est., and Mo’ Ayad Fawzi Al Zayed (individually and collectively doing business as Tiger International Company), and Shenzhen Tiger Star Electronical Co., Ltd, otherwise known as Shenzhen Tiger Star.

    Dish claims that these defendants also illegally capture and retransmit channels to customers in the United States. IPTV boxes costing up to $179 including one year’s service are the method of delivery.

    In common with the Maryland case, Dish says it sent almost two dozen takedown notices to ISPs utilized by the defendants. These were also countered by the unauthorized service retransmitting Dish channels from other servers.

    The biggest difference between the Maryland and Texas cases is that while Yusif/Spider/Dima Furniture are said to be in the US, Zayed is said to reside in Amman, Jordan, and Tiger Star is registered in Shenzhen, China. However, since the unauthorized service is targeted at customers in Texas, Dish states that the Texas court has jurisdiction.

    Again, Dish is suing for Direct Infringement, demanding damages, costs, and a permanent injunction.

    The complaints can be found here and here.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    12 B2 Power Tips for New Users

    Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/newbie-cloud-storage-guide/

    B2 Tips for Beginners
    You probably know that B2 is Backblaze’s fast and economical general purpose cloud storage, but do you know everything that you can do with it?

    If you’re a B2 newbie, here are some blazing power tips to help you get the most out of B2 Cloud Storage.

    If you’re a B2 expert or a developer, stay tuned. We’ll be publishing power tips for you in the near future. Enter your email address using the Join button at the top of the page and you won’t miss any upcoming blog posts.

    1    Drag and Drop Files to B2

    Use Backblaze’s drag-and-drop web interface to store, restore, and share B2 files.


    2    Share Files You Have in B2

    You can designate a B2 bucket as private or public. If the bucket is public and you’d like to share a file with others, you can create and copy a Friendly URL and paste it into an email or message.
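
    For the curious, a Friendly URL is just your account’s download host plus the bucket and file names. A sketch (the f000 host is an assumption; the actual host varies by account and is shown in the B2 web interface):

    # Compose a B2 Friendly URL for a file in a public bucket.
    def friendly_url(bucket_name, file_name,
                     download_host='f000.backblazeb2.com'):
        return 'https://{}/file/{}/{}'.format(download_host,
                                              bucket_name, file_name)

    print(friendly_url('my-public-bucket', 'photos/cat.jpg'))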


    3    Use B2 Just Like Any Other Drive

    Use B2 just as if it were a drive on your computer — drag and drop files and folders, save files to it — using one of a number of integrations that let you mount B2 as a volume in your Windows or Macintosh file system (Mountain Duck, ExpanDrive, odrive). Pick the files you want to save, drop them in a desktop folder, and they are automatically saved to B2.


    4    Drag and Drop To and From B2 from the Desktop, Too

    Use Cyberduck, a B2 integration partner, to drag-and-drop files to and from B2 right from the Windows or Macintosh desktop.


    5    Determine the Speed of your Connection to B2

    You can check the speed and latency of your internet connection between your location and Backblaze’s data centers, and see how much data you could theoretically transfer in a day, at https://www.backblaze.com/speedtest/.


    6    No Matter What Type of Data you Have, B2 Can Handle It

    You can transfer any type or amount of data to B2 from any device that can connect to the internet, including Windows, Macintosh, Linux, servers, mobile devices, external drives, and NAS.


    7    Get Your Files from B2 by Mail

    You have a choice of how to receive your data from B2. You can download data directly or request that your data be shipped to you via FedEx.


    8    Back Up Your Backups to B2

    You can automatically back up your Apple Time Machine backup or Windows backup to a NAS and then back that up to B2 to give you both local and cloud backups for a 3-2-1 backup solution.


    9    Protect Your B2 Account with Two-Factor Verification

    You can (and should) protect your Backblaze account with two-factor verification (such as using an app on your smartphone), and you can use backup codes and SMS verification in case you lose access to your smartphone.


    10    Preview Photos Stored on B2 from the Web

    Preview your photos as thumbnails (and optionally download individual photos) in common image formats (including jpg, png, img, tiff, and gif) with the B2 web interface.


    11    B2 Has Group Management, Too

    Backblaze Groups works for B2, too — just like Backblaze Personal Backup and Business Backup. You can manage billing, group membership, and control access using Group Management in your Backblaze account dashboard.


    12    B2 Integrations Make B2 More Powerful and Useful

    There are more than 30 software and hardware integrations that make B2 more powerful. You can visit our integrations page to find a solution that works for you.

    Want to Learn More About B2?

    You can find more information on B2 on our website and in our help pages.

    The post 12 B2 Power Tips for New Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    [$] Welcome to 2018

    Post Syndicated from corbet original https://lwn.net/Articles/742545/rss

    Welcome to the first LWN.net feature article for 2018. The holidays are
    over and it’s time to get back to work. One of the first orders of
    business here at LWN is keeping up with our ill-advised tradition of making
    unlikely predictions for the coming year. There can be no doubt that 2018
    will be an eventful and interesting year; here’s our attempt at guessing
    how it will play out.