Tag Archives: 2016

Migrating .NET Classic Applications to Amazon ECS Using Windows Containers

Post Syndicated from Sundar Narasiman original https://aws.amazon.com/blogs/compute/migrating-net-classic-applications-to-amazon-ecs-using-windows-containers/

This post contributed by Sundar Narasiman, Arun Kannan, and Thomas Fuller.

AWS recently announced the general availability of Windows container management for Amazon Elastic Container Service (Amazon ECS). Docker containers and Amazon ECS make it easy to run and scale applications on a virtual machine by abstracting the complex cluster management and setup needed.

Classic .NET applications are developed with .NET Framework 4.7.1 or older and can run only on a Windows platform. These include Windows Communication Foundation (WCF) services, ASP.NET Web Forms applications, and ASP.NET MVC web apps and web APIs.

Why classic ASP.NET?

ASP.NET MVC 4.6 and older versions of ASP.NET occupy a significant footprint in the enterprise web application space. As enterprises move towards microservices for new or existing applications, containers are one of the stepping stones for migrating from monolithic to microservices architectures. Additionally, support for Windows containers in Windows 10 and Windows Server 2016, along with Visual Studio tooling for Docker, simplifies the containerization of ASP.NET MVC apps.

Getting started

In this post, you pick an ASP.NET 4.6.2 MVC application and get step-by-step instructions for migrating to ECS using Windows containers. The detailed steps, AWS CloudFormation template, Microsoft Visual Studio solution, ECS service definition, and ECS task definition are available in the aws-ecs-windows-aspnet GitHub repository.

To help you get started running Windows containers, here is the reference architecture for Windows containers on GitHub: ecs-refarch-cloudformation-windows. This reference architecture is a layered CloudFormation stack: it calls other stacks to create the environment. The CloudFormation YAML template in this reference architecture was used as the basis for the single JSON CloudFormation stack that is used in the migration steps below.

Steps for Migration

The code and templates to implement this migration can be found on GitHub: https://github.com/aws-samples/aws-ecs-windows-aspnet.

  1. Make sure that your development environment has the latest versions and updates of Visual Studio 2017, Windows 10, and Docker for Windows (Stable channel).
  2. Next, containerize the ASP.NET application and test it locally. The size of Windows container application images is generally larger compared to Linux containers. This is because the base image of the Windows container itself is large in size, typically greater than 9 GB.
  3. After the application is containerized, the container image needs to be pushed to Amazon Elastic Container Registry (Amazon ECR). Images stored in ECR are compressed to improve pull times and reduce storage costs. In this case, you can see that ECR compresses the image to around 1 GB, for an optimization factor of 90%.
  4. Create a CloudFormation stack using the template in the ‘CloudFormation template’ folder. This creates an ECS service, a task definition (referring to the containerized ASP.NET application), and the other related components mentioned in the ECS reference architecture for Windows containers. A scripted alternative for this step and the next is sketched after this list.
  5. After the stack is created, verify the successful creation of the ECS service, ECS instances, running tasks (with the threshold mentioned in the task definition), and the Application Load Balancer’s successful health check against running containers.
  6. Navigate to the Application Load Balancer URL and see the successful rendering of the containerized ASP.NET MVC app in the browser.
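
If you prefer to script steps 4 and 5, here is a minimal boto3 sketch of launching the stack and then checking the ECS service. The template file name, stack name, cluster name, and service name are hypothetical placeholders; the actual values come from the GitHub repository and your own environment.

import boto3

cfn = boto3.client('cloudformation')
ecs = boto3.client('ecs')

# Step 4: create the stack from the template in the 'CloudFormation template' folder.
# The file name, stack name, and resource names below are hypothetical placeholders.
with open('ecs-windows-aspnet.json') as template:
    cfn.create_stack(
        StackName='aspnet-windows-ecs',
        TemplateBody=template.read(),
        Capabilities=['CAPABILITY_IAM'])

# Wait for the stack to finish creating before checking the service.
cfn.get_waiter('stack_create_complete').wait(StackName='aspnet-windows-ecs')

# Step 5: verify that the ECS service reached its desired task count.
service = ecs.describe_services(
    cluster='aspnet-windows-cluster',        # hypothetical cluster name
    services=['aspnet-windows-service'])['services'][0]
print('{}/{} tasks running'.format(service['runningCount'], service['desiredCount']))

The Application Load Balancer health check in step 5 is easiest to confirm in the console, or with the elbv2 describe-target-health API.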

Key Notes

  • Generally, Windows container images occupy a large amount of space (on the order of a few GB).
  • Not all of the task definition parameters available for Linux containers are available for Windows containers. For more information, see Windows Task Definitions.
  • An Application Load Balancer can be configured to route requests to one or more ports on each container instance in a cluster. Dynamic port mapping allows you to have multiple tasks from a single service on the same container instance.
  • IAM roles for Windows tasks require extra configuration. For more information, see Windows IAM Roles for Tasks. For this post, configuration was handled by the CloudFormation template.
  • The ECS container agent log file can be accessed for troubleshooting Windows containers: C:\ProgramData\Amazon\ECS\log\ecs-agent.log

Summary

In this post, you migrated an ASP.NET MVC application to ECS using Windows containers.

The logical next step is to automate the activities for migration to ECS and build a fully automated continuous integration/continuous deployment (CI/CD) pipeline for Windows containers. This can be orchestrated by leveraging services such as AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Amazon ECR, and Amazon ECS. You can learn more about how this is done in the Set Up a Continuous Delivery Pipeline for Containers Using AWS CodePipeline and Amazon ECS post.

If you have questions or suggestions, please comment below.

Early Challenges: Managing Cash Flow

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/managing-cash-flow/

Cash flow projection charts

This post by Backblaze’s CEO and co-founder Gleb Budman is the eighth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency
  8. Early Challenges: Managing Cash Flow


Running out of cash is one of the quickest ways for a startup to go out of business. When you are starting a company the question of where to get cash is usually the top priority, but managing cash flow is critical for every stage in the lifecycle of a company. As a primarily bootstrapped but capital-intensive business, managing cash flow at Backblaze was and still is a key element of our success and requires continued focus. Let’s look at what we learned over the years.

Raising Your Initial Funding

When starting a tech business in Silicon Valley, the default assumption is that you will immediately try to raise venture funding. There are certainly many advantages to raising funding — not the least of which is that you don’t need to be cash-flow positive since you have cash in the bank and the expectation is that you will have a “burn rate,” i.e. you’ll be spending more than you make.

Note: While you’re not expected to be cash-flow positive, that doesn’t mean you don’t have to worry about cash. Cash-flow management will determine your burn rate. Whether you can get to cash-flow breakeven or need to raise another round of funding is a direct byproduct of your cash flow management.

Also, raising funding takes time (most successful fundraising cycles take 3-6 months start-to-finish), and time at a startup is in short supply. Constantly trying to raise funding can take away from product development and pursuing growth opportunities. If you’re not successful in raising funding, you then have to either shut down or find an alternate method of funding the business.

Sources of Funding

Depending on the stage of the company, type of company, and other factors, you may have access to different sources of funding. Let’s list a number of them:

Customers

Sales — the best kind of funding. It is non-dilutive, doesn’t have to be paid back, and is a direct metric of the success of your company.

Pre-Sales — some customers may be willing to pay you for a product in beta, a test, or pre-pay for a product they’ll receive when finished. Pre-Sales income also is great because it shares the characteristics of cash from sales, but you get the cash early. It also can be a good sign that the product you’re building fills a market need. We started charging for Backblaze computer backup while it was still in private beta, which allowed us to not only collect cash from customers, but also test the billing experience and users’ real desire for the service.

Services — if you’re a service company and customers are paying you for that, great. Your capacity is effectively limited by the number of billable hours in a day, but as demand grows, you can add more employees to increase the total number of billable hours.

Note: If you’re a product company and customers are paying you to consult, that can provide much needed cash, and could provide feedback toward the right product. However, it can also distract from your core business, send you down a path where you’re building a product for a single customer, and addict you to a path that prevents you from building a scalable business.

Investors

Yourself — you likely are putting your time into the business, and deferring salary in the process. You may also put your own cash into the business either as an investment or a loan.

Angels — angels are ideal as early investors since they are used to investing in businesses with little to no traction. AngelList is a good place to find them, though finding people you’re connected with through someone that knows you well is best.

Crowdfunding — a component of the JOBS Act permitted entrepreneurs to raise money from nearly anyone since May 2016. The SEC imposes limits on both investors and the companies. This article goes into some depth on the options and sites available.

VCs — VCs are ideal for companies that need to raise at least a few million dollars and intend to build a business that will be worth over $1 billion.

Debt

Friends & Family — F&F are often the first people to give you money because they are investing in you. It’s great to have some early supporters, but it also can be risky to take money from people who aren’t used to the risks. The key advice here is to only take money from people who won’t mind losing it. If someone is talking about using their children’s college funds or borrowing from their 401k, say ‘no thank you’ — even if they’re sure they want to loan you money.

Bank Loans — a variety of loan types exist, but most either require the company to have been operational for a couple years, be able to borrow against money the company has or is making, or be able to get a personal guarantee from the founders whereby their own credit is on the line. Fundera provides a good overview of loan options and can help secure some, but most will not be an option for a brand new startup.

Grants

Government — in some areas there is the potential for government grants to facilitate research. The SBIR program facilitates some such grants.

At Backblaze, we used a number of these options:

• Investors/Yourself
We loaned a cumulative total of a couple hundred thousand dollars to the company and invested our time by going without a salary for a year and a half.
• Customers/Pre-Sales
We started selling the Backblaze service while it was still in beta.
• Customers/Sales
We launched v1.0 and kept selling.
• Investors/Angels
After a year and a half, we raised $370k from 11 angels. All of them were either people whom we knew personally or were a strong recommendation from a mutual friend.
• Debt/Loans
After a couple years we were able to get equipment leases whereby the Storage Pods and hard drives were used as collateral to secure the lease on them.
• Investors/VCs
After five years we raised $5m from TMT Investments to add to the balance sheet and invest in growth.

The variety and quantity of sources we used is by no means uncommon.

GAAP vs. Cash

Most companies start tracking financials based on cash, and as they scale they switch to GAAP (Generally Accepted Accounting Principles). Cash is easier to track — we got paid $XXXX and spent $YYY — and as often mentioned, is required for the business to stay alive. GAAP has more subtlety and complexity, but provides a clearer picture of how the business is really doing. Backblaze was on a ‘cash’ system for the first few years, then switched to GAAP. For this post, I’m going to focus on things that help cash flow, not GAAP profitability.

Stages of Cash Flow Management

All-spend

In a pure service business (e.g. a sole-proprietor law firm), you may have no expenses other than your time, so this stage doesn’t exist. However, in a product business there is a period of time where you are building the product and have nothing to sell. You have zero cash coming in, but have cash going out. Your cash-flow is completely negative and you need funds to cover that.

Sales-generating

Starting to see cash come in from customers is thrilling. I initially had our system set up to email me with every $5 payment we received. You’re making sales, but not covering expenses.

Ramen-profitable

But it takes a lot of $5 payments to pay for servers and salaries, so for a while expenses are likely to outstrip sales. Getting to ramen-profitable is a critical stage where sales cover the business expenses and are “paying enough for the founders to eat ramen.” This extends the runway for a business, but is not completely sustainable, since presumably the founders can’t (or won’t) live forever on a subsistence salary.

Business-profitable

This is the ultimate stage whereby the business is truly profitable, including paying everyone market-rate salaries. A business at this stage is self-sustaining. (Of course, market shifts and plenty of other challenges can kill the business, but cash-flow issues alone will not.)

Note, I’m using the word ‘profitable’ here to mean this is still on a cash-basis.

Backblaze was in the all-spend stage for just over a year, during which time we built the service and hadn’t yet made the service available to customers. Backblaze was in the sales-generating stage for nearly another year before the company was barely ramen-profitable where sales were covering the company expenses and paying the founders minimum wage. (I say ‘barely’ since minimum wage in the SF Bay Area is arguably never subsistence.) It took almost three more years before the company was business-profitable, paying everyone including the founders market-rate.

Cash Flow Forecasting

When raising funding it’s helpful to think of milestones reached. You don’t necessarily need enough cash on day one to last for the next 100 years of the company. Some good milestones to consider are how much cash you need to prove there is a market need, prove you can build a product to meet that need, or get to ramen-profitable.

Two things to consider:

1) Unit Economics (COGS)

If your product is 100% software, this may not be relevant. Once software is built it costs effectively nothing to deliver the product to one customer or one million customers. However, in most businesses there is some incremental cost to provide the product. If you’re selling a hardware device, perhaps you sell it for $100 but it costs you $50 to make it. This is called “COGS” (Cost of Goods Sold).

Many products rely on cloud services where the costs scale with growth. That model works great, but it’s still important to understand what the costs are for the cloud service you use per unit of product you sell.

Support is often done by the founders early-on in a business, but that is another real cost to factor in and estimate on a per-user basis. Taking all of the per unit costs combined, you may charge $10/month/user for your service, but if it costs you $7/month/user in cloud services, you’re only netting $3/month/user.

2) Operating Expenses (OpEx)

These are expenses that don’t scale with the number of product units you sell. Typically this includes research & development, sales & marketing, and general & administrative expenses. Presumably there is a certain level of these functions required to build the product, market it, sell it, and run the organization. You can choose to invest or cut back on these, but you’ll still make the same amount per product unit.

Incremental Net Profit Per Unit

If you’ve calculated your COGS and your unit economics are “upside down,” where the amount you charge is less than what it costs you to provide your service, it’s worth thinking hard about how that’s going to change over time. If it will not change, there is no scale that will make the business work. Presuming you do make money on each unit of product you sell — what is sometimes referred to as “Contribution Margin” — consider how many of those product units you need to sell to cover your operating expenses as described above.

Calculating Your Profit

The math on getting to ramen-profitable is simple:

(Number of Product Units Sold x Contribution Margin) - Operating Expenses = Profit

If your operating expenses include subsistence salaries for the founders and profit > $0, you’re ramen-profitable.
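
As a quick illustration with purely hypothetical numbers (reusing the $10/month price and $7/month per-user cost from the COGS example above), the arithmetic might look like this:

# Hypothetical numbers only -- plug in your own.
units_sold = 1000                 # paying users
price_per_unit = 10.00            # $ per user per month
cogs_per_unit = 7.00              # cloud + support cost per user per month
contribution_margin = price_per_unit - cogs_per_unit   # $3 per user per month

operating_expenses = 2500.00      # subsistence founder salaries plus other fixed costs

profit = units_sold * contribution_margin - operating_expenses
print(profit)                     # 500.0 -> profit > 0, so this business is ramen-profitable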

Improving Cash Flow

Having access to sources of cash, whether from selling to customers or other methods, is excellent. But needing less cash gives you more choices and allows you to either dilute less, owe less, or invest more.

There are two ways to improve cash flow:

1) Collect More Cash

The best way to collect more cash is to provide more value to your customers and as a result have them pay you more. Additional features/products/services can allow this. However, you can also collect more cash by changing how you charge for your product. If you have a subscription, changing from charging monthly to yearly dramatically improves your cash flow. If you have a product that customers use up, selling a year’s supply instead of selling them one-by-one can help.

2) Spend Less Cash

Reducing COGS is a fantastic way to spend less cash in a scalable way. If you can do this without harming the product or customer experience, you win. There are a myriad of ways to also reduce operating expenses, including taking sub-market salaries, using your home instead of renting office space, staying focused on your core product, etc.

Ultimately, collecting more and spending less cash dramatically simplifies the process of getting to ramen-profitable and later to business-profitable.

Be Careful (Why GAAP Matters)

A word of caution: while running out of cash will put you out of business immediately, overextending yourself will likely put you out of business not much later. GAAP shows how a business is really doing; cash doesn’t. If you only focus on cash, it is possible to commit yourself to both delivering products and repaying loans in the future in an unsustainable fashion. If you’re taking out loans, watch the total balance and monthly payments you’re committing to. If you’re asking customers for pre-payment, make sure you believe you can deliver on what they’ve paid for.

Summary

There are numerous challenges to building a business, and ensuring you have enough cash is amongst the most important. Having the cash to keep going lets you keep working on all of the other challenges. The frameworks above were critical for maintaining Backblaze’s cash flow and cash balance. Hopefully you can take some of the lessons we learned and apply them to your business. Let us know what works for you in the comments below.

The post Early Challenges: Managing Cash Flow appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Now Open – Third AWS Availability Zone in London

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-third-aws-availability-zone-in-london/

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on its infrastructure costs. Clearscore, with plans to disrupt the credit score industry, has also chosen to host its entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to their customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than half a million users. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;

US Govt Brands Torrent, Streaming & Cyberlocker Sites As Notorious Markets

Post Syndicated from Andy original https://torrentfreak.com/us-govt-brands-torrent-streaming-cyberlocker-sites-as-notorious-markets-180115/

In its annual “Out-of-Cycle Review of Notorious Markets” the office of the United States Trade Representative (USTR) has listed a long list of websites said to be involved in online piracy.

The list is compiled with high-level input from various trade groups, including the MPAA and RIAA who both submitted their recommendations (1,2) during early October last year.

With the word “allegedly” used more than two dozen times in the report, the US government notes that its report does not constitute cast-iron proof of illegal activity. However, it urges the countries in which the so-called “notorious markets” operate to take action where they can, while putting owners and facilitators on notice that their activities are under the spotlight.

“A goal of the List is to motivate appropriate action by owners, operators, and service providers in the private sector of these and similar markets, as well as governments, to reduce piracy and counterfeiting,” the report reads.

“USTR highlights the following marketplaces because they exemplify global counterfeiting and piracy concerns and because the scale of infringing activity in these marketplaces can cause significant harm to U.S. intellectual property (IP) owners, consumers, legitimate online platforms, and the economy.”

The report begins with a page titled “Issue Focus: Illicit Streaming Devices”. Unsurprisingly, particularly given their place in dozens of headlines last year, the segment focuses on the set-top box phenomenon. The piece doesn’t list any apps or software tools as such but highlights the general position, claiming a cost to the US entertainment industry of $4-5 billion a year.

Torrent Sites

In common with previous years, the USTR goes on to list several of the world’s top torrent sites but, due to changes in circumstances, others have been delisted. ExtraTorrent, which shut down in May 2017, is one such example.

As the world’s most famous torrent site, The Pirate Bay gets a prominent mention, with the USTR noting that the site is of “symbolic importance as one of the longest-running and most vocal torrent sites.” The USTR underlines the site’s resilience by noting its hydra-like form while revealing an apparent secret concerning its hosting arrangements.

“The Pirate Bay has allegedly had more than a dozen domains hosted in various countries around the world, applies a reverse proxy service, and uses a hosting provider in Vietnam to evade further enforcement action,” the USTR notes.

Other torrent sites singled out for criticism include RARBG, which was nominated for the listing by the movie industry. According to the USTR, the site is hosted in Bosnia and Herzegovina and has changed hosting services to prevent shutdowns in recent years.

1337x.to and the meta-search engine Torrentz2 are also given a prime mention, with the USTR noting that they are “two of the most popular torrent sites that allegedly infringe U.S. content industry’s copyrights.” Russia’s RuTracker is also targeted for criticism, with the government noting that it’s now one of the most popular torrent sites in the world.

Streaming & Cyberlockers

While torrent sites are still important, the USTR reserves considerable space in its report for streaming portals and cyberlocker-type services.

4Shared.com, a file-hosting site that has been targeted by dozens of millions of copyright notices, is reportedly no longer able to use major US payment providers. Nevertheless, the British Virgin Islands company still collects significant sums from premium accounts, advertising, and offshore payment processors, USTR notes.

Cyberlocker Rapidgator gets another prominent mention in 2017, with the USTR noting that the Russian-hosted platform generates millions of dollars every year through premium memberships while employing rewards and affiliate schemes.

Due to its increasing popularity as a hosting and streaming operation, Openload.co (Romania) is now a big target for the USTR. “The site is used frequently in combination with add-ons in illicit streaming devices. In November 2017, users visited Openload.co a staggering 270 million times,” the USTR writes.

Owned by a Swiss company and hosted in the Netherlands, the popular site Uploaded is also criticized by the US alongside France’s 1Fichier.com, which allegedly hosts pirate games while being largely unresponsive to takedown notices. Dopefile.pk, a Pakistan-based storage outfit, is also highlighted.

On the video streaming front, it’s perhaps no surprise that the USTR focuses on sites like FMovies (Sweden), GoStream (Vietnam), Movie4K.tv (Russia) and PrimeWire. An organization collectively known as the MovShare group which encompasses Nowvideo.sx, WholeCloud.net, NowDownload.cd, MeWatchSeries.to and WatchSeries.ac, among others, is also listed.

Unauthorized music / research papers

While most of the above are either focused on video or feature it as part of their repertoire, other sites are listed for their attention to music. Convert2MP3.net is named as one of the most popular stream-ripping sites in the world and is highlighted due to the prevalence of YouTube-downloader sites and the 2017 demise of YouTube-MP3.

“Convert2MP3.net does not appear to have permission from YouTube or other sites and does not have permission from right holders for a wide variety of music represented by major U.S. labels,” the USTR notes.

Given the amount of attention the site has received in 2017 as ‘The Pirate Bay of Research’, Libgen.io and Sci-Hub.io (not to mention the endless proxy and mirror sites that facilitate access) are given a detailed mention in this year’s report.

“Together these sites make it possible to download — all without permission and without remunerating authors, publishers or researchers — millions of copyrighted books by commercial publishers and university presses; scientific, technical and medical journal articles; and publications of technological standards,” the USTR writes.

Service providers

But it’s not only sites that are being put under pressure. Following a growing list of nominations in previous years, Swiss service provider Private Layer is again singled out as a rogue player in the market for hosting 1337x.to and Torrentz2.eu, among others.

“While the exact configuration of websites changes from year to year, this is the fourth consecutive year that the List has stressed the significant international trade impact of Private Layer’s hosting services and the allegedly infringing sites it hosts,” the USTR notes.

“Other listed and nominated sites may also be hosted by Private Layer but are using reverse proxy services to obfuscate the true host from the public and from law enforcement.”

The USTR notes Switzerland’s efforts to close a legal loophole that restricts enforcement and looks forward to a positive outcome when the draft amendment is considered by parliament.

Perhaps a little surprisingly given its recent anti-piracy efforts and overtures to the US, Russia’s leading social network VK.com again gets a place on the new list. The USTR recognizes VK’s efforts but insists that more needs to be done.

Social networking and e-commerce

“In 2016, VK reached licensing agreements with major record companies, took steps to limit third-party applications dedicated to downloading infringing content from the site, and experimented with content recognition technologies,” the USTR writes.

“Despite these positive signals, VK reportedly continues to be a hub of infringing activity and the U.S. motion picture industry reports that they find thousands of infringing files on the site each month.”

Finally, in addition to traditional pirate sites, the US also lists online marketplaces that allegedly fail to meet appropriate standards. Re-added to the list in 2016 after a brief hiatus in 2015, China’s Alibaba is listed again in 2017. The development provoked an angry response from the company.

Describing his company as a “scapegoat”, Alibaba Group President Michael Evans said that his platform had achieved a 25% drop in takedown requests and has even been removing infringing listings before they make it online.

“In light of all this, it’s clear that no matter how much action we take and progress we make, the USTR is not actually interested in seeing tangible results,” Evans said in a statement.

The full list of sites in the Notorious Markets Report 2017 (pdf) can be found below.

– 1fichier.com – (cyberlocker)
– 4shared.com – (cyberlocker)
– convert2mp3.net – (stream-ripper)
– Dhgate.com (e-commerce)
– Dopefile.pl – (cyberlocker)
– Firestorm-servers.com (pirate gaming service)
– Fmovies.is, Fmovies.se, Fmovies.to – (streaming)
– Gostream.is, Gomovies.to, 123movieshd.to (streaming)
– Indiamart.com (e-commerce)
– Kinogo.club, kinogo.co (streaming host, platform)
– Libgen.io, sci-hub.io, libgen.pw, sci-hub.cc, sci-hub.bz, libgen.info, lib.rus.ec, bookfi.org, bookzz.org, booker.org, booksc.org, book4you.org, bookos-z1.org, booksee.org, b-ok.org (research downloads)
– Movshare Group – Nowvideo.sx, wholecloud.net, auroravid.to, bitvid.sx, nowdownload.ch, cloudtime.to, mewatchseries.to, watchseries.ac (streaming)
– Movie4k.tv (streaming)
– MP3VA.com (music)
– Openload.co (cyberlocker / streaming)
– 1337x.to (torrent site)
– Primewire.ag (streaming)
– Torrentz2, Torrentz2.me, Torrentz2.is (torrent site)
– Rarbg.to (torrent site)
– Rebel (domain company)
– Repelis.tv (movie and TV linking)
– RuTracker.org (torrent site)
– Rapidgator.net (cyberlocker)
– Taobao.com (e-commerce)
– The Pirate Bay (torrent site)
– TVPlus, TVBrowser, Kuaikan (streaming apps and addons, China)
– Uploaded.net (cyberlocker)
– VK.com (social networking)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

A Look Back At 2017 – Tools & News Highlights

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/look-back-2017-tools-news-highlights/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

A Look Back At 2017 – Tools & News Highlights

So here we are in 2018, taking a look back at 2017 – quite a year it was. We somehow forgot to do this last year, so we just have the 2015 summary and the 2014 summary, but no 2016 edition.

2017 News Stories

All kinds of things happened in 2017, starting with some pretty comical shit and the MongoDB Ransack – Over 33,000 Databases Hacked. I’ve personally had very poor experiences with MongoDB in general, and I did notice the sloppy defaults (listen on all interfaces, no password) when I used it. I believe the defaults have since been corrected – but I still don’t have a good impression of it.

Read the rest of A Look Back At 2017 – Tools & News Highlights now! Only available at Darknet.

Yet Another FBI Proposal for Insecure Communications

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/yet_another_fbi.html

Deputy Attorney General Rosenstein has given talks where he proposes that tech companies decrease their communications and device security for the benefit of the FBI. In a recent talk, his idea is that tech companies just save a copy of the plaintext:

Law enforcement can also partner with private industry to address a problem we call “Going Dark.” Technology increasingly frustrates traditional law enforcement efforts to collect evidence needed to protect public safety and solve crime. For example, many instant-messaging services now encrypt messages by default. They prevent the police from reading those messages, even if an impartial judge approves their interception.

The problem is especially critical because electronic evidence is necessary for both the investigation of a cyber incident and the prosecution of the perpetrator. If we cannot access data even with lawful process, we are unable to do our job. Our ability to secure systems and prosecute criminals depends on our ability to gather evidence.

I encourage you to carefully consider your company’s interests and how you can work cooperatively with us. Although encryption can help secure your data, it may also prevent law enforcement agencies from protecting your data.

Encryption serves a valuable purpose. It is a foundational element of data security and essential to safeguarding data against cyber-attacks. It is critical to the growth and flourishing of the digital economy, and we support it. I support strong and responsible encryption.

I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so.

Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a “backdoor.” In fact, those very capabilities are marketed and sought out.

I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.

The question is whether to require a particular goal: When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help. The government does not need to hold the key.

Rosenstein is right that many services like Gmail naturally keep plaintext in the cloud. This is something we pointed out in our 2016 paper: “Don’t Panic.” But forcing companies to build an alternate means to access the plaintext that the user can’t control is an enormous vulnerability.

Susan Landau’s New Book: Listening In

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/susan_landaus_n.html

Susan Landau has written a terrific book on cybersecurity threats and why we need strong crypto. Listening In: Cybersecurity in an Insecure Age. It’s based in part on her 2016 Congressional testimony in the Apple/FBI case; it examines how the Digital Revolution has transformed society, and how law enforcement needs to — and can — adjust to the new realities. The book is accessible to techies and non-techies alike, and is strongly recommended.

And if you’ve already read it, give it a review on Amazon. Reviews sell books, and this one needs more of them.

Judge Issues Devastating Order Against BitTorrent Copyright Troll

Post Syndicated from Ernesto original https://torrentfreak.com/judge-issues-devastating-order-bittorrent-copyright-troll-180110/

In recent years, file-sharers around the world have been pressured to pay significant settlement fees, or face legal repercussions.

These so-called “copyright trolling” efforts have been a common occurrence in the United States since the turn of the last decade.

Increasingly, however, courts are growing weary of these cases. Many districts have turned into no-go zones for copyright trolls, and the people behind Prenda Law were arrested and are being prosecuted in a criminal case.

In the Western District of Washington, the tide also appears to have turned. After Venice PI, a copyright holder of the film “Once Upon a Time in Venice”, sued a man who later passed away, concerns were raised over the validity of the evidence.

Venice PI responded to the concerns with a declaration explaining its data gathering technique and assuring the Court that false positives are out of the question.

That testimony didn’t help much though, as a minute order filed this week shows. The order applies to a dozen cases and prohibits the company from reaching out to any defendants until further notice, as there are several alarming issues that have to be resolved first.

One of the problems is that Venice PI declared that it’s owned by a company named Lost Dog Productions, which in turn is owned by Voltage Productions. Interestingly, these companies don’t appear in the usual records.

“A search of the California Secretary of State’s online database, however, reveals no registered entity with the name ‘Lost Dog’ or ‘Lost Dog Productions’,” the Court notes.

“Moreover, although ‘Voltage Pictures, LLC’ is registered with the California Secretary of State, and has the same address as Venice PI, LLC, the parent company named in plaintiff’s corporate disclosure form, ‘Voltage Productions, LLC,’ cannot be found in the California Secretary of State’s online database and does not appear to exist.”

In other words, the company that filed the lawsuit, as well as its parent company, are extremely questionable.

While the above is a reason for concern, it’s just the tip of the iceberg. The Court not only points out administrative errors, but it also has serious doubts about the evidence collection process. This was carried out by the German company MaverickEye, which used the tracking technology of another German company, GuardaLey.

GuardaLey CEO Benjamin Perino, who claims that he coded the tracking software, wrote a declaration explaining that the infringement detection system at issue “cannot yield a false positive.” However, the Court doubts this statement and Perino’s qualifications in general.

“Perino has been proffered as an expert, but his qualifications consist of a technical high school education and work experience unrelated to the peer-to-peer file-sharing technology known as BitTorrent,” the Court writes.

“Perino does not have the qualifications necessary to be considered an expert in the field in question, and his opinion that the surveillance program is incapable of error is both contrary to common sense and inconsistent with plaintiff’s counsel’s conduct in other matters in this district. Plaintiff has not submitted an adequate offer of proof”

It seems like the Court would prefer to see an assessment from a qualified independent expert instead of the person who wrote the software. For now, this means that the IP-address evidence, in these cases, is not good enough. That’s quite a blow for the copyright holder.

If that wasn’t enough the Court also highlights another issue that’s possibly even more problematic. When Venice PI requested the subpoenas to identify alleged pirates, they relied on declarations from Daniel Arheidt, a consultant for MaverickEye.

These declarations fail to mention, however, whether MaverickEye has the proper paperwork to collect IP addresses.

“Nowhere in Arheidt’s declarations does he indicate that either he or MaverickEye is licensed in Washington to conduct private investigation work,” the order reads.

This is important, as doing private investigator work without a license is a gross misdemeanor in Washington. The copyright holder was aware of this requirement because it was brought up in related cases in the past.

“Plaintiff’s counsel has apparently been aware since October 2016, when he received a letter concerning LHF Productions, Inc. v. Collins, C16-1017 RSM, that Arheidt might be committing a crime by engaging in unlicensed surveillance of Washington citizens, but he did not disclose this fact to the Court.”

The order is very bad news for Venice PI. The company had hoped to score a few dozen easy settlements but the tables have now been turned. The Court instead asks the company to explain the deficiencies and provide additional details. In the meantime, the copyright holder is urged not to spend or transfer any of the settlement money that has been collected thus far.

The latter indicates that Venice PI might have to hand defendants their money back, which would be pretty unique.

The order suggests that the Judge is very suspicious of these trolling activities. In a footnote there’s a link to a Fight Copyright Trolls article which revealed that the same counsel dismissed several cases, allegedly to avoid having IP-address evidence scrutinized.

Even more bizarrely, in another footnote the Court also doubts if MaverickEye’s aforementioned consultant, Daniel Arheidt, actually exists.

“The Court has recently become aware that Arheidt is the latest in a series of German declarants (Darren M. Griffin, Daniel Macek, Daniel Susac, Tobias Fieser, Michael Patzer) who might be aliases or even fictitious.

“Plaintiff will not be permitted to rely on Arheidt’s declarations or underlying data without explaining to the Court’s satisfaction Arheidt’s relationship to the above-listed declarants and producing proof beyond a reasonable doubt of Arheidt’s existence,” the court adds.

These are serious allegations, to say the least.

If a copyright holder uses non-existent companies and questionable testimony from unqualified experts after obtaining evidence illegally to get a subpoena backed by a fictitious person… something’s not quite right.

A copy of the minute order, which affects a series of cases, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Top 10 Most Popular Torrent Sites of 2018

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-most-popular-torrent-sites-of-2018-180107/

Torrent sites have come and gone over the past year. Now, at the start of 2018, we take a look to see what the most-used sites are in the current landscape.

The Pirate Bay remains the undisputed number one. The site has weathered a few storms over the years, but it looks like it will be able to celebrate its 15th anniversary, which is coming up in a few months.

The list also includes various newcomers including Idope and Zooqle. While many people are happy to see new torrent sites emerge, this often means that others have called it quits.

Last year’s runner-up Extratorrent, for example, has shut down and left a gaping hole behind. And it wasn’t the only site that went away. TorrentProject also disappeared without a trace and the same was true for isohunt.to.

The unofficial Torrentz reincarnation Torrentz2.eu, the highest newcomer last year, is somewhat of an unusual entry. A few weeks ago all links to externally hosted torrents were removed, as was the list of indexed pages.

We decided to include the site nonetheless, given its history and because it’s still possible to find hashes through the site. As Torrentz2’s future is uncertain, we added an extra site (10.1) as compensation.

Finally, RuTracker also deserves a mention. The torrent site generates enough traffic to warrant a listing, but we traditionally limit the list to sites that are targeted primarily at an English or international audience.

Below is the full list of the ten most-visited torrent sites at the start of the new year. The list is based on various traffic reports and we display the Alexa rank for each. In addition, we include last year’s ranking.

Most Popular Torrent Sites

1. The Pirate Bay

The Pirate Bay is the “king of torrents” once again and also the oldest site in this list. The past year has been relatively quiet for the notorious torrent site, which is currently operating from its original .org domain name.

Alexa Rank: 104 / Last year #1

2. RARBG

RARBG, which started out as a Bulgarian tracker, has captured the hearts and minds of many video pirates. The site was founded in 2008 and specializes in high quality video releases.

Alexa Rank: 298 / Last year #3

3. 1337x

1337x continues where it left off last year. The site gained a lot of traffic and, unlike some other sites in the list, has a dedicated group of uploaders that provide fresh content.

Alexa Rank: 321 / Last year #6

4. Torrentz2

Torrentz2 launched as a stand-in for the original Torrentz.eu site, which voluntarily closed its doors in 2016. At the time of writing, the site only lists torrent hashes and no longer any links to external torrent sites. While browser add-ons and plugins still make the site functional, its future is uncertain.

Alexa Rank: 349 / Last year #5

5. YTS.ag

YTS.ag is the unofficial successor to the defunct YTS/YIFY group. Not all torrent sites were happy that the site hijacked the popular brand, and several are actively banning its releases.

Alexa Rank: 563 / Last year #4

6. EZTV.ag

The original TV-torrent distribution group EZTV shut down after a hostile takeover in 2015, with new owners claiming ownership of the brand. The new group currently operates from EZTV.ag and releases its own torrents. These releases are banned on some other torrent sites due to this controversial history.

Alexa Rank: 981 / Last year #7

7. LimeTorrents

Limetorrents has been an established torrent site for more than half a decade. The site’s operator also runs the torrent cache iTorrents, which is used by several other torrent search engines.

Alexa Rank: 2,433 / Last year #10

8. NYAA.si

NYAA.si is a popular resurrection of the anime torrent site NYAA, which shut down last year. Previously we left anime-oriented sites out of the list, but since we also include dedicated TV and movie sites, we decided that a mention is more than warranted.

Alexa Rank: 1,575 / Last year #NA

9. Torrents.me

Torrents.me is one of the torrent sites that enjoyed a meteoric rise in traffic this year. It’s a meta-search engine that links to torrent files and magnet links from other torrent sites.

Alexa Rank: 2,045 / Last year #NA

10. Zooqle

Zooqle, which boasts nearly three million verified torrents, has stayed under the radar for years but has still kept growing. The site made it into the top 10 for the first time this year.

Alexa Rank: 2,347 / Last year #NA

10.1 iDope

The special 10.1 mention goes to iDope. Launched in 2016, the site is a relative newcomer to the torrent scene. The torrent indexer has steadily increased its audience over the past year. With similar traffic numbers to Zooqle, a listing is therefore warranted.

Alexa Rank: 2,358 / Last year #NA

Disclaimer: Yes, we know that Alexa isn’t perfect, but it helps to compare sites that operate in a similar niche. We also used other traffic metrics to compile the top ten. Please keep in mind that many sites have mirrors or alternative domains, which are not taken into account here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the data in an Amazon S3 bucket.
  • Query data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, and Amazon Redshift with Redshift Spectrum.

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes in data in an Aurora database table and save it in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

Create Table
CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the connection variables (AURORA_CNAME, DBUSER, DBPASSWORD, DB) are replaced with appropriate values.

#!/usr/bin/python
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

for i in range(0,10):
    item_id = random.randint(1,100)
    state = states[random.randint(0,len(states)-1)]
    shipping_type = shipping_types[random.randint(0,len(shipping_types)-1)]
    product_category = product_categories[random.randint(0,len(product_categories)-1)]
    quantity = random.randint(1,4)
    referral = referrals[random.randint(0,len(referrals)-1)]
    price = random.randint(1,100)
    order_date = datetime.date(2016,random.randint(1,12),random.randint(1,28)).isoformat()  # day capped at 28 so every month is a valid date

    data_order = (item_id, product_category, price, quantity, order_date, state,
    shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                   "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, \
                   ShippingType, Referral) "
                   "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor = db.cursor()
    cursor.execute(add_order, data_order)

    db.commit()

cursor.close()
db.close() 

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
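
For reference, a rough sketch of this second method is shown below, reusing the MySQLdb connection from the earlier data-generation script. The S3 URI is a hypothetical placeholder, and the Aurora cluster must already have an IAM role associated with it that allows writing to that bucket.

# Rough sketch of the SELECT INTO OUTFILE S3 alternative (not used in this walkthrough).
# 'db' is the MySQLdb connection from the data-generation script; the S3 URI is hypothetical.
cursor = db.cursor()
cursor.execute(r"""
    SELECT * FROM Sales
    INTO OUTFILE S3 's3-us-east-1://my-example-bucket/sales-export'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
""")
cursor.close()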

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream (a scripted boto3 equivalent is sketched after these steps):

  1. Open the Kinesis Data Firehose console.
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
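
As mentioned above, a roughly equivalent way to create the delivery stream is with boto3; the role and bucket ARNs below are hypothetical placeholders.

import boto3

firehose = boto3.client('firehose')

# Roughly equivalent to the console steps above; ARNs and names are hypothetical.
firehose.create_delivery_stream(
    DeliveryStreamName='AuroraChangesToS3',
    DeliveryStreamType='DirectPut',
    S3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::123456789012:role/firehose-delivery-role',
        'BucketARN': 'arn:aws:s3:::my-example-bucket',
        'Prefix': 'aurora-changes/',
        'CompressionFormat': 'GZIP'})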

 

Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name, and for Runtime, select Python 3.6.
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3
import json

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):
    # Build one CSV line from the fields the stored procedure passes in
    firehose_data = ("%s,%s,%s,%s,%s,%s,%s,%s\n" % (
        event['ItemID'], event['Category'], event['Price'],
        event['Quantity'], event['OrderDate'], event['DestinationState'],
        event['ShippingType'], event['Referral']))

    # Wrap the line in the record structure expected by Kinesis Data Firehose
    firehose_data = {'Data': str(firehose_data)}
    print(firehose_data)

    # Put the record on the delivery stream created earlier
    firehose.put_record(DeliveryStreamName=stream_name,
                        Record=firehose_data)

Note the Amazon Resource Name (ARN) of this Lambda function.
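Before wiring up the database trigger, you can sanity-check the function with a hand-built event. Here is a minimal sketch; the function name matches the placeholder used in the stored procedure later in this post, and the field values are arbitrary sample data.

# Minimal sketch: invoke the Lambda function with a sample event that
# mirrors the columns of the Sales table.
import boto3
import json

lambda_client = boto3.client('lambda')

sample_event = {
    'ItemID': 42,
    'Category': 'Garden',
    'Price': 19.99,
    'Quantity': 2,
    'OrderDate': '2016-07-15 10:30:00',
    'DestinationState': 'WA',
    'ShippingType': '2-Day',
    'Referral': 'Online Ad'
}

response = lambda_client.invoke(
    FunctionName='CDCFromAuroraToKinesis',   # placeholder name, see the stored procedure below
    InvocationType='RequestResponse',
    Payload=json.dumps(sample_event))

print(response['StatusCode'])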

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
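The linked documentation walks through the console steps; if you manage your infrastructure with code, the same association can be made with boto3. This is a minimal sketch under assumptions: the cluster identifier, custom cluster parameter group name, and role ARN are placeholders, and the role is assumed to allow lambda:InvokeFunction.

# Minimal sketch: associate an IAM role with the Aurora cluster and
# point the aws_default_lambda_role cluster parameter at it.
import boto3

rds = boto3.client('rds')

role_arn = 'arn:aws:iam::123456789012:role/aurora-invoke-lambda-role'  # placeholder

rds.add_role_to_db_cluster(
    DBClusterIdentifier='aurora-sales-cluster',   # placeholder
    RoleArn=role_arn)

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName='aurora-sales-cluster-params',  # placeholder custom group
    Parameters=[{
        'ParameterName': 'aws_default_lambda_role',
        'ParameterValue': role_arn,
        'ApplyMethod': 'pending-reboot'   # safe for both static and dynamic parameters
    }])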

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.
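A quick way to confirm that records are reaching the bucket is to list the delivery prefix. A minimal sketch follows; the bucket name is a placeholder and the prefix matches the Firehose configuration used earlier.

# Minimal sketch: list the objects Kinesis Data Firehose has written
# under the delivery prefix to confirm records are arriving.
import boto3

s3 = boto3.client('s3')

response = s3.list_objects_v2(
    Bucket='your-bucket-name',   # placeholder
    Prefix='CDC/')

for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])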

Every time a new record is inserted in the Sales table, the stored procedure is called, and the new record flows through the Lambda function and Kinesis Data Firehose into Amazon S3.

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other data formats and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.
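If you manage the cluster programmatically, the role association can also be made with boto3. A minimal sketch follows; the cluster identifier and role ARN are placeholders.

# Minimal sketch: attach the Spectrum IAM role to an existing Amazon Redshift cluster.
import boto3

redshift = boto3.client('redshift')

redshift.modify_cluster_iam_roles(
    ClusterIdentifier='redshift-spectrum-demo',   # placeholder
    AddIamRoles=['arn:aws:iam::123456789012:role/RedshiftSpectrumRole'])  # placeholder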

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales

 
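You can run the same check from Python. A minimal sketch with psycopg2 is shown below; the connection details are placeholders, and any PostgreSQL-compatible driver should work since Amazon Redshift speaks the PostgreSQL wire protocol.

# Minimal sketch: run the same sanity query against the Redshift cluster from Python.
import psycopg2

conn = psycopg2.connect(
    host='your-cluster.abc123xyz.us-east-1.redshift.amazonaws.com',  # placeholder
    port=5439,
    dbname='dev',         # placeholder
    user='awsuser',       # placeholder
    password='password')  # placeholder

cur = conn.cursor()
cur.execute('select top 10 * from spectrum_schema.ecommerce_sales')
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()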

Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';
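The COPY above points at the author's bucket; if you are following along in your own account, you first need a file of 2016 dates to load. Here is a minimal sketch that generates one and uploads it. The bucket, key, season boundaries, and fiscal-year rule are assumptions for illustration only.

# Minimal sketch: build a CSV of 2016 dates matching the date_dimension
# columns and upload it to your own bucket for the COPY command above.
import csv
import datetime
import io

import boto3

def season(month):
    # Simple meteorological seasons; adjust to your own definition.
    return {12: 'Winter', 1: 'Winter', 2: 'Winter',
            3: 'Spring', 4: 'Spring', 5: 'Spring',
            6: 'Summer', 7: 'Summer', 8: 'Summer'}.get(month, 'Fall')

buffer = io.StringIO()
writer = csv.writer(buffer, lineterminator='\n')

day = datetime.date(2016, 1, 1)
while day.year == 2016:
    writer.writerow([
        int(day.strftime('%Y%m%d')),        # d_datekey
        day.day,                            # d_dayofmonth
        day.month,                          # d_monthnum
        day.strftime('%A'),                 # d_dayofweek
        day.isoformat(),                    # d_prettydate
        (day.month - 1) // 3 + 1,           # d_quarter
        1 if day.month <= 6 else 2,         # d_half
        day.year,                           # d_year
        season(day.month),                  # d_season
        day.year                            # d_fiscalyear (assumed equal to calendar year)
    ])
    day += datetime.timedelta(days=1)

boto3.client('s3').put_object(
    Bucket='your-bucket-name',              # placeholder
    Key='2016dates',                        # placeholder key
    Body=buffer.getvalue().encode('utf-8'))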

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension table populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements.

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic. This would result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.

timeShift(GrafanaBuzz, 1w) Issue 28

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/05/timeshiftgrafanabuzz-1w-issue-28/

Happy new year! Grafana Labs is getting back in the swing of things after taking some time off to celebrate 2017, and spending time with family and friends. We’re diligently working on the new Grafana v5.0 release (planning v5.0 beta release by end of January), which includes a ton of new features, a new layout engine, and a polished UI. We’d love to hear your feedback!


Latest Stable Release

Grafana 4.6.3 is now available. Latest bugfixes include:

  • Gzip: Fixes bug with Gravatar images when gzip was enabled #5952
  • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #99513
  • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and the OR operator was used. #93183
  • Cloudwatch: CloudWatch no longer displays metrics’ default alias #101514, thx @mtanda

Download Grafana 4.6.3 Now


From the Blogosphere

Why Observability Matters – Now and in the Future: Our own Carl Bergquist teamed up with Neil Gehani, Director of Product at Weaveworks to discuss best practices on how to get started with monitoring your application and infrastructure. This video focuses on modern containerized applications instrumented to use Prometheus to generate metrics and Grafana to visualize them.

How to Install and Secure Grafana on Ubuntu 16.04: In this tutorial, you’ll learn how to install and secure Grafana with an SSL certificate and an Nginx reverse proxy, then you’ll modify Grafana’s default settings for even tighter security.

Monitoring Informix with Grafana: Ben walks us through how to use Grafana to visualize data from IBM Informix and offers a practical demonstration using Docker containers. He also talks about his philosophy of sharing dashboards across teams, important metrics to collect, and how he would like to improve his monitoring stack.

Monitor your hosts with Glances + InfluxDB + Grafana: Glances is a cross-platform system monitoring tool written in Python. This article takes you step by step through the pieces of the stack, covering installation and configuration, and provides a sample dashboard to get you up and running.


GrafanaCon Tickets are Going Fast!

Lock in your seat for GrafanaCon EU while there are still tickets available! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

We have some exciting talks lined up from Google, CERN, Bloomberg, eBay, Red Hat, Tinder, Fastly, Automattic, Prometheus, InfluxData, Percona and more! You can see the full list of speakers below, but be sure to get your ticket now.

Get Your Ticket Now

GrafanaCon EU will feature talks from:

“Google Bigtable”
Misha Brukman
PROJECT MANAGER,
GOOGLE CLOUD
GOOGLE

“Monitoring at Bloomberg”
Stig Sorensen
HEAD OF TELEMETRY
BLOOMBERG

“Monitoring at Bloomberg”
Sean Hanson
SOFTWARE DEVELOPER
BLOOMBERG

“Monitoring Tinder’s Billions of Swipes with Grafana”
Utkarsh Bhatnagar
SR. SOFTWARE ENGINEER
TINDER

“Grafana at CERN”
Borja Garrido
PROJECT ASSOCIATE
CERN

“Monitoring the Huge Scale at Automattic”
Abhishek Gahlot
SOFTWARE ENGINEER
Automattic

“Real-time Engagement During the 2016 US Presidential Election”
Anna MacLachlan
CONTENT MARKETING MANAGER
Fastly

“Real-time Engagement During the 2016 US Presidential Election”
Gerlando Piro
FRONT END DEVELOPER
Fastly

“Grafana v5 and the Future”
Torkel Odegaard
CREATOR | PROJECT LEAD
GRAFANA

“Prometheus for Monitoring Metrics”
Brian Brazil
FOUNDER
ROBUST PERCEPTION

“What We Learned Integrating Grafana with Prometheus”
Peter Zaitsev
CO-FOUNDER | CEO
PERCONA

“The Biz of Grafana”
Raj Dutt
CO-FOUNDER | CEO
GRAFANA LABS

“What’s New In Graphite”
Dan Cech
DIR, PLATFORM SERVICES
GRAFANA LABS

“The Design of IFQL, the New Influx Functional Query Language”
Paul Dix
CO-FOUNDER | CTO
INFLUXDATA

“Writing Grafana Dashboards with Jsonnet”
Julien Pivotto
OPEN SOURCE CONSULTANT
INUITS

“Monitoring AI Platform at eBay”
Deepak Vasthimal
MTS-2 SOFTWARE ENGINEER
EBAY

“Running a Power Plant with Grafana”
Ryan McKinley
DEVELOPER
NATEL ENERGY

“Performance Metrics and User Experience: A “Tinder” Experience”
Susanne Greiner
DATA SCIENTIST
WÜRTH PHOENIX S.R.L.

“Analyzing Performance of OpenStack with Grafana Dashboards”
Alex Krzos
SENIOR SOFTWARE ENGINEER
RED HAT INC.

“Storage Monitoring at Shell Upstream”
Arie Jan Kraai
STORAGE ENGINEER
SHELL TECHNICAL LANDSCAPE SERVICE

“The RED Method: How To Instrument Your Services”
Tom Wilkie
FOUNDER
KAUSAL

“Grafana Usage in the Quality Assurance Process”
Andrejs Kalnacs
LEAD SOFTWARE DEVELOPER IN TEST
EVOLUTION GAMING

“Using Prometheus and Grafana for Monitoring my Power Usage”
Erwin de Keijzer
LINUX ENGINEER
SNOW BV

“Weather, Power & Market Forecasts with Grafana”
Max von Roden
DATA SCIENTIST
ENERGY WEATHER

“Weather, Power & Market Forecasts with Grafana”
Steffen Knott
HEAD OF IT
ENERGY WEATHER

“Inherited Technical Debt – A Tale of Overcoming Enterprise Inertia”
Jordan J. Hamel
HEAD OF MONITORING PLATFORMS
AMGEN

“Grafanalib: Dashboards as Code”
Jonathan Lange
VP OF ENGINEERING
WEAVEWORKS

“The Journey of Shifting the MQTT Broker HiveMQ to Kubernetes”
Arnold Bechtoldt
SENIOR SYSTEMS ENGINEER
INOVEX

“Graphs Tell Stories”
Blerim Sheqa
SENIOR DEVELOPER
NETWAYS

“Graphite@Scale or How to Store Millions of Metrics per Second”
Vladimir Smirnov
SYSTEM ADMINISTRATOR
Booking.com


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. There is no need to register; all are welcome.

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Carl Bergquist – Quickie: Monitoring? Not OPS Problem

Why should we monitor our system? Why can’t we just rely on the operations team anymore? They used to be able to do that. What’s currently changing? Presentation content: why we monitor our systems, how it used to work, what’s changing, why we need to shift focus, why everyone should be on call, and why resilience is the goal (the best way to have someone care about quality is to make them responsible).

Register Now

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Leonard Gram – Presentation: DevOps Deconstructed

What’s a Site Reliability Engineer and how’s that role different from the DevOps engineer my boss wants to hire? I really don’t want to be on call, should I? Is Docker the right place for my code or am I better off just going straight to Serverless? And why should I care about any of it? I’ll try to answer some of these questions while looking at what DevOps really is about and how the commoditisation of servers through “the cloud” ties into it all. This session will be an opinionated piece from a developer who’s been on-call for the past 6 years and would like to convince you to do the same, at least once.

Register Now

Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Awesome! Let us know if you have any questions – we’re happy to help out. We also have a bunch of screencasts to help you get going.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

That’s a wrap! Let us know what you think about timeShift. Submit a comment on this article below, or post something at our community forum. See you next year!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Is Your Kodi Setup Being Spied On?

Post Syndicated from Andy original https://torrentfreak.com/is-your-kodi-setup-being-spied-on-180101/

As quite possibly the most popular media player on earth, Kodi is installed on millions of machines – around 38 million according to the MPAA. The software has a seriously impressive range of features, but one of them, if not configured properly, raises security issues for Kodi users.

For many years, Kodi has had a remote control feature, whereby the software can be remotely managed via a web interface.

This means that you’re able to control your Kodi setup installed on a computer or set-top box using a convenient browser-based interface on another device, from the same room or indeed anywhere in the world. Earlier versions of the web interface look like the one in the image below.

The old Kodi web-interface – functional but basic

But while this is a great feature, people don’t always password-protect the web-interface, meaning that outsiders can access their Kodi setups, if they have that person’s IP address and a web-browser. In fact, the image shown above is from a UK Kodi user’s setup that was found in seconds using a specialist search engine.

While the old web-interface for Kodi was basically a remote control, things got more interesting in late 2016 when the much more functional Chorus2 interface was included in Kodi by default. It’s shown in the image below.

Chorus 2 Kodi Web-Interface

Again, the screenshot above was taken from the setup of a Kodi user whose setup was directly open to the Internet. In every way the web-interface of Kodi acts as a web page, allowing anyone with the user’s IP address (with :8080 appended to the end) to access the user’s setup. It’s no different than accessing Google with an IP address (216.58.216.142), instead of Google.com.

However, Chorus 2 is much more comprehensive than its predecessors, which means that it’s possible for outsiders to browse potentially sensitive items, including their addons, if a password hasn’t been enabled in the appropriate section in Kodi.

Kodi users probably don’t want this seen in public

While browsing someone’s addons isn’t the most engaging thing in the world, things get decidedly spicier when one learns that the Chorus 2 interface allows both authorized and unauthorized users to go much further.

For example, it’s possible to change Kodi’s system settings from the interface, including mischievous things such as disabling keyboards and mice. As seen (or not seen) in the redacted section in the image below, it can also give away system usernames, for example.

Access to Kodi settings – and more

But aside from screwing with people’s settings (which is both pointless and malicious), the Chorus 2 interface has a trick up its sleeve. If people’s Kodi setups contain video or music files (which is what Kodi was originally designed for), in many cases it’s possible to play these over the web interface.

In basic terms, someone with your IP address can view the contents of your video library on the other side of the world, with just a couple of clicks.

The image below shows that a Kodi setup has been granted access to some kind of storage (network or local disk, for example) and it can be browsed, revealing movies. (To protect the user, redactions have been made to remove home video titles, network, and drive names)

Network storage accessed via Chorus 2

The big question is, however, whether someone accessing a Kodi setup remotely can view these videos via a web browser. Answer: Absolutely.

Clicking through on each piece of media reveals a button to the right of its title. Clicking that reveals two options – ‘Queue in Kodi’ (to play on the installation itself) or ‘Download’, which plays/stores the content via a remote browser located anywhere in the world. Chrome works like a charm.

Queue to Kodi or watch remotely in a browser

While this is ‘fun’ and potentially useful for outsiders looking for content, it’s not great if it’s your system that’s open to the world. The good news is that something can be done about it.

In their description for Chorus 2, the Kodi team explain all of the benefits of the interface, but it appears many people don’t take their advice to set a new password. The default password and username are both ‘kodi’, which is terrible for security if people leave things the way they are.

If you run Kodi, now is probably the time to fix the settings, disable the web interface if you don’t use it, or enable stronger password protection if you do.

Change that password – now

Just recently, Kodi addon repository TVAddons issued a warning to people using jailbroken Apple TV 2 devices. That too was a default password issue and one that can be solved relatively easily.

“People need to realize that their Kodi boxes are actually mini computers and need to be treated as such,” a TVAddons spokesperson told TF.

“When you install a build, or follow a guide from an unreputable source, you’re opening yourself up to potential risk. Since Kodi boxes aren’t normally used to handle sensitive data, people seem to disregard the potential risks that are posed to their network.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

On Spanish Judicial Independence, from the Inside

Post Syndicated from Йовко Ламбрев original https://yovko.net/spain-judiciary-independence/

This text is written specifically for those acquaintances and friends of mine who still fail to understand the irony with which I refer to the “independent judiciary” in Spain and who find the parallels I draw with Bulgaria excessive and inappropriate. Many of them accuse me of bias, an accusation I treat with amusement for one reason only: they themselves refuse to lift their heads above their own romantic notions and the clichés in their minds and to look at reality with an attitude different from, say, fandom for Real Madrid. Unlike most of my critics, I try to back up my “bias” with facts that I verify through two, and sometimes more, channels, and when the source is a media outlet, I even check its ownership and probe its reputation before relying on it.

My firm conviction is that the Spanish judicial system is frequently used for repression when the political authorities do not want to get their feet wet or cannot find a political solution. I am personally appalled at how many investigations, arrests, indictments, and people thrown into prison (and not for short stretches) there are in Spain, literally every month, in which the targets are political opponents or ordinary people critical of those in power. That includes “offences” such as jokes in satirical publications or on social networks. Show trials are staged over provocative artistic installations; one such case recently concluded, and the accused ultimately went to prison (for terrorism!).

But let us set my impressions and personal opinion aside. Surely the aggregated views of an 11% sample of Spanish judges about their own day-to-day work should carry more weight. So I suggest we look at a survey that contains quite a lot of data. I suppose some people have already read it, but from a Bulgarian point of view. Let us also look at it from a Spanish one, and since it is rather long, I will summarize only what alarms me personally the most.

The survey was conducted by the European Network of Councils for the Judiciary for the period 2016-2017 and is based on responses to a questionnaire completed by 11,712 judges from 26 countries (22 EU member states plus Albania, Serbia, Montenegro, and Norway). The Spanish judges who took part number 718, or 11% of all judges in the country. The figures below are based on their answers and refer to the two-year period under review.

  • 18.1% believe that over the past two years there have been cases assigned to judges in a manner deviating from the established procedure, with the aim of influencing the outcome of the case. Spain leads this ranking.
  • 64.3% believe that judges have been appointed on criteria other than their ability and experience. Spain also tops this ranking, a full 40% above the average for all countries in the survey.
  • 77.7% find that judges have been promoted on criteria other than their ability and experience. Spain leads this grim ranking too, more than 40% above the average.
  • 45% believe that judges’ decisions have been influenced by the media (press, television, and radio). Here Spain trails only Italy, Croatia, Bulgaria, and Slovakia. And 17% believe that judges’ decisions have been influenced by social networks, with Spain behind Italy, Croatia, Albania, and Bulgaria.
  • 27.7% believe that the government does not respect their independence as judges. This is the tenth-worst result; Poland leads here (with over 70%), followed by Bulgaria (with over 50%). 15.5% note that their own superiors in the courts do not respect their independence, the second-worst result after Portugal (21.7%). And 25.6% believe that their independence is not respected by the Judicial Council. Spain is again the leader, more than 20% above the average (noting that not all countries have such a body). Bulgaria and Portugal follow the Spaniards closely in this disgrace.
  • 62.2% believe that the Judicial Council lacks the mechanisms and procedures needed to effectively defend the independence of the judiciary. This is the second-worst result in Europe (after Poland).
  • 10% admit that over the past two years inappropriate pressure has been put on them to decide a case in a particular way. Spain is the fifth country by “pressure”, after Albania (24%), Croatia (12%), Lithuania (12%), and Latvia (11%).
  • 10% admit that they have been sanctioned, or threatened with sanctions, over how they decided a given case. Spain is sixth on this criterion, after Lithuania (19%), Latvia (18%), Poland (14%), Romania (14%), and Italy (11%).
  • 25.6% have been pressured by their court’s administration to decide a given case within a certain time frame. Spain is fourth, after Croatia, Poland, and Slovenia.

I have no comment. The numbers speak for themselves.

FilePursuit Finds Amazing Files All Year Round, Not Just at Christmas

Post Syndicated from Andy original https://torrentfreak.com/filepursuit-finds-amazing-files-all-year-round-not-just-at-christmas-171225/

Ask someone to name a search engine and it’s likely that 95 out of 100 will say ‘Google’. There are plenty of others, of course, but its sheer dominance means that even giants like Bing have to wait around for a mention.

However, if people are looking for something special, such as video and music files, for example, there’s an interesting search engine that’s largely flying under the radar. FilePursuit, accessible via the web or directly from its dedicated Android apps, is somewhat of a revelation.

What FilePursuit does is trawl the Internet looking for web servers that are not only packed with content but are readily accessible to the outside world. This means that a search on the site invariably turns up treasure troves of material, all of it for immediate and direct HTTP download.

TorrentFreak caught up with the operator of the site who himself is a very interesting character.

“I’m a 21-year-old undergrad student from New Delhi, India, currently studying engineering. I started this file search engine project all by myself to learn web development and this is my first project,” he informs TF.

“I picked this project because I was surprised to find that there are lots of ‘open directory’ websites and no one is maintaining any type of record or database on them. There are thousands of ‘open directory’ websites containing a lot of amazing stuff not discovered yet, so I made them discoverable.”

Plenty of files from almost any search

FilePursuit began its life around September 2016 and since then has been receiving website submission requests (sites to be indexed by FilePursuit) from people all over the world. As such the platform is somewhat of a community effort but in respect of running the operation, it’s all done by one man.

“FilePursuit saves time in two ways: by eliminating the need to find files manually, and by performing searches at high speeds efficiently. Without this, you would have to look at sites one by one and pore over the contents of each carefully – a tedious prospect,” he explains.

“FilePursuit automatically compares your criteria to billions of webpages and gives you results in a fraction of a second. You can perform hundreds of searches in the course of a few minutes, altering the criteria as you narrow down results.”

So if Google dominates the search space, why doesn’t it do a better job of finding files than the relatively low-key FilePursuit? Its operator says it’s all about functionality.

“FilePursuit is a file search engine, it generates file links as results while other search engines give out webpages as results. However, it’s possible to search for file links directly from Google too but it’s limited to documents only. On FilePursuit you can search for almost any filetype just by selecting ‘custom’ and typing filetype in search results.”

Of course, it would be impossible for FilePursuit to find any files if webmasters and server operators didn’t leave them open to the public. Considering it’s simplicity itself to find all the latest movies and TV shows widely accessible, is this a question of stupidity, kindness, carelessness, or something else?

“In my opinion, most people are unaware that they have created an open directory and on the other hand some people want to share interesting files from their servers, which is very generous of them,” FilePursuit’s creator says.

When carrying out searches it really is amazing what FilePursuit can turn up. Files lead to directory results and some can contain many thousands of files, from every music artist one can think of through to otherwise private text files that people really should take more care over. Other things are really quite odd.

“When I look for ‘open directory’ websites, sometimes I find really amazing stuff and sometimes even bizarre stuff too. This one time, I found a collection of funeral recordings,” FilePursuit’s owner says.

While even funeral recordings can have a copyright owner somewhere, it’s the more regular mainstream content that’s most easily found with the service. The site doesn’t carry any copyrighted content at all but that doesn’t mean it’s unresponsive to takedown demands.

“I have more than three million file links indexed in my database so it can be a bit hard for me to check for copyrighted content. Although whenever I receive a mail from copyright holders or someone representing copyright holders, I always uphold their request of deleting the file link from my database and also explain to them that the file link they requested me to delete, that particular file may still exist.”

In recent months, FilePursuit has enjoyed a significant upsurge in traffic but it’s still a relatively small player in the search engine space with around 7,000 to 10,000 hits per day. However, this clever site is able to deal with five times that traffic and upgrading servers to cope with surges can be carried out in two to three minutes, “at most.”

So the big question remains – What will you find under the tree today?

FilePursuit website here, Android apps (free, pro)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

2017 in Retrospect

Post Syndicated from Йовко Ламбрев original https://yovko.net/2017-flashback/

2017 was a peculiar year. It unfolded completely at odds with the plans I had made for it in the autumn of 2016; some of those plans did happen, but in an entirely different way. A few things also happened that I had not planned at all, yet they took center stage.

The most significant thing I failed to accomplish during the year was launching a new business of my own. I have the feeling that I spread my focus in too many directions and left that goal to find me on its own, instead of making it my top priority and pursuing it actively. But that was not necessarily bad, because it expanded, in unexpected directions, my network of contacts in the city (and beyond) from which I had grown quite detached during the 14 years spent in Sofia and, briefly, in Barcelona. In the last few months I have felt at home again, meeting interesting, like-minded people heading in similar directions, with whom interesting opportunities for joint projects are opening up.

The year, however, was full of action. It began with the founding of „Да, България!“ (Yes, Bulgaria!) as early as January 7 and continued with a lot of sleeplessness until the end of the campaign in March.

In the summer we registered Trakia Tech, which had been conceived as a fintech incubator back at the end of 2016, but since it is hard to dream modestly, we decided to aim at the even more ambitious idea of a family of vertical incubators: we will start with fintech and the digital transformation of industry, and if things take off, we may add other themes to the first two. In 2017 we managed to hold a few springboard events, since we still have no place to house the teams. We have a generous promise from one of our partners for a wonderful building on the city’s central pedestrian main street, but it needs renovation, and we have not yet formalized the relationship and the terms under which we could use it.

Meanwhile, a foreign institute is giving us a chance to start a serious project together (with more partners, of course) that has the potential to nudge Plovdiv, and the industry around the city, toward an interesting field related to new materials. In fact, if that alone happens at the beginning of 2018, it will give meaning to an awful lot of effort and plans going forward.

In the autumn yet another organization was registered, called Toest, about which I will write a separate, dedicated piece when the time comes. That time should already have come, because our team’s initial plan was to launch before the Christmas and New Year holidays, but we were not fully ready and decided to postpone to the beginning of 2018.

So I did not start a new business in 2017, but I did lend a hand to the launch of a new party and two more new and interesting organizations this year. I wish fair winds to everyone involved in all three endeavors!

There was hardly any time for hobbies in 2017, but following the minimalist mood that has taken hold of me, from 2018 I plan to shoot photos only with an iPhone, to work mostly from a tablet, and perhaps to show people that this is not as extravagant as it seems. I just hope there is time left for the showing. 🙂

Happy holidays, everyone!

Swedish Police Set to Take Over Pirate Bay Domains

Post Syndicated from Andy original https://torrentfreak.com/swedish-police-set-to-take-over-pirate-bay-domains-171222/

Way back in 2013, anti-piracy prosecutor Fredrik Ingblad filed a motion targeting two key Pirate Bay domain names – ThePirateBay.se and PirateBay.se.

Ingblad filed a complaint against Punkt SE (IIS), the organization responsible for Sweden’s top level .SE domain, arguing that the domains are tools that The Pirate Bay uses to infringe copyright.

In April 2015 the case was heard and a month later the Stockholm District Court ruled that The Pirate Bay should forfeit both ThePirateBay.se and PirateBay.se to the state. The case later went to appeal.

In May 2016, the Svea Court of Appeal handed down its decision which upheld the decision of the Stockholm District Court, finding that since they assisted with crimes, the domains could be seized.

With that established a question remained – should the domains be seized from Pirate Bay co-founder and domain owner Fredrik Neij or from IIS, the organization responsible for Sweden’s top-level .SE domain?

The Court subsequently found that domain names should be considered a type of intellectual property, property owned by the purchaser of the domain. In this case, therefore, IIS was not considered the owner of the Pirate Bay domains, Fredrik Neij was.

Neij subsequently appealed to the Supreme Court, arguing that the District Court and the Court of Appeal wrongly concluded that a domain name is a type of property that can be confiscated.

Today the Supreme Court handed down its decision, siding with the lower courts and determining that the domains – ThePirateBay.se and PirateBay.se – can indeed be seized by the state.

“The Supreme Court declares that the right to domain names constitutes property that may be forfeited as the Court of Appeal previously found,” its judgment reads.

Since the decision was handed down, things have been moving quickly. Kjetil Jensen of Online Group, the parent company of domain registry Binero, informs TorrentFreak that the police have already moved to take over the domains in question.

“Today Binero, Binero.se, (registrar for thepiratebay.se and piratebay.se) received an executive request from Swedish Police to take over ownership of the domain names thepiratebay.se and piratebay.se because the Swedish Supreme Court now allows the domain names to be seized,” Jensen says.

“The WHOIS of the domain names shows that the domain names no longer have any active name servers and the next step in this process is that the Police will take over the ownership of the domain names.”

WHOIS entry for ThePirateBay.se

While Binero will cooperate with the authorities, the company doesn’t believe that seizure will solve the online copyright infringement problem.

“Binero considers that the confiscation of a domain name is an ineffective approach to prevent criminal activity on the internet,” Jensen says.

“Moving a site to another top-level domain is very easy. And even if you want to close the domain, content is still available over the internet, using both the IP address and search engines etc.”

Indeed, The Pirate Bay saw this day coming a long way off and has already completely migrated to its original domain, ThePirateBay.org.

Despite the ruling, the site remains fully accessible, but it appears a line has been drawn in the sand in Sweden when it comes to domains that are used to break the law. They will be easier to seize in future, thanks to this lengthy legal process.

The judgment is available here (PDF, Swedish)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons