
Serverless Architectures with AWS Lambda: Overview and Best Practices

Post Syndicated from Andrew Baird original https://aws.amazon.com/blogs/architecture/serverless-architectures-with-aws-lambda-overview-and-best-practices/

For some organizations, the idea of “going serverless” can be daunting. But with an understanding of best practices, and the right tools, many serverless applications can be fully functional with only a few lines of code and little else.

Examples of fully-serverless-application use cases include:

  • Web or mobile backends – Create fully serverless mobile applications or websites by creating user-facing content in a native mobile application or static web content in an S3 bucket. Then have your front-end content integrate with Amazon API Gateway as a backend service API. Lambda functions will then execute the business logic you’ve written for each of the API Gateway methods in your backend API.
  • Chatbots and virtual assistants – Build new serverless ways to interact with your customers, like customer support assistants and bots ready to engage customers on your company-run social media pages. The Amazon Alexa Skills Kit (ASK) and Amazon Lex have the ability to apply natural-language understanding to user-voice and freeform-text input so that a Lambda function you write can intelligently respond and engage with them.
  • Internet of Things (IoT) backends – AWS IoT offers direct integration so that device messages can be routed to and processed by Lambda functions. That means you can implement serverless backends for highly secure, scalable IoT applications for uses like connected consumer appliances and intelligent manufacturing facilities.

Using AWS Lambda as the logic layer of a serverless application can enable faster development, greater experimentation, and more innovation than is possible in a traditional, server-based environment.
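
To make the web and mobile backend pattern concrete, here is a minimal sketch of a Lambda function sitting behind an API Gateway proxy integration. The event shape follows the standard proxy-integration format; the get_order helper and its data are hypothetical stand-ins for whatever business logic and data store you would actually use.

import json

def get_order(order_id):
    # Hypothetical lookup; a real backend would query DynamoDB or another data store here.
    return {"orderId": order_id, "status": "shipped"}

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes path parameters under "pathParameters".
    order_id = (event.get("pathParameters") or {}).get("orderId")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "orderId is required"})}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(get_order(order_id)),
    }

Each API Gateway method in your backend API would map to a handler like this one, with the function doing only the business logic while API Gateway handles routing, throttling, and authorization.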

We recently published the “Serverless Architectures with AWS Lambda: Overview and Best Practices” whitepaper to provide the guidance and best practices you need to write better Lambda functions and build better serverless architectures.

Once you’ve finished reading the whitepaper, here are a few additional resources I recommend as your next steps:

  1. If you would like to better understand some of the architecture pattern possibilities for serverless applications: Thirty Serverless Architectures in 30 Minutes (re:Invent 2017 video)
  2. If you’re ready to get hands-on and build a sample serverless application: AWS Serverless Workshops (GitHub Repository)
  3. If you’ve already built a serverless application and you’d like to ensure your application is Well-Architected: The Serverless Application Lens: AWS Well-Architected Framework (Whitepaper)

About the Author

 

Andrew Baird is a Sr. Solutions Architect for AWS. Prior to becoming a Solutions Architect, Andrew was a developer, including time as an SDE with Amazon.com. He has worked on large-scale distributed systems, public-facing APIs, and operations automation.

Change to the National Assembly's Rules of Procedure concerning the drafting of the programme on EU affairs

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/23/parl_eu/

The State Gazette of 20 April 2018 promulgates a decision amending the Rules on the Organisation and Activity of the National Assembly (State Gazette No. 35 of 2017):
The National Assembly, on the grounds of Art. 86, para. 1 of the Constitution of the Republic of Bulgaria and § 1, para. 1 of the supplementary provisions of the Rules on the Organisation and Activity of the National Assembly,
RESOLVED:
Sole paragraph. Paragraph 10 of the concluding provisions is amended as follows:
"§ 10. (1) Article 118 shall not apply from 1 July 2017 until 31 December 2018.*
(2) The standing committees, with the exception of the Committee on European Affairs and Oversight of European Funds, shall draw up, within their respective competences, their proposals for the National Assembly's Annual Work Programme on European Union Affairs for 2018, taking into account the published Work Programme of the European Commission for 2018.
The Committee on European Affairs and Oversight of European Funds, taking into account the proposals of the other standing committees, shall draw up a draft Annual Work Programme of the National Assembly on European Union Affairs for 2018.
The Annual Work Programme shall contain a list of the draft acts of the institutions of the European Union over which the National Assembly exercises monitoring and scrutiny.
The draft Annual Work Programme shall be discussed and adopted by the National Assembly. The Chair of the National Assembly shall send the adopted Annual Work Programme to the Council of Ministers.
In the event of newly arising circumstances, the Committee on European Affairs and Oversight of European Funds may, on its own initiative or at the proposal of other standing committees, propose additions to the National Assembly's Annual Work Programme on European Union Affairs for 2018, which shall be adopted under the procedure of the second, third and fourth sentences. Articles 119, 120 and 121 shall apply accordingly.
(3) The Council of Ministers shall also submit to the National Assembly, for information, other documents related to the conduct of the Bulgarian Presidency of the Council of the European Union in 2018."
*
Article 118 of the Rules (PODNS) reads:
Art. 118. (1) The Council of Ministers shall submit to the National Assembly the Annual Programme for the Participation of the Republic of Bulgaria in the Decision-Making Process of the European Union adopted by it, within 7 days of its adoption.
(2) The Chair of the National Assembly shall distribute the annual programme under para. 1 to the standing committees. Within three weeks of receiving it, the standing committees, with the exception of the Committee on European Affairs and Oversight of European Funds, shall draw up their proposals for an Annual Work Programme of the National Assembly on European Union Affairs, also taking into account the Work Programme of the European Commission for the respective year.
(3) Within 14 days of the expiry of the period under para. 2, the Committee on European Affairs and Oversight of European Funds, taking into account the proposals of the other standing committees, shall draw up a draft Annual Work Programme of the National Assembly on European Union Affairs. The Annual Work Programme shall contain a list of the draft acts of the institutions of the European Union over which the National Assembly exercises monitoring and scrutiny. The draft annual work programme shall be discussed and adopted by the National Assembly.
(4) The Chair of the National Assembly shall send the adopted annual work programme under para. 3 to the Council of Ministers.
(5) In the event of newly arising circumstances, the Committee on European Affairs and Oversight of European Funds may, on its own initiative or at the proposal of other standing committees, propose additions to the National Assembly's Annual Work Programme on European Union Affairs, which shall be adopted under the procedure of para. 3.

US State Department report on the state of human rights, 2017

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/22/hr2017/

The US State Department has published its latest report on the state of human rights in 2017.

The report includes a section on freedom of expression and the media.

The legislation provides for the relevant rights, the report finds.

However, corporate and political pressure, combined with the growing and opaque concentration of media ownership and distribution networks, as well as government regulation of resources for the media, seriously affect media freedom and media pluralism.

The 2017 Media Sustainability Index of the International Research and Exchanges Board (IREX) points to growing political pressure and the use of the media by oligarchs to "exert influence, destroy the reputation of political and business opponents, and manipulate public opinion" as the main threats to public trust in the media. IREX notes that the government actively obstructs free media development. Reports of intimidation and violence against journalists continue.

  • Freedom of expression:

Criticism of the government does not usually lead to reprisals, but several such cases have been reported (e.g. Vasil Kotsev, questioned in connection with a post about the prime minister).

The report recalls the fine imposed on Economedia by the Financial Supervision Commission under Stoyan Mavrodiev; let us recall his connection with a figure from organised crime.

With regard to hate speech, the report notes that the presence of nationalist parties in the government emboldens some to resort to hate speech as the norm rather than the exception.

  • Press and media freedom:

According to Reporters Without Borders in 2017, the press is "dominated by corruption and collusion between media, politicians and oligarchs". The report notes a lack of transparency in allocations made by the government, which produces a bribing effect: outlets are lenient in covering politicians or refrain from covering problematic cases.

Domestic and international organisations criticise both print and electronic media for a lack of transparency of ownership and finances, as well as for susceptibility to economic and political influence.

On 21 March the publishers of Pras Press filed a complaint with the Commission for the Protection of Competition, stating that the National Distribution Company had abused its dominant position on the press distribution market and had kept the first issue of Pras Press from sale in its outlets.

  • Violence and harassment:

The report notes the attack on Ivo Nikodimov of BNT.

Anton Todorov, an MP from the ruling majority, told journalist Viktor Nikolaev on air that he "would fire him" over a question. Deputy Prime Minister Valeri Simeonov threatened the journalist in a similar fashion. Simeonov later called (via the government's website!) for an apology from the media outlets that interpreted his words as a threat.

  • Censorship or content restrictions:

Journalists continue to report self-censorship, editorial bans on covering specific people and topics, and the imposition of political views by corporate leaders. In March, businessman and publisher Sasho Donchev said at a business forum that he had been invited to a private meeting with the prosecutor general, where the prosecutor general accused him of supporting a particular political party and warned him that his communications were being monitored. The prosecutor general gave a different version: that Donchev wanted influence over the prosecutors working on a case connected to him.

The government does not restrict the internet (63.5% of households had internet access in 2016, according to the ITU).

 

How Many Piracy Warnings Would Get You to Stop?

Post Syndicated from Andy original https://torrentfreak.com/how-many-piracy-warnings-would-get-you-to-stop-180422/

For the past several years, copyright holders in the US and Europe have been trying to reach out to file-sharers in an effort to change their habits.

Whether via high-profile publicity lawsuits or a simple email, it’s hoped that by letting people know they aren’t anonymous, they’ll stop pirating and buy more content instead.

Traditionally, most ISPs haven’t been that keen on passing infringement notices on. However, the BMG v Cox lawsuit seems to have made a big difference, with a growing number of ISPs now visibly warning their users that they operate a repeat infringer policy.

But perhaps the big question is how seriously users take these warnings because – let’s face it – that’s the entire point of their existence.

There can be little doubt that a few recipients will be scurrying away at the slightest hint of trouble, intimidated by the mere suggestion that they’re being watched.

Indeed, a father in the UK – who received a warning last year as part of the Get it Right From a Genuine Site campaign – confidently and forcefully assured TF that there would be no more illegal file-sharing taking place on his ten-year-old son’s computer again – ever.

In France, where the HADOPI anti-piracy scheme received much publicity, people receiving an initial notice are most unlikely to receive additional ones in future. A December 2017 report indicated that of nine million first warning notices sent to alleged pirates since 2012, ‘just’ 800,000 received a follow-up warning on top.

The suggestion is that people either stop their piracy after getting a notice or two, or choose to “go dark” instead, using streaming sites for example or perhaps torrenting behind a decent VPN.

But for some people, the message simply doesn’t sink in early on.

A post on Reddit this week by a TWC Spectrum customer revealed that despite a wealth of readily available information (including masses in the specialist subreddit where the post was made), even several warnings fail to have an effect.

“Was just hit with my 5th copyright violation. They halted my internet and all,” the self-confessed pirate wrote.

There are at least three important things to note from this opening sentence.

Firstly, the first four warnings did nothing to change the user’s piracy habits. Secondly, Spectrum presumably had enough at five warnings and kicked in a repeat-infringer suspension, presumably to avoid the same fate as Cox in the BMG case. Third, the account suspension seems to have changed the game.

Notably, rather than some huge blockbuster movie, that fifth warning came due to something rather less prominent.

“Thought I could sneak in a random episode of Rosanne. The new one that aired LOL. That fast. Under 24 hours I got shut off. Which makes me feel like [ISPs] do monitor your traffic and its not just the people sending them notices,” the post read.

Again, some interesting points here.

Any content can be monitored by rightsholders, but content that’s popular in the US seems more likely to result in a warning delivered via an ISP than content that’s popular elsewhere. However, the misconception that the monitoring is done by ISPs persists, despite that not being the case.

ISPs do not monitor users’ file-sharing activity, anti-piracy companies do. They can grab an IP address the second someone enters a torrent swarm, or even connects to a tracker. It happens in an instant, at a time of their choosing. Quickly jumping in and out of a torrent is no guarantee of going unnoticed, and the fallacy of not getting caught due to a failure to seed is just that – a fallacy.

But perhaps the most important thing is that after five warnings and a disconnection, the Reddit user decided to take action. Sadly for the people behind Roseanne, it’s not exactly the reaction they’d have hoped for.

“I do not want to push it but I am curious to what happens 6th time, and if I would even be safe behind a VPN,” he wrote.

“Just want to learn how to use a VPN and Sonarr and have a guilt free stress free torrent watching.”

Of course, there was no shortage of advice.

“If you have gotten 5 notices, you really should of learnt [sic] how to use a VPN before now,” one poster noted, perhaps inevitably.

But curiously, or perhaps obviously given the number of previous warnings, the fifth warning didn’t come as a surprise to the user.

“I knew they were going to hit me for it. I just didn’t think a 195mb file would do it. They were getting me for Disney movies in the past,” he added.

So how do you grab the attention of a persistent infringer like this? Five warnings and a suspension apparently. But clearly, not even that is a guarantee of success. Perhaps this is why most ‘strike’ schemes tend to give up on people who can’t be rehabilitated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-only media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Confused About the Hybrid Cloud? You’re Not Alone

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/confused-about-the-hybrid-cloud-youre-not-alone/

Hybrid Cloud. What is it?

Do you have a clear understanding of the hybrid cloud? If you don’t, it’s not surprising.

Hybrid cloud has been applied to a greater and more varied number of IT solutions than almost any other recent data management term. About the only thing that’s clear about the hybrid cloud is that the term hybrid cloud wasn’t invented by customers, but by vendors who wanted to hawk whatever solution du jour they happened to be pushing.

Let’s be honest. We’re in an industry that loves hype. We can’t resist grafting hyper, multi, ultra, super, and other prefixes onto words to entice customers with something new and shiny. The alphabet soup of cloud-related terms can include various options for where the cloud is located (on-premises, off-premises), whether the resources are private or shared in some degree (private, community, public), what type of services are offered (storage, computing), and what type of orchestrating software is used to manage the workflow and the resources. With so many moving parts, it’s no wonder potential users are confused.

Let’s take a step back, try to clear up the misconceptions, and come up with a basic understanding of what the hybrid cloud is. To be clear, this is our viewpoint. Others are free to do what they like, so bear that in mind.

So, What is the Hybrid Cloud?

To get beyond the hype, let’s start with Forrester Research’s idea of the hybrid cloud: “One or more public clouds connected to something in my data center. That thing could be a private cloud; that thing could just be traditional data center infrastructure.”

To put it simply, a hybrid cloud is a mash-up of on-premises and off-premises IT resources.

To expand on that a bit, we can say that the hybrid cloud refers to a cloud environment made up of a mixture of on-premises private cloud[1] resources combined with third-party public cloud resources that use some kind of orchestration[2] between them. The advantage of the hybrid cloud model is that it allows workloads and data to move between private and public clouds in a flexible way as demands, needs, and costs change, giving businesses greater flexibility and more options for data deployment and use.

In other words, if you have some IT resources in-house that you are replicating or augmenting with an external vendor, congrats, you have a hybrid cloud!

Private Cloud vs. Public Cloud

The cloud is really just a collection of purpose-built servers. In a private cloud, the servers are dedicated to a single tenant or a group of related tenants. In a public cloud, the servers are shared between multiple unrelated tenants (customers). A public cloud is off-site, while a private cloud can be on-site or off-site (on-prem or off-prem).

As an example, let’s look at a hybrid cloud meant for data storage, a hybrid data cloud. A company might set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage to save cost and reduce the amount of storage needed on-site. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
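
As a sketch of how such a rule might be automated, the following Python fragment finds files untouched for more than a year and hands them to an upload routine. The upload_to_cloud function is a hypothetical placeholder for whatever client your public cloud provider offers (for B2, its native API or CLI, for example), and the one-year threshold simply mirrors the policy described above.

import time
from pathlib import Path

ONE_YEAR_SECONDS = 365 * 24 * 60 * 60

def upload_to_cloud(path: Path) -> None:
    # Hypothetical placeholder: call your cloud storage client here, then
    # remove or stub out the local copy once the upload has been verified.
    print(f"would upload {path}")

def tier_stale_files(root: str, max_age: float = ONE_YEAR_SECONDS) -> None:
    cutoff = time.time() - max_age
    for path in Path(root).rglob("*"):
        # Only regular files whose last modification is older than the cutoff are moved.
        if path.is_file() and path.stat().st_mtime < cutoff:
            upload_to_cloud(path)

tier_stale_files("/data/accounting")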

The hybrid cloud concept also contains cloud computing. For example, at the end of the quarter, order processing application instances can be spun up off-premises in a hybrid computing cloud as needed to add to on-premises capacity.

Hybrid Cloud Benefits

If we accept that the hybrid cloud combines the best elements of private and public clouds, then its benefits are clear, and two primary ones stand out.

Diagram of the Components of the Hybrid Cloud

Benefit 1: Flexibility and Scalability

Undoubtedly, the primary advantage of the hybrid cloud is its flexibility. It takes time and money to manage in-house IT infrastructure, and adding capacity requires advance planning.

The cloud is ready and able to provide IT resources whenever needed on short notice. The term cloud bursting refers to the on-demand and temporary use of the public cloud when demand exceeds resources available in the private cloud. For example, some businesses experience seasonal spikes that can put an extra burden on private clouds. These spikes can be taken up by a public cloud. Demand also can vary with geographic location, events, or other variables. The public cloud provides the elasticity to deal with these and other anticipated and unanticipated IT loads. The alternative would be fixed cost investments in on-premises IT resources that might not be efficiently utilized.

For a data storage user, on-premises private cloud storage provides, among other benefits, the highest-speed access. For data that is not frequently accessed, or not needed at the absolute lowest levels of latency, it makes sense for the organization to move it to a location that is secure, but less expensive. The data is still readily available, and the public cloud provides a better platform for sharing the data with specific clients, users, or with the general public.

Benefit 2: Cost Savings

The public cloud component of the hybrid cloud provides cost-effective IT resources without incurring capital expenses and labor costs. IT professionals can determine the best configuration, service provider, and location for each service, thereby cutting costs by matching the resource with the task best suited to it. Services can be easily scaled, redeployed, or reduced when necessary, saving costs through increased efficiency and avoiding unnecessary expenses.

Comparing Private vs Hybrid Cloud Storage Costs

To get an idea of the difference in storage costs between a purely on-premises solution and one that uses a hybrid of private and public storage, we’ll present two scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table is the same format, all we’ve done is change how the data is distributed: private (on-premises) cloud or public (off-premises) cloud. We are using the costs for our own B2 Cloud Storage in this example. The math can be adapted for any set of numbers you wish to use.

Scenario 1: 100% of data in on-premises storage

                                          100 TB      1,000 TB     2,000 TB
Data stored on-premises (100%)            100 TB      1,000 TB     2,000 TB
On-premises cost, low ($12/TB/month)      $1,200      $12,000      $24,000
On-premises cost, high ($20/TB/month)     $2,000      $20,000      $40,000

Scenario 2: 20% of data on-premises, 80% in public cloud storage (B2)

                                          100 TB      1,000 TB     2,000 TB
Data stored on-premises (20%)             20 TB       200 TB       400 TB
Data stored in cloud (80%)                80 TB       800 TB       1,600 TB
On-premises cost, low ($12/TB/month)      $240        $2,400       $4,800
On-premises cost, high ($20/TB/month)     $400        $4,000       $8,000
Public cloud cost, low ($5/TB/month, B2)  $400        $4,000       $8,000
Public cloud cost, high ($20/TB/month)    $1,600      $16,000      $32,000
Combined cost, low                        $640        $6,400       $12,800
Combined cost, high                       $2,000      $20,000      $40,000

As can be seen in the numbers above, using a hybrid cloud solution and storing 80% of the data in the cloud with a provider such as Backblaze B2 can result in significant savings over storing only on-premises. For other cost scenarios, see the B2 Cost Calculator.
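
The arithmetic behind the tables is simple enough to script. The sketch below reproduces the figures above, assuming the same illustrative rates ($12 and $20 per TB/month on-premises, $5 per TB/month for B2 at the low end); substitute your own rates and on-premises/cloud split to adapt it.

def monthly_cost(total_tb, on_prem_fraction, on_prem_rate, cloud_rate=5.0):
    """Return the combined monthly storage cost in dollars for a given split."""
    on_prem_tb = total_tb * on_prem_fraction
    cloud_tb = total_tb - on_prem_tb
    return on_prem_tb * on_prem_rate + cloud_tb * cloud_rate

for tb in (100, 1000, 2000):
    all_on_prem = monthly_cost(tb, 1.0, on_prem_rate=12.0)
    hybrid = monthly_cost(tb, 0.2, on_prem_rate=12.0)
    print(f"{tb} TB: all on-premises ${all_on_prem:,.0f}/month vs hybrid ${hybrid:,.0f}/month")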

When Hybrid Might Not Always Be the Right Fit

There are circumstances where the hybrid cloud might not be the best solution. Smaller organizations operating on a tight IT budget might best be served by a purely public cloud solution. The cost of setting up and running private servers is substantial.

An application that requires the highest possible speed might not be suitable for hybrid, depending on the specific cloud implementation. While latency is a factor in data storage for some users, it is less of a factor for uploading and downloading data than it is for organizations using the hybrid cloud for computing. Because Backblaze recognized the importance of speed and low latency for customers wishing to use computing on data stored in B2, we directly connected our data centers with those of our computing partners, ensuring that latency would not be an issue even for a hybrid cloud computing solution.

It is essential to have a good understanding of workloads and their essential characteristics in order to make the hybrid cloud work well for you. Each application needs to be examined for the right mix of private cloud, public cloud, and traditional IT resources that fit the particular workload in order to benefit most from a hybrid cloud architecture.

The Hybrid Cloud Can Be a Win-Win Solution

From a high-altitude perspective, any solution that enables an organization to respond in a flexible manner to IT demands is a win. Avoiding big upfront capital expenses for in-house IT infrastructure will appeal to the CFO. Being able to quickly spin up IT resources as they’re needed will appeal to the CTO and VP of Operations.

Should You Go Hybrid?

We’ve arrived at the bottom line and the question is, should you or your organization embrace hybrid cloud infrastructures?

According to 451 Research, by 2019, 69% of companies will operate in hybrid cloud environments, and 60% of workloads will be running in some form of hosted cloud service (up from 45% in 2017). That indicates that the benefits of the hybrid cloud appeal to a broad range of companies.

In Two Years, More Than Half of Workloads Will Run in Cloud

Clearly, depending on an organization’s needs, there are advantages to a hybrid solution. While it might have been possible to dismiss the hybrid cloud in the early days of the cloud as nothing more than a buzzword, that’s no longer true. The hybrid cloud has evolved beyond the marketing hype to offer real solutions for an increasingly complex and challenging IT environment.

If an organization approaches the hybrid cloud with sufficient planning and a structured approach, a hybrid cloud can deliver on-demand flexibility, empower legacy systems and applications with new capabilities, and become a catalyst for digital transformation. The result can be an elastic and responsive infrastructure that has the ability to quickly respond to changing demands of the business.

As data management professionals increasingly recognize the advantages of the hybrid cloud, we can expect more and more of them to embrace it as an essential part of their IT strategy.

Tell Us What You’re Doing with the Hybrid Cloud

Are you currently embracing the hybrid cloud, or are you still uncertain or hanging back because you’re satisfied with how things are currently? Maybe you’ve gone totally hybrid. We’d love to hear your comments below on how you’re dealing with the hybrid cloud.


[1] Private cloud can be on-premises or a dedicated off-premises facility.

[2] Hybrid cloud orchestration solutions are often proprietary, vertical, and task dependent.

The post Confused About the Hybrid Cloud? You’re Not Alone appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

CPC: a fine for Musicautor in its dispute with the Bulgarian National Radio

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/18/bnr-6/

A press release issued by the Commission for the Protection of Competition (CPC) today:

The Commission for the Protection of Competition (CPC) imposed a sanction of BGN 56,678 on MUSICAUTOR, the society of composers, authors of literary works related to music, and music publishers for the collective management of copyright, for an infringement under Art. 21, item 5 of the Protection of Competition Act. The infringement consists in an abuse of a dominant position on the market for granting rights to broadcast musical and literary works from Musicautor's repertoire wirelessly, and to transmit or retransmit the works over an electronic communications network, to providers of radio services on the territory of the country; conduct which may prevent, restrict or distort competition and harm the interests of consumers through the unjustified termination of the existing contractual relations with the Bulgarian National Radio, thereby obstructing the activity carried out by the radio.

The subject of the investigation in the proceedings was the conduct of the Musicautor society in connection with the termination of the relations between Musicautor and BNR existing under the Agreement authorising the use of musical and literary works on radio, and the resulting impossibility for the public radio to use the society's repertoire in its programmes.

In the course of the investigation it was established that the relations between the parties to the proceedings concerning the granting of rights to use Musicautor's repertoire are governed by an agreement authorising the use of musical and literary works on radio, concluded between the parties on 19.12.2011. The agreement was terminated as of 01.01.2017 by a notice given by Musicautor on 21.11.2016.

From the negotiations conducted between the parties to the proceedings and from Musicautor's overall conduct in this regard, it can be concluded that terminating the agreement in force between the parties was a means of obtaining the conditions sought by the society for determining the amount of the remuneration due to it for licensing the respective rights. By terminating the agreement with BNR, Musicautor, as an undertaking holding a dominant position, deprives the national radio of the possibility of competing effectively on the market on which it operates.

The decision may be appealed by the parties and by any third party with a legal interest within 14 days, running from its notification under the Administrative Procedure Code and, for third parties, from its publication in the Commission's electronic register.

The decision

Attached to the decision is a dissenting opinion by Dimitar Kyumyurdzhiev, Deputy Chairman of the CPC and supervising member on case file No. 141/2017.

TV Broadcaster Wants App Stores Blocked to Prevent Piracy

Post Syndicated from Andy original https://torrentfreak.com/tv-broadcaster-wants-app-stores-blocked-to-prevent-piracy-180416/

After first targeting torrent and regular streaming platforms with blocking injunctions, last year Village Roadshow and studios including Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount began looking at a new threat.

The action targeted HDSubs+, a reasonably popular IPTV service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee. The application was filed during October 2017 and targeted Australia’s largest ISPs.

In parallel, Hong Kong-based broadcaster Television Broadcasts Limited (TVB) launched a similar action, demanding that the same ISPs (including Telstra, Optus, TPG, and Vocus, plus subsidiaries) block several ‘pirate’ IPTV services, named in court as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

Due to the similarity of the cases, both applications were heard in Federal Court in Sydney on Friday. Neither case is as straightforward as blocking a torrent or basic streaming portal, so both applicants are having to deal with additional complexities.

The TVB case is of particular interest. Up to a couple of dozen URLs maintain the services, which are used to provide the content, an EPG (electronic program guide), updates and sundry other features. While most of these appear to fit the description of an “online location” designed to assist copyright infringement, where the Android-based software for the IPTV services is hosted provides an interesting dilemma.

ComputerWorld reports that the apps – which offer live broadcasts, video-on-demand, and catch-up TV – are hosted on as-yet-unnamed sites which are functionally similar to Google Play or Apple’s App Store. They’re repositories of applications that also carry non-infringing apps, such as those for Netflix and YouTube.

Nevertheless, despite clear knowledge of this dual use, TVB wants to have these app marketplaces blocked by Australian ISPs, which would not only render the illicit apps inaccessible to the public but all of the non-infringing ones too. Part of its argument that this action would be reasonable appears to be that legal apps – such as Netflix’s for example – can also be freely accessed elsewhere.

It will be up to Justice Nicholas to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, TVB also appears to have another problem which is directly connected to the copyright status in Australia of its China-focused live programming.

Justice Nicholas questioned whether watching a stream in Australia of TVB’s live Chinese broadcasts would amount to copyright infringement because no copy of that content is being made.

“If most of what is occurring here is a reproduction of broadcasts that are not protected by copyright, then the primary purpose is not to facilitate copyright infringement,” Justice Nicholas said.

One of the problems appears to be that China is not a party to the 1961 Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations. However, TVB is arguing that it should still receive protection because it airs pre-recorded content and the live broadcasts are also archived for re-transmission via catch-up services.

The question over whether unchoreographed live broadcasts receive protection has been raised in other regions but in most cases, a workaround has been found. The presence of broadcaster logos on screen (which receive copyright protection) is a factor and it’s been reported that broadcasters are able to record the ‘live’ action and transmit a copy just a couple of seconds later, thereby broadcasting an already-copyrighted work.

While TVB attempts to overcome its issues, Village Roadshow is facing some of its own in its efforts to take down HDSubs+.

It appears that at least partly in response to the Roadshow legal action, the service has undergone some modifications, including a change of brand to ‘Press Play Extra’. As reported by ZDNet, there have been structural changes too, which means that Roadshow can no longer “see under the hood”.

According to Justice Nicholas, there is no evidence that the latest version of the app infringes copyright but according to counsel for Village Roadshow, the new app is merely transitional and preparing for a possible future change.

“We submit the difference to be drawn is reactive to my clients serving on the operators a notice,” counsel for Roadshow argued, with an expert describing the new app as “almost like a placeholder.”

In short, Roadshow still wants all of the target domains in its original application blocked because the company believes there’s a good chance they’ll be reactivated in the future.

None of the ISPs involved in either case turned up to the hearings on Friday, which removes one layer of complexity in what appears thus far to be less than straightforward cases.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

EU: passenger name record data, a letter from WP29

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/14/pnr-11/

Passenger name record (PNR) data is a complex issue that calls for heightened attention because of the differing degrees of personal data protection in the EU and in third countries.

On 26 July 2017, the Court of Justice of the European Union (CJEU) found in its Opinion 1/15 on the EU-Canada PNR agreement that the agreement is partially incompatible with Articles 7, 8, 21 and 52 of the EU Charter of Fundamental Rights.

The Article 29 Working Party (WP29), which brings together representatives of the national data protection regulators, has published a letter expressing its concern that so far there is no indication that the opinion has been taken into account, either in the EU-Canada agreement or in the files of the other PNR agreements with Australia and the United States.

EU legal acts thus continue to be applied even though, according to the opinion of the Court of Justice, they do not comply with the EU Charter of Fundamental Rights.

WP29 sets out detailed arguments concerning contradictions and unclear, imprecise wording.

IP Address Fail: ISP Doesn’t Have to Hand ‘Pirates’ Details to Copyright Trolls

Post Syndicated from Andy original https://torrentfreak.com/ip-address-fail-isp-doesnt-have-to-hand-pirates-details-to-copyright-trolls-180414/

On October 27, 2016, UK-based Copyright Management Services (CMS) filed a case against Sweden-based ISP, Tele2.

CMS, run by Patrick Achache of German-based anti-piracy outfit MaverickEye (which in turn is deeply involved with infamous copyright troll outfit Guardaley), claimed that Tele2 customers had infringed its clients’ copyrights on the movies Cell and IT by sharing them via BitTorrent.

Since Tele2 had the personal details of the customers behind those IP addresses, CMS asked the Patent and Market Court to prevent the ISP from deleting the data before it could be handed over. Once in its possession, CMS would carry out the usual process of writing to customers and demanding cash settlements to make supposed lawsuits go away.

Tele2 complained that it could not hand over the details of customers using NAT addresses since it simply doesn’t hold that information. The ISP also said it could not hand over details of customers if IP address information had previously been deleted.

Taking these objections into consideration, in November 2017 the Court approved an interim order in respect of the remaining IP addresses. But there were significant problems which led the ISP to appeal.

According to tests carried out by Tele2, many of the IP addresses in the case did not relate to Sweden or indeed Tele2. In fact, some IP addresses belonged to foreign companies or mere affiliates of the ISP.

“Tele2 thus lacks the actual ability to provide information regarding a large part of the IP addresses covered by the submission,” the Court of Appeal noted in a decision published this week.

The problem appears to lie with the way the MaverickEye monitoring system attributed monitored IP addresses to Tele2.

The Court notes that the company relied on the RIPE Database which stated that the IP addresses in question were allocated to the “geographic area of Sweden”. According to Tele2, however, that wasn’t the case and as such, it had no information to hand over.

CMS, on the other hand, maintained that according to RIPE’s records, Tele2 was indeed the controller of the IP addresses in question so must hand over the information as requested.

While the Patent and Market Court said that Tele2 didn’t object to the MaverickEye monitoring software in terms of the data it collects on file-sharers, it noted that CMS had failed to initiate an investigation in respect of the IP addresses allegedly not belonging to Tele2.

“CMS has not invoked any investigation showing how the identification of the IP addresses in question is made in this case or who at Maverickeye UG was responsible for this,” the Court writes.

“Nor did CMS use the opportunity to hear representatives of Tele2 or others with Tele2 in mind to discover if the company has access to any of the current IP addresses and, if so, which.”

Considering the above, the Court notes that Tele2’s statement, that it doesn’t have access to the data, must stand.

“In these circumstances, CMS, against Tele2’s appeal, has not shown that Tele2 holds the information requested by the disclosure order. CMS’ application for a disclosure order should therefore be rejected,” the Court concludes.

The decision cannot be appealed so Copyright Management Services won’t get its hands on the personal details of the people behind the IP addresses, at least through this process.

The decision (Swedish, pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

AWS AppSync – Production-Ready with Six New Features

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-appsync-production-ready-with-six-new-features/

If you build (or want to build) data-driven web and mobile apps and need real-time updates and the ability to work offline, you should take a look at AWS AppSync. Announced in preview form at AWS re:Invent 2017 and described in depth here, AWS AppSync is designed for use in iOS, Android, JavaScript, and React Native apps. AWS AppSync is built around GraphQL, an open, standardized query language that makes it easy for your applications to request the precise data that they need from the cloud.

I’m happy to announce that the preview period is over and that AWS AppSync is now generally available and production-ready, with six new features that will simplify and streamline your application development process:

Console Log Access – You can now see the CloudWatch Logs entries that are created when you test your GraphQL queries, mutations, and subscriptions from within the AWS AppSync Console.

Console Testing with Mock Data – You can now create and use mock context objects in the console for testing purposes.

Subscription Resolvers – You can now create resolvers for AWS AppSync subscription requests, just as you can already do for query and mutate requests.

Batch GraphQL Operations for DynamoDB – You can now make use of DynamoDB’s batch operations (BatchGetItem and BatchWriteItem) across one or more tables in your resolver functions.

CloudWatch Support – You can now use Amazon CloudWatch Metrics and CloudWatch Logs to monitor calls to the AWS AppSync APIs.

CloudFormation Support – You can now define your schemas, data sources, and resolvers using AWS CloudFormation templates.

A Brief AppSync Review
Before diving in to the new features, let’s review the process of creating an AWS AppSync API, starting from the console. I click Create API to begin:

I enter a name for my API and (for demo purposes) choose to use the Sample schema:

The schema defines a collection of GraphQL object types. Each object type has a set of fields, with optional arguments:

If I was creating an API of my own I would enter my schema at this point. Since I am using the sample, I don’t need to do this. Either way, I click on Create to proceed:

The GraphQL schema type defines the entry points for the operations on the data. All of the data stored on behalf of a particular schema must be accessible using a path that begins at one of these entry points. The console provides me with an endpoint and key for my API:

It also provides me with guidance and a set of fully functional sample apps that I can clone:

When I clicked Create, AWS AppSync created a pair of Amazon DynamoDB tables for me. I can click Data Sources to see them:

I can also see and modify my schema, issue queries, and modify an assortment of settings for my API.
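
Outside the console, the endpoint and API key shown above are all you need to issue the same queries over plain HTTPS. The sketch below posts a GraphQL operation with Python's requests library; the endpoint URL and API key are placeholders, and the listEvents field is only an assumption about the sample schema, so check the field names your own schema actually exposes.

import requests

APPSYNC_ENDPOINT = "https://example.appsync-api.us-east-1.amazonaws.com/graphql"  # placeholder
API_KEY = "da2-xxxxxxxxxxxxxxxx"  # placeholder

query = """
query ListEvents {
  listEvents(limit: 5) {
    items { id name where when }
  }
}
"""

# API-key authorization uses the x-api-key header.
response = requests.post(
    APPSYNC_ENDPOINT,
    json={"query": query},
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
)
response.raise_for_status()
print(response.json())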

Let’s take a quick look at each new feature…

Console Log Access
The AWS AppSync Console already allows me to issue queries and to see the results, and now provides access to relevant log entries. In order to see the entries, I must enable logs (as detailed below), open up the LOGS, and check the checkbox. Here’s a simple mutation query that adds a new event. I enter the query and click the arrow to test it:

I can click VIEW IN CLOUDWATCH for a more detailed view:

To learn more, read Test and Debug Resolvers.

Console Testing with Mock Data
You can now create a context object in the console and have it passed to one of your resolvers for testing purposes. I’ll add a testResolver item to my schema:

Then I locate it on the right-hand side of the Schema page and click Attach:

I choose a data source (this is for testing and the actual source will not be accessed), and use the Put item mapping template:

Then I click Select test context, choose Create New Context, assign a name to my test content, and click Save (as you can see, the test context contains the arguments from the query along with values to be returned for each field of the result):

After I save the new Resolver, I click Test to see the request and the response:

Subscription Resolvers
Your AWS AppSync application can monitor changes to any data source by using the @aws_subscribe GraphQL schema directive and defining a Subscription type. The AWS AppSync client SDK connects to AWS AppSync using MQTT over WebSockets and the application is notified after each mutation. You can now attach resolvers (which convert GraphQL payloads into the protocol needed by the underlying storage system) to your subscription fields and perform authorization checks when clients attempt to connect. This allows you to perform the same fine-grained authorization routines across queries, mutations, and subscriptions.

To learn more about this feature, read Real-Time Data.

Batch GraphQL Operations
Your resolvers can now make use of DynamoDB batch operations that span one or more tables in a region. This allows you to use a list of keys in a single query, read records from multiple tables, write records in bulk to multiple tables, and conditionally write or delete related records across multiple tables.

In order to use this feature the IAM role that you use to access your tables must grant access to DynamoDB’s BatchGetItem and BatchWriteItem operations.

To learn more, read the DynamoDB Batch Resolvers tutorial.

CloudWatch Logs Support
You can now tell AWS AppSync to log API requests to CloudWatch Logs. Click on Settings and Enable logs, then choose the IAM role and the log level:
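
If you would rather script this than click through the console, the same setting is exposed through the AppSync API. The sketch below uses boto3's update_graphql_api call, under the assumption that its logConfig parameter takes a field log level and a CloudWatch Logs role ARN as shown; the API ID, name, and role ARN are placeholders to replace with your own values.

import boto3

appsync = boto3.client("appsync")

# Placeholders: substitute your API ID, API name, and an IAM role allowed to write to CloudWatch Logs.
appsync.update_graphql_api(
    apiId="your-api-id",
    name="your-api-name",
    authenticationType="API_KEY",
    logConfig={
        "fieldLogLevel": "ALL",  # NONE, ERROR, or ALL
        "cloudWatchLogsRoleArn": "arn:aws:iam::123456789012:role/AppSyncPushToCloudWatchLogs",
    },
)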

CloudFormation Support
You can use the following CloudFormation resource types in your templates to define AWS AppSync resources:

AWS::AppSync::GraphQLApi – Defines an AppSync GraphQL API, including its name and authentication configuration.

AWS::AppSync::ApiKey – Defines an API key that clients can use to access the GraphQL API.

AWS::AppSync::GraphQLSchema – Defines a GraphQL schema.

AWS::AppSync::DataSource – Defines a data source.

AWS::AppSync::Resolver – Defines a resolver by referencing a schema and a data source, and includes a mapping template for requests.

Here’s a simple schema definition in YAML form:

  AppSyncSchema:
    Type: "AWS::AppSync::GraphQLSchema"
    DependsOn:
      - AppSyncGraphQLApi
    Properties:
      ApiId: !GetAtt AppSyncGraphQLApi.ApiId
      Definition: |
        schema {
          query: Query
          mutation: Mutation
        }
        type Query {
          singlePost(id: ID!): Post
          allPosts: [Post]
        }
        type Mutation {
          putPost(id: ID!, title: String!): Post
        }
        type Post {
          id: ID!
          title: String!
        }

Available Now
These new features are available now and you can start using them today! Here are a couple of blog posts and other resources that you might find to be of interest:

Jeff;

 

 

WHOIS Limits Under GDPR Will Make Pirates Harder to Catch, Groups Fear

Post Syndicated from Andy original https://torrentfreak.com/whois-limits-under-gdpr-will-make-pirates-harder-to-catch-groups-fear-180413/

The General Data Protection Regulation (GDPR) is a regulation in EU law covering data protection and privacy for all individuals within the European Union.

As more and more personal data is gathered, stored and (ab)used online, the aim of the GDPR is to protect EU citizens from breaches of privacy. The regulation applies to all companies processing the personal data of subjects residing in the Union, no matter where in the world the company is located.

Penalties for non-compliance can be severe. While there is a tiered approach according to severity, organizations can be fined up to 4% of annual global turnover or €20 million, whichever is greater. Needless to say, the regulations will need to be taken seriously.

Among those affected are domain name registries and registrars who publish the personal details of domain name owners in the public WHOIS database. In a full entry, a person or organization’s name, address, telephone numbers and email addresses can often be found.

This raises a serious issue. While registries and registrars are instructed and contractually obliged to publish data in the WHOIS database by global domain name authority ICANN, in millions of cases this conflicts with the requirements of the GDPR, which prevents the details of private individuals being made freely available on the Internet.

As explained in detail by the EFF, ICANN has been trying to resolve this clash. Its proposed interim model for GDPR compliance (pdf) envisions registrars continuing to collect full WHOIS data but not necessarily publishing it, to “allow the existing data to be preserved while the community discussions continue on the next generation of WHOIS.”

But the proposed changes, which will inevitably restrict free access to WHOIS information, have plenty of people spooked, including thousands of companies belonging to entertainment industry groups such as the MPAA, IFPI, RIAA and the Copyright Alliance.

In a letter sent to Vice President Andrus Ansip of the European Commission, these groups and dozens of others warn that restricted access to WHOIS will have a serious effect on their ability to protect their intellectual property rights from “cybercriminals” which pose a threat to their businesses.

Signed by 50 organizations involved in IP protection and other areas of online security, the letter expresses concern that in attempting to comply with the GDPR, ICANN is on a course to “over-correct” while disregarding proportionality, accountability and transparency.

A small sample of the groups calling on ICANN

“We strongly assert that this model does not properly account for the critical public and legitimate interests served by maintaining a sufficient amount of data publicly available while respecting privacy interests of registrants by instituting a tiered or layered access system for the vast majority of personal data as defined by the GDPR,” the groups write.

The letter focuses on two aspects of “over-correction”, the first being ICANN’s proposal that no personal data whatsoever of a domain name registrant will be made available “without appropriate consideration or balancing of the countervailing interests in public disclosure of a limited amount of such data.”

In response to ICANN’s proposal that only the province/state and country of a domain name registrant be made publicly available, the groups advise the organization that publishing “a natural person registrant’s e-mail address” in a publicly accessible WHOIS directory will not constitute a breach of the GDPR.

“[W]e strongly believe that the continued public availability of the registrant’s e-mail address – specifically the e-mail address that the registrant supplies to the registrar at the time the domain name is purchased and which e-mail address the registrar is required to validate – is critical for several reasons,” the groups write.

“First, it is the data element that is typically the most important to have readily available for law enforcement, consumer protection, particularly child protection, intellectual property enforcement and cybersecurity/anti-malware purposes.

“Second, the public accessibility of the registrant’s e-mail address permits a broad array of threats and illegal activities to be addressed quickly and the damage from such threats mitigated and contained in a timely manner, particularly where the abusive/illegal activity may be spawned from a variety of different domain names on different generic Top Level Domains,” they add.

The groups also argue that since making email addresses available is effectively required in light of Article 5.1(c) ECD, “there is no legitimate justification to discontinue public availability of the registrant’s e-mail address in the WHOIS directory and especially not in light of other legitimate purposes.”

The EFF, on the other hand, says that being able to contact a domain owner wouldn’t necessarily require an email address to be made public.

“There are other cases in which it makes sense to allow members of the public to contact the owner of a domain, without having to obtain a court order,” EFF writes.

“But this could be achieved very simply if ICANN were simply to provide something like a CAPTCHA-protected contact form, which would deliver email to the appropriate contact point with no need to reveal the registrant’s actual email address.”

The groups’ second main concern is that ICANN reportedly makes no distinction between name registrants that are “natural persons versus those that are legal entities” and intends to treat them all as if they are subject to the GDPR, despite the fact that the regulation only applies to data associated with an “identified or identifiable natural person”.

They say it is imperative that EU Data Protection Authorities are made to understand that registrants who obtain a domain for illegal purposes often register it as a “natural person” even when registering as a legal person (legal entity) would be more appropriate, despite the fact that the latter would grant them less privacy.

“Consequently, the test for differentiating between a legal and natural person should not merely be the legal status of the registrant, but also whether the registrant is, in fact, acting as a legal or natural person vis a vis the use of the domain name,” the groups note.

“We therefore urge that ICANN be given appropriate guidance as to the importance of maintaining a distinction between natural person and legal person registrants and keeping as much data about legal person domain name registrants as publicly accessible as possible,” they conclude.

What will happen with WHOIS on May 25 still isn’t clear. It wasn’t until October 2017 that ICANN finally determined that it would be affected by the GDPR, meaning that it’s been scrambling ever since to meet the compliance date. And it still is, according to the latest available documentation (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Schrems II – A Preliminary Reference to the CJEU on EU–US Data Transfers

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/13/20166schrems-ii/

Maximilian Schrems is once again in the spotlight: yesterday the 11 questions that the Irish court has formulated for a preliminary reference to the Court of Justice of the EU under Article 267 TFEU became public. As is known, in its October 2017 judgment (see para. 335) the court had already announced that it would make such a reference. Both Schrems and Facebook objected – according to Schrems there is no need because the legal framework is clear; according to Facebook there is no need because the protection afforded to EU citizens is adequate.

The Irish court will nevertheless ask whether personal data transferred from the EU to the US under the European Commission's decision on the new mechanism (the Privacy Shield decision) violates the rights of EU citizens under Articles 7 and 8 of the EU Charter. It will also ask whether the restrictions that EU citizens face in the US are proportionate or strictly necessary within the meaning of Article 52(1) of the Charter.

Here are the 11 questions.

 

*

Maximilian Schrems is an Austrian doctoral student in law who brought a data protection case against Facebook – a case that led to the EU–US personal data agreement (Safe Harbor) being declared invalid. The EU and the US later introduced a new mechanism, the EU–US Privacy Shield.

Schrems believes that the measures under the Privacy Shield are again inadequate to protect the data of EU citizens, in particular with respect to how Facebook operates: the transfer of personal data from Facebook in Ireland to the parent company in the US is governed by that mechanism, which in Schrems's view does not protect EU citizens effectively. As an argument, Schrems points to Facebook's involvement with Prism, the NSA data-collection program that gathers data through Facebook: US law requires Facebook to assist the NSA, while EU law prohibits exactly that.

More specifically, Schrems considers that the transfer of his personal data to Facebook in the US violates his right to private life – as an EU citizen – under EU law.

That is why Schrems turned to the Irish data protection authority (Facebook's European headquarters is in Ireland); the matter reached the court, in October 2017 the court decided to refer questions to the Court of Justice of the EU, and on 12 April 2018 Judge Caroline Costello published the questions.

According to experts, a ruling is at least a year and a half away, although the Court of Justice can prioritize the case if it chooses.

How to retain system tables’ data spanning multiple Amazon Redshift clusters and run cross-cluster diagnostic queries

Post Syndicated from Karthik Sonti original https://aws.amazon.com/blogs/big-data/how-to-retain-system-tables-data-spanning-multiple-amazon-redshift-clusters-and-run-cross-cluster-diagnostic-queries/

Amazon Redshift is a data warehouse service that logs the history of the system in STL log tables. The STL log tables manage disk space by retaining only two to five days of log history, depending on log usage and available disk space.

To retain STL tables’ data for an extended period, you usually have to create a replica table for every system table. Then, for each system table, you load the data from the system table into its replica at regular intervals. By maintaining replica tables for STL tables, you can run diagnostic queries on historical data from the STL tables. You then can derive insights from query execution times, query plans, and disk-spill patterns, and make better cluster-sizing decisions. However, refreshing replica tables with live data from STL tables at regular intervals requires schedulers such as Cron or AWS Data Pipeline. Also, these tables are specific to one cluster and they are not accessible after the cluster is terminated. This is especially true for transient Amazon Redshift clusters that last for only a finite period of ad hoc query execution.

In this blog post, I present a solution that exports system tables from multiple Amazon Redshift clusters into an Amazon S3 bucket. This solution is serverless, and you can schedule it as frequently as every five minutes. The AWS CloudFormation deployment template that I provide automates the solution setup in your environment. The system tables’ data in the Amazon S3 bucket is partitioned by cluster name and query execution date to enable efficient joins in cross-cluster diagnostic queries.

I also provide another CloudFormation template later in this post. This second template helps to automate the creation of tables in the AWS Glue Data Catalog for the system tables’ data stored in Amazon S3. After the system tables are exported to Amazon S3, you can run cross-cluster diagnostic queries on the system tables’ data and derive insights about query executions in each Amazon Redshift cluster. You can do this using Amazon QuickSight, Amazon Athena, Amazon EMR, or Amazon Redshift Spectrum.

You can find all the code examples in this post, including the CloudFormation templates, AWS Glue extract, transform, and load (ETL) scripts, and the resolution steps for common errors you might encounter in this GitHub repository.

Solution overview

The solution in this post uses AWS Glue to export system tables’ log data from Amazon Redshift clusters into Amazon S3. The AWS Glue ETL jobs are invoked at a scheduled interval by AWS Lambda. AWS Systems Manager, which provides secure, hierarchical storage for configuration data management and secrets management, maintains the details of Amazon Redshift clusters for which the solution is enabled. The last-fetched time stamp values for the respective cluster-table combination are maintained in an Amazon DynamoDB table.

The following diagram covers the key steps involved in this solution.

The solution as illustrated in the preceding diagram flows like this:

  1. The Lambda function, invoke_rs_stl_export_etl, is triggered at regular intervals, as controlled by Amazon CloudWatch. It looks up the AWS Systems Manager parameter store to get the details of the Amazon Redshift clusters for which the system table export is enabled.
  2. The same Lambda function, based on the Amazon Redshift cluster details obtained in step 1, invokes the AWS Glue ETL job designated for that cluster. If an ETL job for the cluster is not found, the Lambda function creates one (see the sketch after this list).
  3. The ETL job invoked for the Amazon Redshift cluster gets the cluster credentials from the parameter store. From the DynamoDB table, it gets the time stamp of when each system table was last exported from the respective Amazon Redshift cluster.
  4. The ETL job unloads the system tables’ data from the Amazon Redshift cluster into an Amazon S3 bucket.
  5. The ETL job updates the DynamoDB table with the last exported time stamp value for each system table exported from the Amazon Redshift cluster.
  6. The Amazon Redshift cluster system tables’ data is available in Amazon S3 and is partitioned by cluster name and date for running cross-cluster diagnostic queries.
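
Here is a minimal sketch of what steps 1 and 2 could look like inside the scheduled Lambda function. The parameter name and the job-name convention come from this post; the handler name, error handling, and job-creation details are simplified assumptions, not the exact code shipped with the solution.

import boto3

ssm = boto3.client('ssm')
glue = boto3.client('glue')

def handler(event, context):
    # Step 1: look up the enabled clusters in the Systems Manager parameter store.
    enabled = ssm.get_parameter(
        Name='redshift_query_logs.global.enabled_cluster_list'
    )['Parameter']['Value']

    for cluster in enabled.split(','):
        # Step 2: invoke the AWS Glue ETL job designated for this cluster.
        job_name = '{}_extract_rs_query_logs'.format(cluster.strip())
        try:
            glue.start_job_run(JobName=job_name)
        except glue.exceptions.EntityNotFoundException:
            # The real solution creates the missing ETL job here;
            # that setup is omitted from this sketch.
            pass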

Understanding the configuration data

This solution uses AWS Systems Manager parameter store to store the Amazon Redshift cluster credentials securely. The parameter store also securely stores other configuration information that the AWS Glue ETL job needs for extracting and storing system tables’ data in Amazon S3. Systems Manager comes with a default AWS Key Management Service (AWS KMS) key that it uses to encrypt the password component of the Amazon Redshift cluster credentials.

The following table explains the global parameters and cluster-specific parameters required in this solution. The global parameters are defined once and applicable at the overall solution level. The cluster-specific parameters are specific to an Amazon Redshift cluster and repeat for each cluster for which you enable this post’s solution. The CloudFormation template explained later in this post creates these parameters as part of the deployment process.

Parameter name Type Description
Global parameters (defined once and applied to all jobs)
redshift_query_logs.global.s3_prefix String The Amazon S3 path where the query logs are exported. Under this path, each exported table is partitioned by cluster name and date.
redshift_query_logs.global.tempdir String The Amazon S3 path that AWS Glue ETL jobs use for temporarily staging the data.
redshift_query_logs.global.role String The name of the role that the AWS Glue ETL jobs assume. Just the role name is sufficient. The complete Amazon Resource Name (ARN) is not required.
redshift_query_logs.global.enabled_cluster_list StringList A comma-separated list of cluster names for which system tables’ data export is enabled. This gives flexibility for a user to exclude certain clusters.
Cluster-specific parameters (for each cluster specified in the enabled_cluster_list parameter)
redshift_query_logs.<<cluster_name>>.connection String The name of the AWS Glue Data Catalog connection to the Amazon Redshift cluster. For example, if the cluster name is product_warehouse, the entry is redshift_query_logs.product_warehouse.connection.
redshift_query_logs.<<cluster_name>>.user String The user name that AWS Glue uses to connect to the Amazon Redshift cluster.
redshift_query_logs.<<cluster_name>>.password Secure String The password that AWS Glue uses to connect to the Amazon Redshift cluster, encrypted with a key that is managed in AWS KMS.

For example, suppose that you have two Amazon Redshift clusters, product-warehouse and category-management, for which the solution described in this post is enabled. In this case, the parameters shown in the following screenshot are created by the solution deployment CloudFormation template in the AWS Systems Manager parameter store.

Solution deployment

To make it easier for you to get started, I created a CloudFormation template that automatically configures and deploys the solution—only one step is required after deployment.

Prerequisites

To deploy the solution, you must have one or more Amazon Redshift clusters in a private subnet. This subnet must have a network address translation (NAT) gateway or a NAT instance configured, and also a security group with a self-referencing inbound rule for all TCP ports. For more information about why AWS Glue ETL needs the configuration it does, described previously, see Connecting to a JDBC Data Store in a VPC in the AWS Glue documentation.
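
If your security group does not yet have the self-referencing rule, here is a minimal sketch (boto3) of adding it; the group ID is a placeholder you would replace with the group attached to your Amazon Redshift clusters:

import boto3

ec2 = boto3.client('ec2')

SG_ID = 'sg-0123456789abcdef0'  # placeholder security group ID

# Self-referencing inbound rule on all TCP ports, as AWS Glue requires.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 0,
        'ToPort': 65535,
        'UserIdGroupPairs': [{'GroupId': SG_ID}],
    }],
)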

To start the deployment, launch the CloudFormation template:

CloudFormation stack parameters

The following table lists and describes the parameters for deploying the solution to export query logs from multiple Amazon Redshift clusters.

Property Default Description
S3Bucket mybucket The bucket this solution uses to store the exported query logs, stage code artifacts, and perform unloads from Amazon Redshift. For example, the mybucket/extract_rs_logs/data bucket is used for storing all the exported query logs for each system table partitioned by the cluster. The mybucket/extract_rs_logs/temp/ bucket is used for temporarily staging the unloaded data from Amazon Redshift. The mybucket/extract_rs_logs/code bucket is used for storing all the code artifacts required for Lambda and the AWS Glue ETL jobs.
ExportEnabledRedshiftClusters Requires Input A comma-separated list of cluster names from which the system table logs need to be exported.
DataStoreSecurityGroups Requires Input A list of security groups with an inbound rule to the Amazon Redshift clusters provided in the parameter, ExportEnabledClusters. These security groups should also have a self-referencing inbound rule on all TCP ports, as explained on Connecting to a JDBC Data Store in a VPC.

After you launch the template and create the stack, you see that the following resources have been created:

  1. AWS Glue connections for each Amazon Redshift cluster you provided in the CloudFormation stack parameter, ExportEnabledRedshiftClusters.
  2. All parameters required for this solution created in the parameter store.
  3. The Lambda function that invokes the AWS Glue ETL jobs for each configured Amazon Redshift cluster at a regular interval of five minutes.
  4. The DynamoDB table that captures the last exported time stamps for each exported cluster-table combination.
  5. The AWS Glue ETL jobs to export query logs from each Amazon Redshift cluster provided in the CloudFormation stack parameter, ExportEnabledRedshiftClusters.
  6. The IAM roles and policies required for the Lambda function and AWS Glue ETL jobs.

After the deployment

For each Amazon Redshift cluster for which you enabled the solution through the CloudFormation stack parameter, ExportEnabledRedshiftClusters, the automated deployment includes temporary credentials that you must update after the deployment:

  1. Go to the parameter store.
  2. Note the parameters redshift_query_logs.<<cluster_name>>.user and redshift_query_logs.<<cluster_name>>.password that correspond to each Amazon Redshift cluster for which you enabled this solution. Edit these parameters to replace the placeholder values with the right credentials.

For example, if product-warehouse is one of the clusters for which you enabled system table export, you edit these two parameters with the right user name and password and choose Save parameter.

Querying the exported system tables

Within a few minutes after the solution deployment, you should see Amazon Redshift query logs being exported to the Amazon S3 location, <<S3Bucket_you_provided>>/extract_redshift_query_logs/data/. In that location, you should see the eight system tables partitioned by cluster name and date: stl_alert_event_log, stl_dlltext, stl_explain, stl_query, stl_querytext, stl_scan, stl_utilitytext, and stl_wlm_query.

To run cross-cluster diagnostic queries on the exported system tables, create external tables in the AWS Glue Data Catalog. To make it easier for you to get started, I provide a CloudFormation template that creates an AWS Glue crawler, which crawls the exported system tables stored in Amazon S3 and builds the external tables in the AWS Glue Data Catalog.

Launch this CloudFormation template to create external tables that correspond to the Amazon Redshift system tables. S3Bucket is the only input parameter required for this stack deployment. Provide the same Amazon S3 bucket name where the system tables’ data is being exported. After you successfully create the stack, you can see the eight tables in the database, redshift_query_logs_db, as shown in the following screenshot.

Now, navigate to the Athena console to run cross-cluster diagnostic queries. The following screenshot shows a diagnostic query executed in Athena that retrieves query alerts logged across multiple Amazon Redshift clusters.
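
As a rough sketch of such a diagnostic query, submitted here through the Athena API with boto3, you could count alert events per cluster. The database name redshift_query_logs_db comes from this solution, but the partition column name (cluster), the stl_alert_event_log column names, and the results location are assumptions you should verify against the tables the crawler actually creates in your account:

import boto3

athena = boto3.client('athena')

# Partition and column names are assumptions for illustration.
query = """
SELECT cluster, event, count(*) AS alert_count
FROM redshift_query_logs_db.stl_alert_event_log
GROUP BY cluster, event
ORDER BY alert_count DESC
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'redshift_query_logs_db'},
    ResultConfiguration={
        'OutputLocation': 's3://mybucket/athena-results/'  # placeholder output location
    },
)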

You can build the following example Amazon QuickSight dashboard by running cross-cluster diagnostic queries on Athena to identify the hourly query count and the key query alert events across multiple Amazon Redshift clusters.

How to extend the solution

You can extend this post’s solution in two ways:

  • Add any new Amazon Redshift clusters that you spin up after you deploy the solution.
  • Add other system tables or custom query results to the list of exports from an Amazon Redshift cluster.

Extend the solution to other Amazon Redshift clusters

To extend the solution to more Amazon Redshift clusters, add the three cluster-specific parameters in the AWS Systems Manager parameter store following the guidelines earlier in this post. Modify the redshift_query_logs.global.enabled_cluster_list parameter to append the new cluster to the comma-separated string.
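
For example, to enable a hypothetical new cluster named sales-warehouse, the three cluster-specific parameters could be created and the global list extended with a few boto3 calls; all values below are placeholders to replace with your own:

import boto3

ssm = boto3.client('ssm')
cluster = 'sales-warehouse'  # hypothetical cluster name

# The three cluster-specific parameters.
ssm.put_parameter(Name='redshift_query_logs.{}.connection'.format(cluster),
                  Value='glue-connection-to-sales-warehouse', Type='String')
ssm.put_parameter(Name='redshift_query_logs.{}.user'.format(cluster),
                  Value='etl_user', Type='String')
ssm.put_parameter(Name='redshift_query_logs.{}.password'.format(cluster),
                  Value='replace-with-real-password', Type='SecureString')

# Append the new cluster to the global comma-separated list.
current = ssm.get_parameter(
    Name='redshift_query_logs.global.enabled_cluster_list'
)['Parameter']['Value']
ssm.put_parameter(
    Name='redshift_query_logs.global.enabled_cluster_list',
    Value=current + ',' + cluster,
    Type='StringList',
    Overwrite=True,
)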

Extend the solution to add other tables or custom queries to an Amazon Redshift cluster

The current solution ships with the export functionality for the following Amazon Redshift system tables:

  • stl_alert_event_log
  • stl_dlltext
  • stl_explain
  • stl_query
  • stl_querytext
  • stl_scan
  • stl_utilitytext
  • stl_wlm_query

You can easily add another system table or custom query by adding a few lines of code to the AWS Glue ETL job, <<cluster-name>>_extract_rs_query_logs. For example, suppose that from the product-warehouse Amazon Redshift cluster you want to export orders greater than $2,000. To do so, add the following five lines of code to the AWS Glue ETL job product-warehouse_extract_rs_query_logs, where product-warehouse is your cluster name:

  1. Get the last-processed time-stamp value. The function creates a value if it doesn’t already exist.

salesLastProcessTSValue = functions.getLastProcessedTSValue(trackingEntry="mydb.sales_2000", job_configs=job_configs)

  2. Run the custom query with the time stamp.

returnDF = functions.runQuery(query="select * from sales s join order o where o.order_amnt > 2000 and sale_timestamp > '{}'".format(salesLastProcessTSValue), tableName="mydb.sales_2000", job_configs=job_configs)

  3. Save the results to Amazon S3.

functions.saveToS3(dataframe=returnDF,s3Prefix=s3Prefix,tableName="mydb.sales_2000",partitionColumns=["sale_date"],job_configs=job_configs)

  4. Get the latest time-stamp value from the returned data frame in Step 2.

latestTimestampVal = functions.getMaxValue(returnDF, "sale_timestamp", job_configs)

  5. Update the last-processed time-stamp value in the DynamoDB table.

functions.updateLastProcessedTSValue("mydb.sales_2000", latestTimestampVal[0], job_configs)

Conclusion

In this post, I demonstrate a serverless solution to retain the system tables’ log data across multiple Amazon Redshift clusters. By using this solution, you can incrementally export the data from system tables into Amazon S3. By performing this export, you can build cross-cluster diagnostic queries, build audit dashboards, and derive insights into capacity planning by using services such as Athena. I also demonstrate how you can extend this solution to other ad hoc query use cases or tables other than system tables by adding a few lines of code.


Additional Reading

If you found this post useful, be sure to check out Using Amazon Redshift Spectrum, Amazon Athena, and AWS Glue with Node.js in Production and Amazon Redshift – 2017 Recap.


About the Author

Karthik Sonti is a senior big data architect at Amazon Web Services. He helps AWS customers build big data and analytical solutions and provides guidance on architecture and best practices.

 

 

 

 

MPAA Quietly Shut Down Its ‘Legal’ Movie Search Engine

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-quietly-shut-down-its-legal-movie-search-engine-180411/

During the fall of 2014, Hollywood launched WhereToWatch, its very own search engine for movies and TV-shows.

The site enabled people to check if and where the latest entertainment was available, hoping to steer U.S. visitors away from pirate sites.

Aside from the usual critics, the launch received a ton of favorable press. This was soon followed up by another release highlighting some of the positive responses and praise from the press.

“The initiative marks a further attempt by the MPAA to combat rampant online piracy by reminding consumers of legal means to watch movies and TV shows,” the LA Times wrote, for example.

Over the past several years, the site hasn’t appeared in the news much, but it did help thousands of people find legal sources for the latest entertainment. However, those who try to access it today will notice that WhereToWatch has been abandoned, quietly.

The MPAA pulled the plug on the service a few months ago. And where the mainstream media covered its launch in detail, the shutdown received zero mentions. So why did the site fold?

According to MPAA Vice President of Corporate Communications, Chris Ortman, it was no longer needed as there are many similar search engines out there.

“Given the many search options commercially available today, which can be found on the MPAA website, WheretoWatch.com was discontinued at the conclusion of 2017,” Ortman informs TF.

“There are more than 140 lawful online platforms in the United States for accessing film and television content, and more than 460 around the world,” he adds.

The MPAA lists several of these alternative search engines on its new website. The old WhereToWatch domain now forwards to the MPAA’s online magazine ‘The Credits,’ which features behind-the-scenes stories and industry profiles.

While the MPAA is right that there are alternative search engines, many of these were already available when WhereToWatch launched. In fact, the site used the services of the competing service GoWatchIt for its search results.

Perhaps the lack of interest from the U.S. public played a role as well. The site never really took off and according to traffic estimates from SimilarWeb and Alexa, most of the visitors came from Iran, where the site was unusable due to a geo-block.

After searching long and hard we were able to track down a former WhereToWatch user on Reddit. This person had only just started to get into the service and was disappointed to see it go.

“So, does anyone know of better places or simply other places where this information lives in an easily accessible place?” he or she asked.

One person responded by recommending Icefilms.info, a pirate site. This is a response the MPAA would cringe at, but luckily, most people mentioned justwatch.com as the best alternative.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Contests… and Almanacs :)

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2131

Two announcements aimed at all fans of speculative fiction:

1

FOR YOUR ATTENTION: „ФантАstika 2017“

The eighth annual almanac „ФантАstika“ is out in print. As always, its editor is Atanas P. Slavov, chairman of the Society of Bulgarian SF Writers „Тера Фантазия“ (Terra Fantazia).
The almanac will interest not only readers familiar with the previous yearbooks but also connoisseurs of the super-genre (in all its forms) who are picking up this edition for the first time.

Translated authors are represented by an original novella by the Argentine writer Teresa Mira de Echeverría, a classic short story by the American Thomas Sherred, and a work by the Macedonian SF writer Nikola Subotić that was recently honored in the "Agop Melkonyan" contest.

In the large section devoted to Bulgarian authors you will find the doyen Hristo Poshtakov, presented as a master of science fiction, fantasy and humor, as well as new works by Tsenka Bakardzhieva, Valentin D. Ivanov, Martin Petkov and Yancho Cholakov, plus a fairy tale from the debut book of Mel.

As before, the "Fantastology" section is devoted to overviews of, and trends in, Bulgarian and world speculative fiction, plus remote encounters with classics such as Svetoslav Minkov and Elin Pelin, seen through the eyes of Boryana Vladimirova and Alexander Karapanchev. Several articles look at Spanish-language women writers, Russian thematic currents in modern SF, Bulgarian speculative fiction in a new audio form, and the latest issue of the magazine „Тера фантастика“.

In the "Constellation Kinotavr" section you will learn about some of the current screen adaptations of science fiction novels, about the Englishman who wrote the screenplay for "Artificial Intelligence", and about a playful comic strip (on how the future of 2019 looked to Kubrick).

The issue announces a contest unique in its theme, „Изгревът на следващото“ (The Dawn of What Comes Next), for short stories devoted to a desirable future. The "Futurum" section includes articles on the new information religions, ends of the world that never happened, and some especially curious faCtastica.

Also in the pages of this almanac: selected paintings by the artist Andrian Bekyarov… a partisan report on Eurocon 2017 in Dortmund… poetry… and many other events from the inexhaustible realm of the imagination.

For more information: http://choveshkata.net/blog/?p=6617.

2

The Society of Bulgarian SF Writers „Тера Фантазия“ and the „Човешката библиотека“ (Human Library) Foundation invite all authors to take part in the first „Изгревът на следващото“ (The Dawn of What Comes Next) contest.

More than one contest for Bulgarian literary texts is running at the moment, but this is the only one whose theme is a possible movement toward a positive future. Today, in an era of rampant dystopias and uncritical catastrophic thinking, it takes real intellectual courage to search for the forms of the Way Out. Courage to assume that the Human spirit is capable of finding its path to a higher level, intellect to imagine it, and talent to defend it artistically.

What is the solution to the problem called "A Present in Crisis"?

What is the solution that leads to a higher state of Humaneness and Humanity, toward a future in which Humane Reason has outgrown inhumane ignorance?

What is the solution that will create a world in which science and technology develop so that the quality of the Human being grows, rather than the riches of the few?

What is the solution that avoids frozen utopias in which poseurs in white chitons recite pompous speeches to one another?

The „Изгревът на следващото“ contest will be the place where stories devoted to this search are published – works that, with artistic talent and modeling power, defend new worlds of this kind in one of the following two ways:

  • Along the spiral toward what comes next: the fates of individuals and societies searching for a way out of the current crisis of our world; portraits of scientists, thinkers and ordinary people groping in the dark of the unknown for the paths toward that goal; adventures of characters drawn into such a spiral process who gradually come to understand its meaning.
  • Visions of what comes next: characters who have emerged in our present day yet bear the marks of the new, possessing inner freedom even though they are locked in the cage of today's social unfreedom; groups and societies that have attained features of what comes next without escapism, fanaticism or asceticism; humanitarian technologies that liberate people from objectification and reveal the ethical and intellectual resources of the Humane; consistent, realistically drawn societies of the future in which every person is fully developed and realized without depending on, or being owned by, another.

All genres are acceptable – it is enough for a story to touch on at least one of the two themes above.

The deadline for entries is June 1, 2018.

The three highest-ranked stories will each receive a prize of 200 leva and, together with other selected entries from the contest, will be published in forthcoming editions of the „ФантАstika“ almanac.

The full rules are described on the Човешката библиотека (Human Library) website: http://choveshkata.net/blog/?p=6668

There you will also find the most up-to-date information in case of changes.