Tag Archives: innovation

Google on Collision Course With Movie Biz Over Piracy & Safe Harbor

Post Syndicated from Andy original https://torrentfreak.com/google-on-collision-course-with-movie-biz-over-piracy-safe-harbor-180219/

Wherever Google has a presence, rightsholders are around to accuse the search giant of not doing enough to deal with piracy.

Over the past several years, the company has been attacked by both the music and movie industries but despite overtures from Google, criticism still floods in.

In Australia, things are definitely heating up. Village Roadshow, one of the nation’s foremost movie companies, has been an extremely vocal Google critic since 2015 but now its co-chief, the outspoken Graham Burke, seems to want to take things to the next level.

As part of yet another broadside against Google, Burke has for the second time in a month accused Google of playing a large part in online digital crime.

“My view is they are complicit and they are facilitating crime,” Burke said, adding that if Google wants to sue him over his comments, they’re very welcome to do so.

It’s highly unlikely that Google will take the bait. Burke’s attempt at pushing the issue further into the spotlight will have been spotted a mile off but in any event, legal battles with Google aren’t really something that Burke wants to get involved in.

Australia is currently in the midst of a consultation process for the Copyright Amendment (Service Providers) Bill 2017 which would extend the country’s safe harbor provisions to a broader range of service providers including educational institutions, libraries, archives, key cultural institutions and organizations assisting people with disabilities.

For its part, Village Roadshow is extremely concerned that these provisions may be extended to other providers – specifically Google – who might then use expanded safe harbor to deflect more liability in respect of piracy.

“Village Roadshow….urges that there be no further amendments to safe harbor and in particular there is no advantage to Australia in extending safe harbor to Google,” Burke wrote in his company’s recent submission to the government.

“It is very unlikely given their size and power that as content owners we would ever sue them but if we don’t have that right then we stand naked. Most importantly if Google do the right thing by Australia on the question of piracy then there will be no issues. However, they are very far from this position and demonstrably are facilitating crime.”

Accusations of crime facilitation are nothing new for Google, with rightsholders in the US and Europe having accused the company of the same a number of times over the years. In response, Google always insists that it abides by relevant laws and actually goes much further in tackling piracy than legislation currently requires.

On the safe harbor front, Google begins by saying that not expanding provisions to service providers will have a seriously detrimental effect on business development in the region.

“[Excluding] online service providers falls far short of a balanced, pro-innovation environment for Australia. Further, it takes Australia out of step with other digital economies by creating regulatory uncertainty for [venture capital] investment and startup/entrepreneurial success,” Google’s submission reads.

“[T]he Draft Bill’s narrow safe harbor scheme places Australian-based startups and online service providers — including individual bloggers, websites, small startups, video-hosting services, enterprise cloud companies, auction sites, online marketplaces, hosting providers for real-estate listings, photo hosting services, search engines, review sites, and online platforms —in a disadvantaged position compared with global startups in countries that have strong safe harbor frameworks, such as the United States, Canada, United Kingdom, Singapore, South Korea, Japan, and other EU countries.

“Under the new scheme, Australian-based startups and service providers, unlike their international counterparts, will not receive clear and consistent legal protection when they respond to complaints from rightsholders about alleged instances of online infringement by third-party users on their services,” Google notes.

Interestingly, Google then delivers what appears to be a thinly veiled threat.

One of the key anti-piracy strategies touted by the mainstream entertainment companies is collaboration between rightsholders and service providers, including the latter providing voluntary tools to police infringement online. Google says that if service providers are given a raw deal on safe harbor, the extent of future cooperation may be at risk.

“If Australian-based service providers are carved out of the new safe harbor regime post-reform, they will operate from a lower incentive to build and test new voluntary tools to combat online piracy, potentially reducing their contributions to innovation in best practices in both Australia and international markets,” the company warns.

But while Village Roadshow argues against safe harbors and warns that piracy could kill the movie industry, it is quietly optimistic that the tide is turning.

In a presentation to investors last week, the company said that reducing piracy would have “only an upside” for its business but also added that new research indicates that “piracy growth [is] getting arrested.” As a result, the company says that it will build on the notion that “74% of people see piracy as ‘wrong/theft’” and will call on Australians to do the right thing.

In the meantime, the pressure on Google will continue but lawsuits – in either direction – won’t provide an answer.

Village Roadshow’s submission can be found here, Google’s here (pdf).


EFF Urges US Copyright Office To Reject Proactive ‘Piracy’ Filters

Post Syndicated from Andy original https://torrentfreak.com/eff-urges-us-copyright-office-to-reject-proactive-piracy-filters-180213/

Faced with millions of individuals consuming unlicensed audiovisual content from a variety of sources, entertainment industry groups have been seeking solutions closer to the roots of the problem.

As widespread site-blocking attempts to tackle ‘pirate’ sites in the background, greater attention has turned to legal platforms that host both licensed and unlicensed content.

Under current legislation, these sites and services can do business relatively comfortably due to the so-called safe harbor provisions of the US Digital Millennium Copyright Act (DMCA) and the European Union Copyright Directive (EUCD).

Both sets of legislation ensure that Internet platforms can avoid being held liable for the actions of others provided they themselves address infringement when they are made aware of specific problems. If a video hosting site has a copy of an unlicensed movie uploaded by a user, for example, it must be removed within a reasonable timeframe upon request from the copyright holder.

However, in both the US and EU there is mounting pressure to make it more difficult for online services to achieve ‘safe harbor’ protections.

Entertainment industry groups believe that platforms use the law to turn a blind eye to infringing content uploaded by users, content that is often monetized before being taken down. With this in mind, copyright holders on both sides of the Atlantic are pressing for more proactive regimes, ones that will see Internet platforms install filtering mechanisms to spot and discard infringing content before it can reach the public.

While such a system would be welcomed by rightsholders, Internet companies are fearful of a future in which they could be held more liable for the infringements of others. They’re supported by the EFF, who yesterday presented a petition to the US Copyright Office urging caution over potential changes to the DMCA.

“As Internet users, website owners, and online entrepreneurs, we urge you to preserve and strengthen the Digital Millennium Copyright Act safe harbors for Internet service providers,” the EFF writes.

“The DMCA safe harbors are key to keeping the Internet open to all. They allow anyone to launch a website, app, or other service without fear of crippling liability for copyright infringement by users.”

It is clear that pressure to introduce mandatory filtering is a concern to the EFF. Filters are blunt instruments that cannot fathom the intricacies of fair use and are liable to stifle free speech and stymie innovation, they argue.

“Major media and entertainment companies and their surrogates want Congress to replace today’s DMCA with a new law that would require websites and Internet services to use automated filtering to enforce copyrights.

“Systems like these, no matter how sophisticated, cannot accurately determine the copyright status of a work, nor whether a use is licensed, a fair use, or otherwise non-infringing. Simply put, automated filters censor lawful and important speech,” the EFF warns.
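
To see why even a well-intentioned filter behaves this way, here is a deliberately simplified sketch (ours, not the EFF’s) of the most basic form of proactive filtering: fingerprint matching against a blocklist. It can only compare bytes; it cannot weigh licenses, context, or fair use.

    import hashlib

    # Toy blocklist of fingerprints (SHA-256 hex digests) of known-infringing files.
    # The value below is a made-up placeholder, not a real fingerprint.
    BLOCKLIST = {
        '0000000000000000000000000000000000000000000000000000000000000000',
    }

    def is_blocked(path):
        # Fingerprint the upload and check it against the blocklist. Note what
        # this cannot do: it has no idea whether the uploader holds a licence,
        # or whether the use is commentary, parody, or otherwise fair.
        with open(path, 'rb') as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in BLOCKLIST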

While its introduction was voluntary and doesn’t affect the company’s safe harbor protections, YouTube already has its own content filtering system in place.

ContentID is able to detect the nature of some content uploaded by users and give copyright holders a chance to remove or monetize it. The company says that the majority of copyright disputes are now handled by ContentID but the system is not perfect and mistakes are regularly flagged by users and mentioned in the media.

However, ContentID was also very expensive to implement so expecting smaller companies to deploy something similar on much more limited budgets could be a burden too far, the EFF warns.

“What’s more, even deeply flawed filters are prohibitively expensive for all but the largest Internet services. Requiring all websites to implement filtering would reinforce the market power wielded by today’s large Internet services and allow them to stifle competition. We urge you to preserve effective, usable DMCA safe harbors, and encourage Congress to do the same,” the EFF notes.

The same arguments, for and against, are currently raging in Europe where the EU Commission proposed mandatory upload filtering in 2016. Since then, opposition to the proposals has been fierce, with warnings of potential human rights breaches and conflicts with existing copyright law.

Back in the US, there are additional requirements for a provider to qualify for safe harbor, including having a named designated agent tasked with receiving copyright infringement notifications. This person’s name must be listed on a platform’s website and submitted to the US Copyright Office, which maintains a centralized online directory of designated agents’ contact information.

Under new rules, agents must be re-registered with the Copyright Office every three years, despite that not being a requirement under the DMCA. The EFF is concerned that by simply failing to re-register an agent, an otherwise responsible website could lose its safe harbor protections, even if the agent’s details have remained the same.

“We’re concerned that the new requirement will particularly disadvantage small and nonprofit websites. We ask you to reconsider this rule,” the EFF concludes.

The EFF’s letter to the Copyright Office can be found here.


Addressing Data Residency with AWS

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/addressing-data-residency-with-aws/


AWS has released a new whitepaper that has been requested by many AWS customers: AWS Policy Perspectives: Data Residency. Data residency is the requirement that all customer content processed and stored in an IT system must remain within a specific country’s borders, and it is one of the foremost concerns of governments that want to use commercial cloud services. General cybersecurity concerns and concerns about government requests for data have contributed to a continued focus on keeping data within countries’ borders. In fact, some governments have determined that mandating data residency provides an extra layer of security.

This approach, however, is counterproductive to the data protection objectives and the IT modernization and global economic growth goals that many governments have set as milestones. This new whitepaper addresses the real and perceived security risks expressed by governments when they demand in-country data residency by identifying the most likely and prevalent IT vulnerabilities and security risks, explaining the native security embedded in cloud services, and highlighting the roles and responsibilities of cloud service providers (CSPs), governments, and customers in protecting data.

Large-scale, multinational CSPs, often called hyperscale CSPs, represent a transformational disruption in technology because of how they support their customers with high degrees of efficiency, agility, and innovation as part of world-class security offerings. The whitepaper explains how hyperscale CSPs such as AWS, even when located outside a customer’s country, can provide high levels of data protection through safeguards on their own platform and turnkey tooling for their customers, while at the same time preserving nation-state regulatory sovereignty.

The whitepaper also considers the commercial, public-sector, and economic effects of data residency policies and offers considerations for governments to evaluate before enforcing requirements that can unintentionally limit public-sector digital transformation goals, in turn possibly leading to increased cybersecurity risk.

AWS continues to engage with governments around the world to hear and address their top-of-mind security concerns. We take seriously our commitment to advocate for our customers’ interests and enforce security from “ground zero.” This means that when customers use AWS, they can have the confidence that their data is protected with a level of assurance that meets, if not exceeds, their needs, regardless of where the data resides.

– Min Hyun, Cloud Security Policy Strategist

NAFTA Negotiations Heat Up Copyright “Safe Harbor” Clash

Post Syndicated from Ernesto original https://torrentfreak.com/nafta-negotiations-heat-up-copyright-safe-harbor-clash-180123/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago.

Over the past quarter-century trade has changed drastically, especially online, so the United States is now planning to modernize the international deal.

One of the topics that has received a lot of interest from various experts and stakeholders is safe harbors. In the US, Internet services are shielded from copyright infringement liability under the safe harbor provisions of the DMCA, but in Mexico and Canada, that’s not the case.

The latest round of NAFTA renegotiations is currently taking place in Montreal, and this is heating up the debate once again. Several legal scholars and advocacy groups believe that US-style safe harbor provisions are essential for Internet services to operate freely.

A group of more than fifty Internet law experts and organizations made this clear in a letter sent to the negotiators this week, urging them to make safe harbors part of the new deal.

“When NAFTA was negotiated, the Internet was an obscure electronic network. Since then, the Internet has become a significant — and essential — part of our societies and our economies,” the letter reads.

“To acknowledge this, if a modernized NAFTA contains a digital trade chapter, it should contain protections for online intermediaries from liability for third party online content, similar to the United States’ ‘Section 230’.”

The safe harbors in the Communications Decency Act and the DMCA ensure that services which deal with user-generated content, including Google, YouTube, Facebook, Twitter, and Wikipedia, are shielded from liability.

This immunity makes it easier for new user-generated services to launch, without the fear of expensive lawsuits, the argument goes.

However, not everyone sees it this way. In a letter cited by Variety, a group of 37 industry groups urges U.S. Trade Representative Robert Lighthizer to negotiate ‘strong’ safe harbor protections. Strong, in this case, means that simply responding to takedown notices is not always enough.

“If these anti-IP voices succeed, they will turn long-standing trade policy, with creativity and innovation at its core, on its head by transforming our trade agreements into blueprints for how to evade liability for IP theft,” they write.

The MPAA and RIAA, which also signed the letter, previously stressed that the current US safe harbors are not working. These industry groups believe that services such as YouTube exploit their safe harbor immunity and profit from it.

The RIAA, therefore, wants any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

So, what should a content industry-approved safe harbor look like then?

The music industry group says that these should only be available to passive platforms that are not actively engaged in communicating and do not generate any revenue from pirated content. This would exclude YouTube and many other Internet services.

While it’s clear that the ideas of both camps are hard to unite, there’s still the question of whether there will be a new and improved NAFTA version at all. President Trump has previously threatened to terminate the agreement.


Article from a Former Chinese PLA General on Cyber Sovereignty

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/article_from_a_.html

Interesting article by Major General Hao Yeli, Chinese People’s Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.

Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty, meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security, meanwhile, it promotes international cooperation and open development.

China has never been opposed to multi-party governance when appropriate, but rejects the denial of government’s proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.

In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.

AWS IoT, Greengrass, and Machine Learning for Connected Vehicles at CES

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-greengrass-and-machine-learning-for-connected-vehicles-at-ces/

Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centering on four principal attributes, often abbreviated as ACES:

Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.

Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.

Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.

Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).

Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.

On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.
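
To make the edge-inferencing idea more concrete, here is a minimal, hypothetical sketch of the kind of Lambda function that runs on a Greengrass core, scores data locally, and publishes the result back to AWS IoT. It uses the Greengrass Core SDK’s iot-data client; the topic name and the run_local_inference helper are illustrative placeholders, not taken from any of the demos described here.

    import json
    import greengrasssdk

    # Client for publishing MQTT messages from the Greengrass core to AWS IoT.
    client = greengrasssdk.client('iot-data')

    def run_local_inference(reading):
        # Placeholder: a real function would load a locally deployed model
        # (for example, an ML resource attached to the Greengrass group)
        # and score the reading on the device itself.
        return {'anomaly': False, 'score': 0.0}

    def function_handler(event, context):
        # Greengrass invokes this handler for each message routed to the function.
        result = run_local_inference(event)
        client.publish(
            topic='vehicle/telemetry/inference',  # illustrative topic name
            payload=json.dumps(result),
        )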

AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform will use Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. (The original post includes an architecture overview diagram.)

Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software), is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). (The original post embeds an overview video.)

AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.

Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. (The original post includes an architecture diagram for the voice assistance.)
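
As a purely illustrative sketch of the voice-output piece (assuming a Python client via boto3, which the post does not specify), contextual guidance text could be synthesized with Amazon Polly along these lines; the prompt, voice, and file name are arbitrary choices:

    import boto3

    polly = boto3.client('polly')

    # Turn a short assistant prompt into speech for the head unit to play.
    response = polly.synthesize_speech(
        Text='Welcome back. Your seat and climate settings have been restored.',
        OutputFormat='mp3',
        VoiceId='Joanna',  # arbitrary example voice
    )

    # AudioStream is a streaming body; persist it for playback.
    with open('prompt.mp3', 'wb') as f:
        f.write(response['AudioStream'].read())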

Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. The overall architecture is described in our Connected Vehicle Solution.

Digital Content Delivery – This demo will show how a customer uses a web-based 3D configurator to build and personalize their vehicle. It will also show high-resolution (4K) 3D imagery and an optional immersive AR/VR experience, both designed for use within a dealership.

Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.

To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.

Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.

When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.
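
As a small, hedged example of the local-plus-cloud pattern (a sketch of ours, not code from the solution itself), a test harness could publish a vehicle telemetry message to the AWS IoT message broker with boto3; the topic and payload fields below are invented for illustration and do not follow the solution’s actual schema:

    import json
    import boto3

    # The iot-data client publishes MQTT messages through the AWS IoT message broker.
    iot = boto3.client('iot-data', region_name='us-east-1')

    telemetry = {
        'vin': 'TESTVIN0000000001',  # made-up identifier
        'speed_kmh': 72,
        'engine_temp_c': 88,
    }

    iot.publish(
        topic='connectedcar/telemetry/TESTVIN0000000001',  # illustrative topic only
        qos=1,
        payload=json.dumps(telemetry),
    )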

Jeff;

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice, while others want to solve a specific problem using ready-made software that is already integrated with B2.
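
As one possible illustration of such a direct integration (a sketch using the b2sdk Python library, not an official Backblaze example; the key, bucket, and file names are placeholders):

    from b2sdk.v2 import InMemoryAccountInfo, B2Api

    # Authorize against the B2 API with an application key (placeholders below).
    info = InMemoryAccountInfo()
    api = B2Api(info)
    api.authorize_account('production', 'YOUR_KEY_ID', 'YOUR_APPLICATION_KEY')

    # Upload a local file into a bucket.
    bucket = api.get_bucket_by_name('example-bucket')
    bucket.upload_local_file(
        local_file='backup.tar.gz',
        file_name='backups/backup.tar.gz',
    )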

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today – diverse technologies that customers are using day in and day out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • How to create an automatically “resetting” demo environment for the sales team.
  • How to set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • The basics of OAuth and web single sign-on (SSO).
  • How to archive video workflows from camera to media asset management systems.
  • How to upload and download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and effective Google searching to fix problems excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language, plus a basic understanding of data structures and program flow.

Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre or post-sales engineer.

The right extra credit:
There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful are listed below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.

What’s it like working with the Sales team?
The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we”. We are truly a team.

We are honest with each other and our customers and communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company and we care deeply about their success.

If this all sounds like you:

  1. Send an email to [email protected] with the position in the subject line.
  2. Tell us a bit about your Sales Engineering experience.
  3. Include your resume.

The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Blockchain Startup White Rabbit Calls on Pirate Sites to Do Business, Legally

    Post Syndicated from Andy original https://torrentfreak.com/blockchain-startup-white-rabbit-calls-on-pirate-sites-to-do-business-legally-180102/

    For as long as piracy has been mainstream, people have tried to find ways to monetize the system. While many have had good intentions, only models focusing on the negative (copyright trolling, for example) have enjoyed any level of success.

    Blockchain startup White Rabbit is hoping to buck that trend but it’s not going to be easy. Then again, nothing worthwhile is, so what do they have to offer?

    White Rabbit begins with the assumption that while pirates love their pirate sites, as many as 60% of them would happily reward creators if it were made easy enough. The startup deals with this by inviting pirates to carry on using the kinds of unauthorized sites and services they’re using already, but with a twist.

    When users install the White Rabbit browser plug-in, the company will be able to see what content they are accessing. It will then attempt to match that download to deals it’s made with the companies behind those movies or TV shows, which then get paid a set amount.

    “White Rabbit is a content ecosystem accessed through a plugin that recognizes the film and series you stream. The streaming sites are P2P or open server, meaning users can choose where they want to stream,” White Rabbit CEO Alan R. Milligan informs TF.

    “We already have a library of films that have won and been nominated for Oscars, Cannes, Berlin and Venice film festival best film prizes – but will continue adding more films and series as we near launch.”

    It’s envisioned that this mechanism will prove popular with reluctant pirates since instead of paying Netflix, Amazon, and dozens of other services, users can pay for content through one channel. And, since White Rabbit uses blockchain technology, rights holders can be ensured complete financial transparency, with user payments going straight to them without delay, cutting out the middleman.

    “Users are anonymous but can offer filmmakers, artists or other content right holders (investors, distributors, sales agents) our tokens (WRT) as good faith that they are willing to pay for the content. Should the rights holders accept, we enter into a contract with the rights holder that allows them to receive revenue – and accept P2P streaming. We find, and research shows, that most people that are forced to piracy [do so] because they are just not able to access content,” Milligan adds.

    White Rabbit’s CEO, who is a filmmaker himself, also sees opportunities to bring fans and filmmakers closer together. Once users have paid for content, they continue to get access via something called the Rabbit Hole, an interface which provides extras that are normally found on a DVD, such as deleted scenes etc.

    The team behind White Rabbit describe themselves as “responsible rebels” hoping to spark a revolution. While that’s clearly the goal, by any measure there is a mountain to climb, not least on the content front.

    When TorrentFreak first started speaking with the startup in October last year, we were told they were “closing in on 500 films” with contracts, although they wouldn’t elaborate on who might be on board. Nevertheless, that is quite a lot of movies, especially given the mainstream studios’ hatred of pirate sites and anything they might be involved in.

    However, subsequent discussion suggests that those with more niche tastes might be White Rabbit’s initial target audience.

    “I believe timing is of big relevance and right now a lot of producers are scared of where they´re going to go now that Netflix is enforcing its 50/50 policy. There are also so many amazing films out there that get no or little digital distribution at all,” Milligan says.

    “As a Norwegian film producer there is little chance of the film being streamed in my home country – even if we won awards in Cannes and Venice. My latest film Valley of Shadows got US digital distribution, but in Norway – nada.

    “My colleagues around the world are suffering the same way, not to mention all the fans who cant watch local films and series. So the indie part of the industry – which is most of us (and still representing 20-30% of cinema sales) – are very ready for change.”

    But while indie producers could benefit nicely from White Rabbit, Milligan highlights problems that the big studios have, and suggests that they might like to see the startup succeed too.

    “The studios will likely want to see our business model work – but they also have a problem with Netflix which has become a studio. So they´re competitors now, but Netflix has a 100M subscriber advantage. Will they all break out and create each their streaming site for their content only? That would be terrible for fans,” he notes.

    That would indeed be a huge problem and it’s an issue we’ve raised here on TF on several occasions. However, if White Rabbit is to succeed, it needs to overcome significant hurdles. We raised just a handful of these with its CEO. First up, Partner Streaming Sites (PSS).

    PSS sites appear to be pirate sites that will partner with White Rabbit, so the latter can tap into the former’s userbases. When White Rabbit users stream ‘pirate’ content from a PSS, that content will be monetized, with the creator getting paid quickly and transparently. At that point, it seems, the content will become non-infringing.

    But while that sounds intriguing in theory, plenty of questions remain. White Rabbit says it will share “up to $1M” from its token sale “with the most innovative, brand conscious, film and series loving streaming sites either already out there, planned or about to launch.”

    The start-up says the best projects could get $100,000 each but, since its goal is to convert pirates, that necessarily means doing business with pirate sites.

    So we asked: how will it be possible to do business with people who are regularly described as criminals? How will it then become possible to secure deals with filmmakers who will undoubtedly come under huge pressure from industry players not to participate in the White Rabbit scheme?

    “What we are trying to do is to change digital distribution to everyone´s benefit. We have no interest in financing illegal content, we are interested in spurring innovation in streaming, access for fans and due payment for the rights holders,” Milligan explains.

    “That´s what PSS can help us achieve using the WRT (White Rabbit Token) – that helps us find out who wants to be part of this model. No revenue exchanges hands until rights holders accept the token. What is important for rights holders is that we generate more revenue for them than current business models, and we haven´t even included the Rabbit Hole revenue yet.”

    So what happens if a White Rabbit user tries to stream something that isn’t part of the program? According to Milligan, PSS sites must remove the content and let White Rabbit users know they must get the content legally elsewhere.

    Clearly, the vast majority of pirate site users aren’t White Rabbit users now, nor will they be so in the future, so the removal of content is massively counter-productive for pirate sites. Indeed, it’s this reluctance to take down infringing content that causes them most of their problems.

    So, hypothetically, what happens when the operators of streaming site X (that previously partnered with White Rabbit) get arrested and their site shut down for distributing Hollywood content that isn’t part of the program?

    “PSS´s would never distribute illegal content, we are offering an opportunity to monetize. We are allowing a platform to those that see monetized P2P as beneficial to their income stream,” Milligan says.

    “Hollywood is tricky though, I admit. The proof is in the pudding, so if we have to prove the value through indie and arthouse films first that´s OK. That is still 30% of the multi-billion dollar film market, so we are OK to start with that.”

    The final issue is the price and where revenue goes. White Rabbit envisions a user paying $2 for a film and $1 for a TV show, although producers are free to set their own price. That means 11 TV shows or five movies per month, given the Netflix model/budget of roughly $11.00 for the same period.

    Revenue generated would then be split, with 75% going to the rightsholders, 15% to White Rabbit, and 10% to PSS sites. There’s also a provision for non-PSS sites to be a part of the program, but they would only get 5%, with the remaining 5% going to White Rabbit.
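
    For a rough sense of the arithmetic, here is a tiny sketch of the proposed split (the percentages are only those described above; nothing beyond them is confirmed):

        def split_purchase(price, via_pss=True):
            # Rights holders always receive 75% under the proposed model.
            rights_holder = price * 0.75
            if via_pss:
                white_rabbit, site = price * 0.15, price * 0.10
            else:
                # Non-PSS sites get 5%, with the other 5% going to White Rabbit.
                white_rabbit, site = price * 0.20, price * 0.05
            return rights_holder, white_rabbit, site

        # A $2 film bought via a Partner Streaming Site:
        # rights holder $1.50, White Rabbit $0.30, PSS $0.20.
        print(split_purchase(2.00))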

    With an incredibly ambitious project like this, it’s easy to find reasons why it might not succeed or even fail to get off the ground. But the team behind the operation have lots of experience in relevant fields and from what we’ve seen are putting considerable effort into getting things moving, as their white paper (pdf) explains.

    Currently, White Rabbit is seeking conversation with prospective Partner Streaming Sites, who will provide the content on which White Rabbit will survive. It will certainly be interesting to see which sites put themselves forward for consideration.

    This is one of those projects that raises a dizzying volume of questions, with each living up to their billing as part of the Rabbit Hole. The big question is whether the Rabbit Hole will eventually lead to Wonderland or will render everyone who ventures inside feeling surreal and disorientated.


    Five ‘Fantastic’ Piracy Predictions for 2018

    Post Syndicated from Ernesto original https://torrentfreak.com/five-fantastic-piracy-predictions-for-2018-180101/

    On January 1, the TF newsroom often wonders what copyright and piracy news the new year will have in store.

    Today we want to give our readers some insight into some of the things that crossed our minds.

    Granted, predicting the future isn’t an easy task, but the ‘fantastic’ forecasts below give plenty of food for thought and discussion.

    Power Cord Manufacturer Held Liable for Streaming Piracy

    Hollywood’s concerns over pirate streaming boxes will reach unprecedented levels this year. After successful cases against box sellers and add-on developers, the major movie studios will take aim at the hardware.

    A Chinese power cord manufacturer, believed to be linked to more than half of all the streaming boxes sold throughout the world, will be taken to court.

    The movie studios argue that the power-cords are essential to make pirate streaming boxes work. They are therefore liable for contributory copyright infringement and should pay for the billions in losses they are partly responsible for.

    Pirate Sites Launch ‘The Pirate Coin’

    In 2017 The Pirate Bay added a cryptocoin miner to its website, an example many other pirate sites followed. In the new year, there will be another cryptocurrency innovation that will have an even more profound effect.

    After Google adds its default ad-blocker to the Chrome browser, a coalition of torrent sites will release The Pirate Coin.

    With this new cryptocurrency, users can buy all sorts of perks and features on their favorite download and streaming portals. From priority HD streaming, through personalized RSS feeds, to VIP access – Pirate Coins can pay for it all.

    The new coin will see mass adoption within a few months and provide a stable income for pirate sites, which no longer see the need for traditional ads.

    YouTube Music Label Signs First Artists

    For years on end, the major music labels have complained bitterly about YouTube. While the video service earned them millions, they demanded better deals and less piracy.

    In 2018, YouTube will run out of patience. The video streaming platform will launch a counter-attack and start its own record label. With a talent pool of millions of aspiring artists among its users, paired with the right algorithms, they are a force to be reckoned with.

    After signing the first artists, YouTube will scold the other labels for not giving their musicians the best deals.

    Comcast Introduces Torrent Pro Subscription

    While there’s still a lot of public outrage against the net neutrality repeal in 2018, torrent users are no longer complaining. After the changes are approved by Congress, Comcast will announce its first non-neutral Internet package.

    The Torrent Pro (®) package will allow subscribers to share files via BitTorrent in an optimized network environment.

    Their traffic will be routed over separate lanes with optimal connections to India, while minimizing interference from regular Internet users.

    The new package comes with a free VPN, of course, to ensure that all transfers take place in a fully encrypted setting without having to worry about false notifications from outsiders.

    Pirate Bay Goes All-in on Streaming

    The Pirate Bay turns 15 years old in 2018, which is an unprecedented achievement. While the site’s appearance hasn’t changed much since the mid-2000s, technically it has been slimmed down quite a bit.

    The resource-intensive tracker was removed from the site years ago, for example, and shortly after, the .torrent files followed. This made The Pirate Bay more ‘portable’ and easier to operate, the argument was.

    In 2018 The Pirate Bay will take things even further. Realizing that torrents are no longer as modern as they once were, TPB will make the switch to streaming, at least for video.

    While the site has experimented with streaming browser add-ons in the past, it will implement WebTorrent streaming support in the new year. This means users can stream high-quality videos directly from the TPB website.

    The new streaming feature will be released together with an overhaul of the search engine and site navigation, allowing users to follow TV shows more easily and see what’s new at a glance.

    Happy 2018!

    Don’t believe in any of the above? Look how accurate we were last year! Don’t forget the salt…


    Bitcoin: In Crypto We Trust

    Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/bitcoin-in-crypto-we-trust.html

    Tim Wu, who coined the term “net neutrality”, has written an op-ed in The New York Times called “The Bitcoin Boom: In Code We Trust”. He is wrong about “code”.

    The wrong “trust”

    Wu builds a big manifesto about how real-world institutions can’t be trusted. Certainly, this reflects the rhetoric from a vocal wing of Bitcoin fanatics, but it’s not the Bitcoin manifesto.

    Instead, the word “trust” in the Bitcoin paper is much narrower, referring to how online merchants can’t trust credit-cards (for example). When I bought school supplies for my niece when she studied in Canada, the online site wouldn’t accept my U.S. credit card. They didn’t trust my credit card. However, they trusted my Bitcoin, so I used that payment method instead, and succeeded in the purchase.

    Real-world currencies like dollars are tethered to the real-world, which means no single transaction can be trusted, because “they” (the credit-card company, the courts, etc.) may decide to reverse the transaction. The manifesto behind Bitcoin is that a transaction cannot be reversed — and thus, can always be trusted.

    Deliberately confusing the micro-trust in a transaction and macro-trust in banks and governments is a sort of bait-and-switch.

    The wrong inspiration

    Wu claims:

    “It was, after all, a carnival of human errors and misfeasance that inspired the invention of Bitcoin in 2009, namely, the financial crisis.”

    Not true. Bitcoin did not appear fully formed out of the void, but was instead based upon a series of innovations that predate the financial crisis by a decade. Moreover, the financial crisis had little to do with “currency”. The value of the dollar and other major currencies was essentially unscathed by the crisis. Certainly, enthusiasts looking backward like to cherry-pick the financial crisis as yet one more reason why the offline world sucks, but it had little to do with Bitcoin.

    In crypto we trust

    It’s not in code that Bitcoin trusts, but in crypto. Satoshi makes that clear in one of his posts on the subject:

    A generation ago, multi-user time-sharing computer systems had a similar problem. Before strong encryption, users had to rely on password protection to secure their files, placing trust in the system administrator to keep their information private. Privacy could always be overridden by the admin based on his judgment call weighing the principle of privacy against other concerns, or at the behest of his superiors. Then strong encryption became available to the masses, and trust was no longer required. Data could be secured in a way that was physically impossible for others to access, no matter for what reason, no matter how good the excuse, no matter what.

    You don’t possess Bitcoins. Instead, all the coins are on the public blockchain under your “address”. What you possess is the secret, private key that matches the address. Transferring Bitcoin means using your private key to unlock your coins and transfer them to another. If you print out your private key on paper, and delete it from the computer, it can never be hacked.

    Trust is in this crypto operation. Trust is in your private crypto key.
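
    To make that concrete, here is a minimal sketch using the third-party Python ecdsa library (Bitcoin’s signatures use ECDSA over the secp256k1 curve); it illustrates only the key-and-signature relationship, not Bitcoin’s actual transaction or address formats:

        import hashlib
        from ecdsa import SigningKey, SECP256k1

        # The private key is just a secret number; whoever holds it controls the coins.
        private_key = SigningKey.generate(curve=SECP256k1)
        public_key = private_key.get_verifying_key()

        # "Spending" amounts to signing a transaction digest with the private key.
        tx_digest = hashlib.sha256(b'pay 0.1 BTC to address X').digest()
        signature = private_key.sign(tx_digest)

        # Anyone can check the signature with the public key alone.
        assert public_key.verify(signature, tx_digest)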

    We don’t trust the code

    The manifesto “in code we trust” has been proven wrong again and again. We don’t trust computer code (software) in the cryptocurrency world.

    The most profound example is something known as the “DAO” on top of Ethereum, Bitcoin’s major competitor. Ethereum allows “smart contracts” containing code. The quasi-religious manifesto of the DAO smart-contract is that the “code is the contract”, that all the terms and conditions are specified within the smart-contract code, completely untethered from real-world terms-and-conditions.

    Then a hacker found a bug in the DAO smart-contract and stole most of the money.

    In principle, this is perfectly legal, because “the code is the contract”, and the hacker just used the code. In practice, the system didn’t live up to this. The Ethereum core developers, acting as central bankers, rewrote the Ethereum code to fix this one contract, returning the money back to its original owners. They did this because those core developers were themselves heavily invested in the DAO and got their money back.

    Similar things happen with the original Bitcoin code. A disagreement has arisen about how to expand Bitcoin to handle more transactions. One group wants smaller and “off-chain” transactions. Another group wants a “large blocksize”. This caused a “fork” in Bitcoin with two versions, “Bitcoin” and “Bitcoin Cash”. The fork championed by the core developers (central bankers) is worth around $20,000 right now, while the other fork is worth around $2,000.

    So it’s still “in central bankers we trust”, it’s just that now these central bankers are mostly online instead of offline institutions. They have proven to be even more corrupt than real-world central bankers. It’s certainly not the code that is trusted.

    The bubble

    Wu repeats the well-known reference to Amazon during the dot-com bubble. If you bought Amazon’s stock for $107 right before the dot-com crash, it would still be one of the wisest investments you could’ve made. Amazon shares are now worth around $1,200 each.

    The implication is that Bitcoin, too, may have such long term value. Even if you buy it today and it crashes tomorrow, it may still be worth ten-times its current value in another decade or two.

    This is a poor analogy, for three reasons.

    The first reason is that we knew the Internet had fundamentally transformed commerce. We knew there were going to be winners in the long run, it was just a matter of picking who would win (Amazon) and who would lose (Pets.com). We have yet to prove Bitcoin will be similarly transformative.

    The second reason is that businesses are real: they generate real income. While the stock price may include some irrational exuberance, it’s ultimately still based on the rational expectations of how much the business will earn. With Bitcoin, it’s almost entirely irrational exuberance — there are no long term returns.

    The third flaw in the analogy is that there is an essentially infinite number of cryptocurrencies. We saw this today as Coinbase started trading Bitcoin Cash, a fork of Bitcoin. The two are nearly identical, so there’s little reason one should be so much more valuable than another. It’s only a fickle fad that makes one more valuable than another, not business fundamentals. The successful future cryptocurrency is unlikely to exist today, but will be invented in the future.

    The lesson of the dot-com bubble is not that Bitcoin will have long term value, but that cryptocurrency companies like Coinbase and BitPay will have long term value. Or, the lesson is that “old” companies like JPMorgan that are early adopters of the technology will grow faster than their competitors.

    Conclusion

    The point of Wu’s op-ed is to distinguish between trust in traditional real-world institutions and trust in computer software code. This is an inaccurate reading of the situation.

    Bitcoin is not about replacing real-world institutions but about untethering online transactions.

    The trust in Bitcoin is in crypto — the power crypto gives individuals instead of third-parties.

    The trust is not in the code. Bitcoin is a “cryptocurrency” not a “codecurrency”.

    Now Open AWS EU (Paris) Region

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

    Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region allows AWS customers to better serve end users in and around France.

    The Details
    The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

    The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

    There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

    The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

    All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

    AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

    From Our Customers
    Many AWS customers are preparing to use this new Region. Here’s a small sample:

    Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

    SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

    Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic reintegration into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat, a leader in the French FoodTech industry, is using AWS to scale during traffic peaks and to accelerate its innovation process.

    AWS Consulting and Technology Partners
    We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners: Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners: ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners: Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

    AWS in France
    We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

    As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

    Use it Today
    The EU (Paris) Region is open for business now and you can start using it today!
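For example, pointing an AWS SDK for Java client at the new Region is simply a matter of selecting its identifier, eu-west-3. Here is a minimal sketch (credentials are assumed to come from the default provider chain):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ParisRegionExample {
    public static void main(String[] args) {
        // "eu-west-3" is the identifier of the EU (Paris) Region
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("eu-west-3")
                .build();
        // List the buckets visible to this account via the Paris endpoint
        s3.listBuckets().forEach(bucket -> System.out.println(bucket.getName()));
    }
}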

    Jeff;

     

USA: FCC Repeals Net Neutrality Rules

    Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/12/15/fcc-netneutr/

The repeal of the net neutrality rules is the most significant and controversial action taken by the US regulator, the FCC, under its newly appointed chairman Ajit Pai, writes The New York Times. In his first 11 months as chairman, he has also lifted restrictions on media ownership.

Netflix says the decision "is the beginning of a longer legal battle."


And our European reaction:


    Filed under: Digital, US Law

    What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

    Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

    HAMR drive illustration

During Q4, Backblaze deployed 100 petabytes worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain to tell us how Seagate engineers are developing the HAMR technology and making it market ready starting in late 2018.

    What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

    Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relatively near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

    HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

It sounds almost simple, but the science and engineering expertise required to perfect this technology, spanning research, experimentation, lab development, and product development, has been enormous. Below is an overview of the HAMR technology; you can dig into the details in our technical brief, which provides a point-by-point rundown of several key advances enabling the HAMR design.

    As much time and resources as have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

    History of Increasing Storage Capacity

For the last 50 years areal density in the hard disk drive has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping looking for a hard drive just like the one they bought two years ago. The demand that ever-growing data places on storage capacity keeps increasing, so the technology constantly evolves.

According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of data that can be stored in a given area of the disk, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

    Why do we Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies do the Job?

    Why is HAMR’s increased data density so important?

Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (that is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and critically, the evolution of data from playing a background role in business to playing a life-critical role.

    Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbit, Alexa and smart phones to home security systems, solar systems and autonomous cars — are fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure like power grids, water systems, hospitals, road infrastructure and public transportation all demand and add to the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

All of this entails a significant infrastructure cost behind the scenes to keep up with the insatiable, global appetite for data storage. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60TB 3.5-inch SSD unit for example), high-capacity hard drives serve as the primary foundational core of our interconnected, cloud and IoT-based dependence on data.

    HAMR Hard Drive Technology

Seagate has been working on heat assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we’ve made many breakthroughs in making reliable near field transducers, special high capacity HAMR media, and figuring out a way to put a laser, no larger than a grain of salt, on each and every head.

    The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

    A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

    Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

    Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
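As a rough back-of-the-envelope illustration (the platter geometry here is an assumption for the sake of the example, not a Seagate specification): a 3.5-inch platter offers very roughly 10 square inches of usable recording surface per side, so at 5 Tbpsi a single surface could hold around 50 terabits, or a little over 6 TB. A drive built with eight such platters (16 recording surfaces) would then land in the neighborhood of 100 TB of raw capacity, which is how areal densities in this range translate into the drive capacities mentioned above.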

    The major problem with packing bits so closely together is that if you do that on conventional magnetic media, the bits (and the data they represent) become thermally unstable, and may flip. So, to make the grains maintain their stability — their ability to store bits over a long period of time — we need to develop a recording media that has higher coercivity. That means it’s magnetically more stable during storage, but it is more difficult to change the magnetic characteristics of the media when writing (harder to flip a grain from a 0 to a 1 or vice versa).

    That’s why HAMR’s first key hardware advance required developing a new recording media that keeps bits stable — using high anisotropy (or “hard”) magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

    In fact the new media is so “hard” that conventional recording heads won’t be able to flip the bits, or write new data, under normal temperatures. If you add heat to the tiny spot on which you want to write data, you can make the media’s coercive field lower than the magnetic field provided by the recording head — in other words, enable the write head to flip that bit.

So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region of several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media’s coercive field lower than the write head’s magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit’s magnetic orientation is frozen in place.

Applying this dynamic nano-heating is where HAMR’s famous “laser” comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head, to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.

    Moving HAMR Forward

    HAMR write head

    As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a point-by-point short illustrated summary of HAMR’s key changes.

    Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

    With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.

    The post What is HAMR and How Does It Enable the High-Capacity Needs of the Future? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Canadian Government Triggers Major Copyright Review

    Post Syndicated from Andy original https://torrentfreak.com/canadian-government-triggers-major-copyright-review-171214/

    The Copyright Act of Canada was first passed in 1921 and in the decades that followed has undergone considerable amendment.

    Between 2005 and 2010, several bills failed to gain traction due to opposition but in 2011 the Copyright Modernization Act was tabled. A year later, in the summer of 2012, it was passed into law.

    The Act tackles a number of important issues, such as allowing time and format shifting, plus backup copies, as long as DRM isn’t circumvented along the way. So-called ‘fair dealing’ also enjoys expansion while statutory damages for non-commercial scale infringement are capped at CAD$5000 per proceeding. Along with these changes sits the “notice-and-notice” regime, in which ISPs forward infringement notices to subscribers on behalf of copyright holders.

    The Act also mandates a review of copyright law every five years, a period that expired at the end of June 2017. Yesterday a House of Commons motion triggered the required parliamentary review, which will be carried out by the Standing Committee on Industry, Science and Technology. It didn’t take long for the music industry to make its position known.

    Music Canada, whose key members are Sony Music, Universal Music and Warner Music, enthusiastically welcomed the joint announcement from the Minister of Innovation, Science and Economic Development and the Minister of Canadian Heritage.

    “I applaud Minister Bains and Minister Joly for initiating this review of the Copyright Act,” said Graham Henderson, President and CEO of Music Canada.

    “Music creators, and all creators who depend on copyright, deserve a Copyright Act that protects their rights when their works are commercialized by others. This is our chance to address the Value Gap threatening the livelihood of Canadian creators and the future of Canadian culture.”

    That the so-called “Value Gap” has been immediately thrown on the table comes as no surprise. The term, which loosely refers to the way user-generated platforms like YouTube are able to avoid liability for infringing content while generating revenue from it, is a hot topic around the world at the moment.

    In the US and Europe, for example, greater emphasis is being placed on YouTube’s position than on piracy itself, with record labels claiming that the platform gains an unfair advantage in licensing negotiations, something which leads to a “gap” between what is paid for music, and what it’s actually worth.

    But the recording labels are unlikely to get an easy ride. As pointed out in a summary by Canadian law professor Michael Geist, the notice-and-takedown rules that facilitate the “Value Gap” are not even part of Canadian law and even without them, the labels have done just fine.

    “The industry has enjoyed remarkable success since 2012, growing far faster [than] the world average and passing Australia as the world’s 6th largest music market,” Geist writes.

    “The growth has come largely through Internet streaming revenues, which now generate tens of millions of dollars every year for creators, publishers, and the broader industry. The industry is also likely to continue to lobby for copyright term extension, as foreshadowed by a lobbying blitz just last month in Ottawa.”

    As reported in September, telecoms companies and the entertainment industries are pressing for website blockades, without intervention from the courts. The upcoming copyright review will provide additional opportunity to push that message home.

    “Bell admits that copyright reform is not needed for site blocking, but the link to the Copyright Act ensures that the issue will be a prominent part of its lobbying campaign,” Geist notes.

    “The reality is that Canada is already home to some of the toughest anti-piracy laws in the world with many legislative tools readily available for rights holders and some of the largest damages provisions found anywhere in the world.”

    But for copyright holders, a review also has the potential to swing things the other way.

    The previously mentioned notice-and-notice regime, for example, was put in place as an alternative to more restrictive schemes elsewhere. However, it was quickly abused by copyright trolls seeking cash settlements from alleged pirates. It’s certainly possible for that particular loophole to be closed or at least addressed as part of a comprehensive review.

    In any event, the review is likely to prove spirited, with interested parties on all sides trying to carve out a smooth path for their interests under the next five years of copyright law.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    timeShift(GrafanaBuzz, 1w) Issue 25

    Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/12/08/timeshiftgrafanabuzz-1w-issue-25/

    Welcome to TimeShift

    This week, a few of us from Grafana Labs, along with 4,000 of our closest friends, headed down to chilly Austin, TX for KubeCon + CloudNativeCon North America 2017. We got to see a number of great talks and were thrilled to see Grafana make appearances in some of the presentations. We were also a sponsor of the conference and handed out a ton of swag (we overnighted some of our custom Grafana scarves, which came in handy for Thursday’s snow).

    We also announced Grafana Labs has joined the Cloud Native Computing Foundation as a Silver member! We’re excited to share our expertise in time series data visualization and open source software with the CNCF community.


    Latest Release

    Grafana 4.6.2 is available and includes some bug fixes:

    • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
    • Color picker: Bug after using textbox input field to change/paste color string #9769
    • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
    • Heatmap: Fixed tooltip for “time series buckets” mode #9332
    • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

    Download Grafana 4.6.2 Now


    From the Blogosphere

    Grafana Labs Joins the CNCF: Grafana Labs has officially joined the Cloud Native Computing Foundation (CNCF). We look forward to working with the CNCF community to democratize metrics and help unify traditionally disparate information.

    Automating Web Performance Regression Alerts: Peter and his team needed a faster and easier way to find web performance regressions at the Wikimedia Foundation. Grafana 4’s alerting features were exactly what they needed. This post covers their journey on setting up alerts for both RUM and synthetic testing and shares the alerts they’ve set up on their dashboards.

    How To Install Grafana on Ubuntu 17.10: As you probably guessed from the title, this article walks you through installing and configuring Grafana in the latest version of Ubuntu (or earlier releases). It also covers installing plugins using the Grafana CLI tool.

    Prometheus: Starting the Server with Alertmanager, cAdvisor and Grafana: Learn how to monitor Docker from scratch using cAdvisor, Prometheus and Grafana in this detailed, step-by-step walkthrough.

Monitoring Java EE Servers with Prometheus and Payara: In this screencast, Adam uses firehose, a Java EE 7+ metrics gateway for Prometheus, to convert the JSON output into Prometheus statistics and visualizes the data in Grafana.

    Monitoring Spark Streaming with InfluxDB and Grafana: This article focuses on how to monitor Apache Spark Streaming applications with InfluxDB and Grafana at scale.


    GrafanaCon EU, March 1-2, 2018

    We are currently reaching out to everyone who submitted a talk to GrafanaCon and will soon publish the final schedule at grafanacon.org.

    Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

    Get Your Ticket Now


    Grafana Plugins

    Lots of plugin updates and a new OpenNMS Helm App plugin to announce! To install or update any plugin in an on-prem Grafana instance, use the Grafana-cli tool, or install and update with 1 click on Hosted Grafana.
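For example, installing the new OpenNMS Helm app from the command line is a single command, followed by a restart of the Grafana server (the plugin ID shown here is our best guess; check the plugin's listing on grafana.com for the exact ID):

grafana-cli plugins install opennms-helm-app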

    NEW PLUGIN

    OpenNMS Helm App – The new OpenNMS Helm App plugin replaces the old OpenNMS data source. Helm allows users to create flexible dashboards using both fault management (FM) and performance management (PM) data from OpenNMS® Horizon™ and/or OpenNMS® Meridian™. The old data source is now deprecated.


    Install Now

    UPDATED PLUGIN

    PNP Data Source – This data source plugin (that uses PNP4Nagios to access RRD files) received a small, but important update that fixes template query parsing.


    Update

    UPDATED PLUGIN

    Vonage Status Panel – The latest version of the Status Panel comes with a number of small fixes and changes. Below are a few of the enhancements:

    • Threshold settings – removed Show Always option, and replaced it with 2 options:
      • Display Alias – Select when to show the metric alias.
      • Display Value – Select when to show the metric value.
    • Text format configuration (bold / italic) for warning / critical / disabled states.
    • Option to change the corner radius of the panel. Now you can change the panel’s shape to have rounded corners.

    Update

    UPDATED PLUGIN

    Google Calendar Plugin – This plugin received a small update, so be sure to install version 1.0.4.


    Update

    UPDATED PLUGIN

    Carpet Plot Panel – The Carpet Plot Panel received a fix for IE 11, and also added the ability to choose custom colors.


    Update


    Upcoming Events:

    In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

    Docker Meetup @ Tuenti | Madrid, Spain – Dec 12, 2017: Javier Provecho: Intro to Metrics with Swarm, Prometheus and Grafana

    Learn how to gain visibility in real time for your micro services. We’ll cover how to deploy a Prometheus server with persistence and Grafana, how to enable metrics endpoints for various service types (docker daemon, traefik proxy and postgres) and how to scrape, visualize and set up alarms based on those metrics.

    RSVP

Grafana Lyon Meetup n°2 | Lyon, France – Dec 14, 2017: This meetup will cover some of the latest innovations in Grafana and a discussion about automation. Also, free beer and chips, so – of course you’re going!

    RSVP

    FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and we’ve heard there were some great talks submitted. There is no need to register; all are welcome.


    Tweet of the Week

    We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

    We were thrilled to see our dashboards bigger than life at KubeCon + CloudNativeCon this week. Thanks for snapping a photo and sharing!


    Grafana Labs is Hiring!

    We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

    Check out our Open Positions


    How are we doing?

Hard to believe this is the 25th issue of timeShift! I have a blast writing these roundups, so let me know what you think. Submit a comment on this article below, or post something at our community forum. Find an article I haven’t included? Send it my way. Help us make timeShift better!

    Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

    Libertarians are against net neutrality

    Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

    This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one-hand clapping, you rarely hear the competing side, and thus, that side may sound attractive. This post is about the other side, from a libertarian point of view.

    That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

    This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

    Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

    Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

    Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

    It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

    A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

    The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

    FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

    This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

    But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

    The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, that it’s only regulating these evil conspiracy theories. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5-mbps to 25-mbps. That’s because while most consumers have multiple choices at 5-mbps, fewer consumers have multiple choices at 25-mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

    In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

    “But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

    No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC actions when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime time TV viewing hours when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.

    When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

    In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

    Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

    The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

    With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

    That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

    While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

    Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC permission first. And with the new heavy handed FCC rules, they’ve become hostile to this innovation. This attitude is highlighted by the statement from the “Open Internet” order:

    And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

    Conclusion

    Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

    The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

    It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

    If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

    In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

    Apple CEO is Optimistic VPN Apps Will Return to China App Store

    Post Syndicated from Andy original https://torrentfreak.com/apple-ceo-is-optimistic-vpn-apps-will-return-to-china-app-store-171206/

    As part of an emerging crackdown on tools and systems with the ability to bypass China’s ‘Great Firewall’, during the summer Chinese government pressure began to affect Apple.

    During the final days of July, Apple was forced to remove many of the most-used VPN applications from its Chinese App Store. In a short email from the company, VPN providers and software developers were told that VPN applications are considered illegal in China.

    “We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.

    While the position on the ground doesn’t appear to have changed in the interim, Apple Chief Executive Tim Cook today expressed optimism that the VPN apps would eventually be restored to their former positions on China’s version of the App Store.

    “My hope over time is that some of the things, the couple of things that’s been pulled, come back,” Cook said. “I have great hope on that and great optimism on that.”

    According to Reuters, Cook said that he always tries to find ways to work together to settle differences and if he gets criticized for that “so be it.”

    Speaking at the Fortune Forum in the Chinese city of Guangzhou, Cook said that he believes strongly in freedoms. But back home in the US, Apple has been strongly criticized for not doing enough to uphold freedom of speech and communication in China.

    Back in October, two US senators wrote to Cook asking why the company had removed the VPN apps from the company’s store in China.

    “VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet,” senators Ted Cruz and Patrick Leahy wrote.

    “While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’.”

    They were comments Senator Leahy underlined again yesterday.

    “American tech companies have become leading champions of free expression. But that commitment should not end at our borders,” Leahy told CNBC.

    “Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights.”

    Whether the optimism expressed by Cook today is based on discussions with the Chinese government is unknown. However, it seems unlikely that authorities would be willing to significantly compromise on their dedication to maintaining the Great Firewall, which not only controls access to locally controversial content but also seeks to boost the success of Chinese companies.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Implementing Dynamic ETL Pipelines Using AWS Step Functions

    Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

    This post contributed by:
    Wangechi Dole, AWS Solutions Architect
    Milan Krasnansky, ING, Digital Solutions Developer, SGK
    Rian Mookencherry, Director – Product Innovation, SGK

    Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today, data management challenges cannot be solved with traditional databases.

    Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

    In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

    SGK: Building dynamic ETL pipelines

    SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

    We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

    The data management system served two main functions:

    1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
2. Processing the data through normalization and applying complex algorithms and data transformations. The system goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc.—to the end consumer through automated data feeds or updates to existing ETL systems.

    We were faced with several challenges:

    • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
    • The complexity of reporting business rules that needed to be updated on a constant basis.
    • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combination of dates), which often resulted with up to three overlapping seasons at any given time.
    • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

    These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

    Solution overview

    Our solution included the following AWS services:

• AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
    • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
    • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
    • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
    • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
    • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
    • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

    Solution architecture

    This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

    Here are more details for the above diagram:

    1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
    2. The state machine invokes the first Lambda function.
    3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
    4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
    5. The state machine executes the second Lambda function using the Keys from DynamoDB.
    6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
    7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
    8. Finally, the Lambda function uploads the data into Amazon Redshift.

    To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

    We walk you through the state machine in more detail in the following sections.

    Walkthrough

    To get started, you need to:

    • Create a schedule in CloudWatch Events
    • Specify conditions for RDS data extracts
    • Create Amazon Redshift input files
    • Load data into Amazon Redshift

    Step 1: Create a schedule in CloudWatch Events
    Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
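One CloudWatch Events schedule expression that matches the timing described below is:

cron(0 3,14 * * ? *)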

    In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

    Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

    A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
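A purely illustrative sketch of such an item (the attribute names and values here are hypothetical, not the production schema):

{
  "Date": "2017-12-05",
  "Campaigns": ["MCS2017", "MCF2017", "SCS2017"]
}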

    The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

    Step 3: Create Amazon Redshift input files
    When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

    "Comment": ” AWS Amazon States Language.", 
      "StartAt": "FirstState",
     
"States": { 
      "FirstState": {
       
"Type": "Task",
       
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
        "Next": "ChoiceState" 
      } 

    As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the Amazon Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current campaign keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        keys.put("keyT", String.valueOf(keyList.size()));
        // 3. Return the key values and the key count from the loop above
        return keys;
    }
}

    The following JSON represents ChoiceState.

    "ChoiceState": {
       "Type" : "Choice",
       "Choices": [ 
       {

          "Variable": "$.keyT",
         "StringEquals": "3",
         "Next": "CurrentThreeKeys" 
       }, 
       {

         "Variable": "$.keyT",
        "StringEquals": "2",
        "Next": "CurrentTwooKeys" 
       } 
     ], 
     "Default": "DefaultState"
    }
    

    The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

    For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys, $key1, $key2 and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals two, the second Lambda function is executed twice in parallel.  The following JSON represents this parallel execution:

    "CurrentThreeKeys": { 
      "Type": "Parallel",
      "Next": "NextState",
      "Branches": [ 
      {

         "StartAt": “key31",
        "States": { 
           “key31": {

              "Type": "Task",
            "InputPath": "$.key1",
            "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
            "End": true 
           } 
        } 
      }, 
      {

         "StartAt": “key32",
        "States": { 
         “key32": {

            "Type": "Task",
           "InputPath": "$.key2",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
          } 
         } 
       }, 
       {

          "StartAt": “key33",
           "States": { 
              “key33": {

                    "Type": "Task",
                 "InputPath": "$.key3",
                 "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
               "End": true 
           } 
         } 
        } 
      ] 
    } 

    Step 4: Load data into Amazon Redshift
    The second Lambda function in the state machine extracts records from RDS associated with keys retrieved for DynamoDB. It processes the data then loads into an Amazon Redshift table. The following is code snippet for the second Lambda function in Java 8.

import java.util.Properties;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get basic configuration for the helper classes and an S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import the query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}
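The RdsDataService and RedshiftDataService helpers are internal classes and are not shown in the post. As a rough, hypothetical sketch of what the final load step could look like, the intermediate CSV written to S3 can be pulled into Amazon Redshift with a COPY statement issued over JDBC (the cluster endpoint, credentials, table, bucket, and IAM role below are placeholders, and the Amazon Redshift JDBC driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RedshiftCopyExample {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC endpoint for the Amazon Redshift cluster
        String url = "jdbc:redshift://example-cluster.xxxxxxxx.eu-west-1.redshift.amazonaws.com:5439/dev";
        try (Connection conn = DriverManager.getConnection(url, "awsuser", "example-password");
             Statement stmt = conn.createStatement()) {
            // COPY loads the intermediate CSV written by the second Lambda function into a table
            stmt.execute(
                "COPY campaign_data " +
                "FROM 's3://example-bucket/intermediate/MCS2017.csv' " +
                "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' " +
                "CSV");
        }
    }
}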

    After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

    Lessons learned

• At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
    • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
    • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

    Conclusion

    With Step Functions, we were able to fully automate the frequent configuration updates to our dataset resulting in significant cost savings, reduced risk to data errors due to system downtime, and more time for us to focus on new product development rather than support related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

    For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

    If you have questions or suggestions, please comment below.