Tag Archives: announcement

Weekly roundup: Upside down

Post Syndicated from Eevee original https://eev.ee/dev/2017/11/22/weekly-roundup-upside-down/

Complicated week.

  • blog: I wrote a rather large chunk of one post, but didn’t finish it. I also made a release category for, well, release announcements, so that maybe things I make will have a permanent listing and not fade into obscurity on my Twitter timeline.

  • fox flux: Drew some experimental pickups. Started putting together a real level with a real tileset (rather than the messy sketch sheet I’ve been using). Got doors partially working with some cool transitions. Wrote a little jingle for picking up a heart.

  • veekun: Started working on Ultra Sun and Ultra Moon; I have the games dumped to YAML already, so getting them onto the site shouldn’t take too much more work.

AWS Achieves FedRAMP JAB Moderate Provisional Authorization for 20 Services in the AWS US East/West Region

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-moderate-authorization-for-20-services-in-us-eastwest/

The AWS US East/West Region has received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) at the Federal Risk and Authorization Management Program (FedRAMP) Moderate baseline.

Though AWS has maintained an AWS US East/West Region Agency-ATO since early 2013, this announcement represents AWS’s carefully deliberated move to the JAB for the centralized maintenance of our P-ATO for 10 services already authorized. It also includes the addition of 10 new services to our FedRAMP program (see the complete list of services below), doubling the number of FedRAMP Moderate services available to our customers to enable increased use of the cloud and support modernized IT missions. Our public sector customers can now leverage this FedRAMP P-ATO as a baseline for their own authorizations and look to the JAB for centralized Continuous Monitoring reporting and updates. This is also a significant enhancement for partners who build their solutions on the AWS US East/West Region: they can now achieve FedRAMP JAB P-ATOs of their own for their Platform as a Service (PaaS) and Software as a Service (SaaS) offerings.

In line with FedRAMP security requirements, a FedRAMP-accredited Third Party Assessment Organization (3PAO) completed an independent assessment of our technical, management, and operational security controls to validate that they meet or exceed FedRAMP’s Moderate baseline requirements. Effective immediately, you can begin leveraging this P-ATO for the following 20 services in the AWS US East/West Region:

  • Amazon Aurora (MySQL)*
  • Amazon CloudWatch Logs*
  • Amazon DynamoDB
  • Amazon Elastic Block Store
  • Amazon Elastic Compute Cloud
  • Amazon EMR*
  • Amazon Glacier*
  • Amazon Kinesis Streams*
  • Amazon RDS (MySQL, Oracle, Postgres*)
  • Amazon Redshift
  • Amazon Simple Notification Service*
  • Amazon Simple Queue Service*
  • Amazon Simple Storage Service
  • Amazon Simple Workflow Service*
  • Amazon Virtual Private Cloud
  • AWS CloudFormation*
  • AWS CloudTrail*
  • AWS Identity and Access Management
  • AWS Key Management Service
  • Elastic Load Balancing

* Services with first-time FedRAMP Moderate authorizations

We continue to work with the FedRAMP Project Management Office (PMO), other regulatory and compliance bodies, and our customers and partners to ensure that we are raising the bar on our customers’ security and compliance needs.

To learn more about how AWS helps customers meet their security and compliance requirements, see the AWS Compliance website. To learn about what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. To review the public posting of our FedRAMP authorizations, see the FedRAMP Marketplace.

– Chris Gile, Senior Manager, AWS Public Sector Risk and Compliance

New White House Announcement on the Vulnerability Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/new_white_house_1.html

The White House has released a new version of the Vulnerabilities Equities Process (VEP). This is the inter-agency process by which the US government decides whether to inform the software vendor of a vulnerability it finds, or keep it secret and use it to eavesdrop on or attack other systems. You can read the new policy or the fact sheet, but the best place to start is Cybersecurity Coordinator Rob Joyce’s blog post.

In considering a way forward, there are some key tenets on which we can build a better process.

Improved transparency is critical. The American people should have confidence in the integrity of the process that underpins decision making about discovered vulnerabilities. Since I took my post as Cybersecurity Coordinator, improving the VEP and ensuring its transparency have been key priorities, and we have spent the last few months reviewing our existing policy in order to improve the process and make key details about the VEP available to the public. Through these efforts, we have validated much of the existing process and ensured a rigorous standard that considers many potential equities.

The interests of all stakeholders must be fairly represented. At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound. These core considerations, which have been incorporated into the VEP Charter, help to standardize the process by which decision makers weigh the benefit to national security and the national interest when deciding whether to disclose or restrict knowledge of a vulnerability.

Accountability of the process and those who operate it is important to establish confidence in those served by it. Our public release of the unclassified portions of the Charter will shed light on aspects of the VEP that were previously shielded from public review, including who participates in the VEP’s governing body, known as the Equities Review Board. We make it clear that departments and agencies with protective missions participate in VEP discussions, as well as other departments and agencies that have broader equities, like the Department of State and the Department of Commerce. We also clarify what categories of vulnerabilities are submitted to the process and ensure that any decision not to disclose a vulnerability will be reevaluated regularly. There are still important reasons to keep many of the specific vulnerabilities evaluated in the process classified, but we will release an annual report that provides metrics about the process to further inform the public about the VEP and its outcomes.

Our system of government depends on informed and vigorous dialogue to discover and make available the best ideas that our diverse society can generate. This publication of the VEP Charter will likely spark discussion and debate. This discourse is important. I also predict that articles will make breathless claims of “massive stockpiles” of exploits while describing the issue. That simply isn’t true. The annual reports and transparency of this effort will reinforce that fact.

Mozilla is pleased with the new charter. I am less so; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes that pretty much anything can fall through:

The United States Government’s decision to disclose or restrict vulnerability information could be subject to restrictions by partner agreements and sensitive operations. Vulnerabilities that fall within these categories will be cataloged by the originating Department/Agency internally and reported directly to the Chair of the ERB. The details of these categories are outlined in Annex C, which is classified. Quantities of excepted vulnerabilities from each department and agency will be provided in ERB meetings to all members.

This is me from last June:

There’s a lot we don’t know about the VEP. The Washington Post says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue — or the Cisco vulnerabilities that the Shadow Brokers leaked last August — to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

I stand by that, and am not sure the new policy changes anything.

More commentary.

Here’s more about the Windows vulnerabilities hoarded by the NSA and released by the Shadow Brokers.

EDITED TO ADD (11/18): More news.

EDITED TO ADD (11/22): Adam Shostack points out that the process does not cover design flaws or trade-offs, and that those need to be covered:

…we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure.

The Decision on Transparency

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/transparency-in-business/


This post by Backblaze’s CEO and co-founder Gleb Budman is the seventh in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency


“Are you crazy?” “Why would you do that?!” “You shouldn’t share that!”

These are just a few of the common questions and comments we heard after posting some of the information we have shared over the years. So was it crazy? Misguided? Should you do it?

With that background, I’d like to dig into the decision to become so transparent: from releasing stats on hard drive failures, to publishing Storage Pod specs and our cloud storage costs, to open sourcing the Reed-Solomon code. What was the thought process behind becoming so transparent when most companies work so hard to hide their inner workings, especially information such as the Storage Pod specs that would normally be considered a proprietary advantage? Most importantly, I’d like to explore the positives and negatives of being so transparent.

Sharing Intellectual Property

The first “transparency” that garnered a flurry of “why would you share that?!” came as a result of us deciding to open source our Storage Pod design: publishing the specs, parts, prices, and how to build it yourself. The Storage Pod was a key component of our infrastructure, gave us a cost (and thus competitive) advantage, took significant effort to develop, and had a fair bit of intellectual property: the “IP.”

The negatives of sharing this are obvious: it allows our competitors to use the design to reduce our cost advantage, and it gives away the IP, which could be patentable or have value as a trade secret.

The positives were certainly less obvious, and at the time we couldn’t have guessed how massive they would be.

We wrestled with the decision: prospective users and others online didn’t believe we could offer our service for such a low price, thinking that we would burn through some cash hoard and then go out of business. We wanted to reassure them, but how?

This is how our response evolved:

We’ve built a lower cost storage platform.
But why would anyone believe us?
Because we’ve designed our own servers and they’re less expensive.
But why would anyone believe they were so low cost and efficient?
Because here’s how much they cost versus others.
But why would anyone believe they cost that little and still enabled us to efficiently store data?
Because here are all the components they’re made of, this is how to build them, and this is how they work.
Ok, you can’t argue with that.

Great — so that would reassure people. But should we do this? Is it worth it?

This was 2009; we were a tiny company of seven people working from our co-founder’s one-bedroom apartment. We decided that the risk of not having potential customers trust us was more impactful than the risk of our competitors possibly deciding to use our server architecture. The former might kill the company in short order; the latter might make it harder for us to compete in the future. Moreover, we figured that most competitors were established on their own platforms and were unlikely to switch to ours, even if it were better.

Takeaway: Build your brand today. There are no assurances you will make it to tomorrow if you can’t make people believe in you today.

A Sharing Success Story — The Backblaze Storage Pod

So with that, we decided to publish everything about the Storage Pod. As for deciding to actually open source it? That was a ‘thank you’ to the open source community upon whose shoulders we stood as we used software such as Linux, Tomcat, etc.

With eight years of hindsight, here’s what happened:

  • As best as I can tell, none of our direct competitors ever used our Storage Pod design, opting instead to continue paying more for commercial solutions.
  • Hundreds of press articles have been written about Backblaze as a direct result of sharing the Storage Pod design.
  • Millions of people have read press articles or our blog posts about the Storage Pods.
  • Backblaze was established as a storage tech thought leader, and a resource for those looking for information in the space.
  • Our blog became viewed as a resource, not a corporate mouthpiece.
  • Recruiting has been made easier through the awareness of Backblaze, the appreciation for us taking on challenging tech problems in interesting ways, and for our openness.
  • Sourcing for our Storage Pods has become easier because we can point potential vendors to our blog posts and say, “here’s what we need.”

And those are just the direct benefits for us. One of the things that warms my heart is that doing this has helped others:

  • Several companies have started selling servers based on our Storage Pod designs.
  • Netflix credits Backblaze with being the inspiration behind their CDN servers.
  • Many schools, labs, and others have shared that they’ve been able to do what they didn’t think was possible because using our Storage Pod designs provided lower-cost storage.
  • And I want to believe that in general we pushed forward the development of low-cost storage servers in the industry.

So overall, the decision on being transparent and sharing our Storage Pod designs was a clear win.

Takeaway: Never underestimate the value of goodwill. It can help build new markets that fuel your future growth and create new ecosystems.

Sharing An “Almost Acquisition”

Acquisition announcements are par for the course. No company, however, talks about the acquisition that fell through. If rumors appear in the press, the company’s response is always, “no comment.” But in 2010, when Backblaze was almost, but not, acquired, we wrote about it in detail. Crazy?

The negatives of sharing this are slightly less obvious, but the two issues most people worried about were: 1) that the fact the company could be acquired would spook customers, and 2) that the fact it wasn’t would signal to potential acquirers that something was wrong.

So, why share this at all? No one was asking “did you almost get acquired?”

First, we had established a culture of transparency and this was a significant event for us, so we defaulted to assuming we would share. Second, we learned that acquisitions fall through all the time, not just during the early fishing stage, but even after term sheets are signed, diligence is done, and all the paperwork is complete. I felt we had learned some things about the process that would be valuable to others who were going through it.

As it turned out, we received emails from startup founders saying they saved the post for the future, and from lawyers, VCs, and advisors saying they shared it with their portfolio companies. Among the most touching emails I received was from a founder who said that after an acquisition fell through she felt so alone that she became incredibly depressed, and that reading our post helped her see that this happens and that things can be OK afterward. Being transparent about almost getting acquired was worth it just to help that one founder.

And what about the concerns? As for spooking customers, maybe some were — but our sign-ups went up, not down, afterward. Any company can be acquired, and many of the world’s largest have been. I believe that being both thoughtful about where to go with it and open about it gave customers a sense that we would do the right thing if it happened. And as for signaling to potential acquirers? The ones I’ve spoken with all knew this happens regularly enough that it’s not a factor.

Takeaway: Being open and transparent is also a form of giving back to others.

Sharing Strategic Data

For years people have been desperate to know how reliable hard drives are. They could go to Amazon for individual reviews, but someone saying “this drive died for me” doesn’t provide statistical insight. Google published a study that showed annualized drive failure rates, but didn’t break down the results by manufacturer or model. Since Backblaze has deployed about 100,000 hard drives to store customer data, we have been able to collect a wealth of data on the reliability of the drives by make, model, and size. Was Backblaze the only one with this data? Of course not — Google, Amazon, Microsoft, and any other cloud-scale storage provider tracked it. Yet none would publish. Should Backblaze?

Again, starting with the main negatives: 1) sharing which drives we liked could increase demand for them, thus reducing availability or increasing prices, and 2) publishing the data might make the drive vendors unhappy with us, thereby making it difficult for us to buy drives.

But we felt that the largest drive purchasers (Amazon, Google, etc.) already had their own stats and would buy the drives they chose, and if individuals or smaller companies used our stats, they wouldn’t sufficiently move the overall market demand. Also, we hoped that the drive companies would see that we were being fair in our analysis and, if anything, would leverage our data to make drives even better.

Again, publishing the data resulted in tremendous value for Backblaze, with millions of people having read the analysis that we put out quarterly. Also, becoming known as the place to go for drive reliability information is a natural fit with being a backup and storage provider. In addition, in a twist from many people’s expectations, some of the drive companies actually started working more closely with us, seeing that we could be a good source of feedback for them. We’ve also seen many individuals and companies make more data-based decisions on which drives to buy, and researchers have used the data for a variety of analyses.
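The quarterly drive stats posts report an annualized failure rate derived from drive-days of operation. As a rough illustration of that kind of calculation (a minimal sketch, not Backblaze’s actual pipeline, and with invented numbers), here is how a per-model annualized failure rate can be computed:

```python
# Toy inputs: observed drive-days and failure counts per model
# (invented numbers for illustration, not real Backblaze data).
stats = {
    "Model A": {"drive_days": 1_200_000, "failures": 55},
    "Model B": {"drive_days": 800_000, "failures": 12},
}

for model, s in sorted(stats.items()):
    # Annualized failure rate: failures per drive-year, expressed as a percentage.
    afr = s["failures"] / s["drive_days"] * 365 * 100
    print(f"{model}: {afr:.2f}% AFR over {s['drive_days']:,} drive-days")
```

The key idea is that every drive contributes one “drive-day” per day it is in service, so models with very different fleet sizes and deployment dates can still be compared on a single annualized rate.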

[Chart: Backblaze blog analytics showing a spike in readership after a hard drive stats post]

Takeaway: Being open and transparent is rarely as risky as it seems.

Sharing Revenue (And Other Metrics)

Journalists always want to publish company revenue and other metrics, and private companies always shy away from sharing. For a long time we did, too. Then, we opened up about that, as well.

The negatives of sharing these numbers are: 1) without the numbers, external parties may assume you’re doing better than you are, and sharing removes that benefit of the doubt; 2) if you share numbers regularly, you may reveal that growth has slowed or worse; and 3) it gives your competitors information to compare their own business to.

We decided that, while some may have perceived we were bigger, our scale was plenty significant. Since we choose what we share and when, it’s up to us whether to disclose at any point. And if our competitors compare, what will they actually change that would affect us?

I did wait to share revenue until I felt I had the right person to write about it. At one point a journalist said she wouldn’t write about us unless I disclosed revenue. I suggested we had a lot to offer for the story, but didn’t want to share revenue yet. She refused to budge and I walked away from the article. Several years later, I reached out to a journalist who had covered Backblaze before and who I felt understood our business, and offered to share revenue with him. He wrote a deep-dive about the company, with revenue being one of the components of the story.

Sharing these metrics showed that we were at scale and running a real business, one with positive unit economics and margins, but not one where we were gouging customers.

Takeaway: Being open with the press about items typically not shared can be uncomfortable, but the press can amplify your story.

Should You Share?

For Backblaze, I believe the results of transparency have been staggering. However, it’s not for everyone. Apple has, clearly, been wildly successful taking secrecy to the extreme. In their case, early disclosure combined with the long cycle of hardware releases could significantly impact sales of current products.


I will argue, however, that for most startups transparency wins. Most startups need to establish credibility and trust, build awareness and a fan base, show that they understand what their customers need and be useful to them, and show the soul and passion behind the company. Some startup companies try to buy these virtues with investor money, and sometimes amplifying your brand via paid marketing helps. But, authentic transparency can build awareness and trust not only less expensively, but more deeply than money can buy.

Backblaze was open from the beginning. With no outside investors, as founders we were able to express ourselves and make our decisions. And it’s easier to be a company that shares if you do it from the start, but for any company, here are a few suggestions:

  1. Ask about sharing: If something significant happens — good or bad — ask “should we share this?” If you made a tough decision, ask “should we share the thinking behind the decision and why it was tough?”
  2. Default to yes: It’s often scary to share, but look for the reasons to say ‘yes,’ not the reasons to say ‘no.’ That doesn’t mean you won’t sometimes decide not to, but make that the high bar.
  3. Minimize reviews: Press releases tend to be sanitized and boring because they’ve been endlessly wordsmithed by committee. Establish the few things you don’t want shared, but minimize the number of people that have to see anything else before it can go out. Teach, then trust.
  4. Engage: Sharing will result in comments on your blog, social, articles, etc. Reply to people’s questions and engage. It’ll make the readers more engaged and give you a better understanding of what they’re looking for.
  5. Accept mistakes: Things will become public that aren’t perfectly sanitized. Accept that and don’t punish people for oversharing.

Building a company culture that is open to sharing takes time, but continuous practice will build it, and over time the company will find its voice and its approach to sharing.


Grafana and Microsoft Azure

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/10/grafana-and-microsoft-azure/

Grafana Launches Microsoft Azure Data Source

Microsoft is a whole new company. Way back in college, I remember that they were vehemently anti-Linux, with Steve Ballmer going so far as to call open source a “cancer”. More recently, I’ve been watching some of their moves and announcements with a sense of astonishment and admiration. I’ve been particularly impressed with the rise of Azure, and how they’ve come to embrace open source and open standards.

We got a chance to talk to the Azure metrics team a few months ago, and they shared some of their strategy and vision for metrics and observability. They’re all about interoperability and making the data easy to consume: whatever is best for the customer.

The Grafana Labs team quickly realized there was a lot of alignment; we both wanted to help Azure users bring their valuable metrics into Grafana. There, they can be unified with other data to get a complete understanding.

Fast forward a couple of short months, and we now have an official Azure metrics data source for Grafana. Other solutions that integrate with Azure generally require you to ETL all that data out, and deal with the associated pain and cost. This plugin just talks directly to Azure metrics, on demand.

The plugin is the result of collaboration between the Microsoft Azure team and Daniel Lee from Grafana Labs. It’s a preview version, so it’s at the beginning of what will hopefully be a fruitful journey. We’d love your feedback to help shape future development. This plugin can be installed into your self-hosted Grafana or GrafanaCloud – check out the plugin page for installation instructions. If you’d like to dig in a bit more and learn how everything fits together, check out the documentation.
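If you manage Grafana programmatically, an installed data source plugin can also be registered through Grafana’s HTTP API rather than the UI. The sketch below is only an assumption-laden example: the plugin type id, the jsonData field names, the URL, and the credentials are all placeholders, not taken from the plugin’s documentation.

```python
import requests

GRAFANA_URL = "http://localhost:3000"   # assumed local Grafana instance
API_KEY = "YOUR_ADMIN_API_KEY"          # placeholder admin API key

# Hypothetical data source payload for the Azure plugin; field names are assumptions.
datasource = {
    "name": "Azure Monitor",
    "type": "grafana-azure-monitor-datasource",   # assumed plugin type id
    "access": "proxy",
    "jsonData": {
        "tenantId": "YOUR_TENANT_ID",
        "clientId": "YOUR_CLIENT_ID",
        "subscriptionId": "YOUR_SUBSCRIPTION_ID",
    },
    "secureJsonData": {"clientSecret": "YOUR_CLIENT_SECRET"},
}

# POST /api/datasources creates a new data source in Grafana.
resp = requests.post(
    f"{GRAFANA_URL}/api/datasources",
    json=datasource,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
print(resp.json())
```

Check the plugin page and documentation linked above for the exact configuration fields the data source expects.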

The Azure team also announced the collaboration on their blog, and the availability of Grafana on the Azure Marketplace.

At Grafana Labs, we’re dead serious about bringing together ALL your time series data, wherever it lives. The Azure data source plugin is the 43rd data source in our growing catalog, and I’m sure it will be really well received by our users!

98, 99, 100 CloudFront Points of Presence!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/98-99-100-cloudfront-points-of-presence/

Nine years ago I showed you how you could Distribute Your Content with Amazon CloudFront. We launched CloudFront in 2008 with 14 Points of Presence and have been expanding rapidly ever since. Today I am pleased to announce the opening of our 100th Point of Presence, the fifth one in Tokyo and the sixth in Japan. With 89 Edge Locations and 11 Regional Edge Caches, CloudFront now supports traffic generated by millions of viewers around the world.

23 Countries, 50 Cities, and Growing
Those 100 Points of Presence span the globe, with sites in 50 cities and 23 countries. In the past 12 months we have expanded the size of our network by about 58%, adding 37 Points of Presence, including nine in the following new cities:

  • Berlin, Germany
  • Minneapolis, Minnesota, USA
  • Prague, Czech Republic
  • Boston, Massachusetts, USA
  • Munich, Germany
  • Vienna, Austria
  • Kuala Lumpur, Malaysia
  • Philadelphia, Pennsylvania, USA
  • Zurich, Switzerland

We have even more in the works, including an Edge Location in the United Arab Emirates, currently planned for the first quarter of 2018.

Innovating for Our Customers
As I mentioned earlier, our network consists of a mix of Edge Locations and Regional Edge Caches. First announced at re:Invent 2016, the Regional Edge Caches sit between our Edge Locations and your origin servers, have even more memory than the Edge Locations, and allow us to store content close to the viewers for rapid delivery, all while reducing the load on the origin servers.

While locations are important, they are just a starting point. We continue to focus on security with the recent launch of our Security Policies feature and our announcement that CloudFront is a HIPAA-eligible service. We gave you more content-serving and content-generation options with the launch of Lambda@Edge, letting you run AWS Lambda functions close to your users.

We have also been working to accelerate the processing of cache invalidations and configuration changes. We now accept invalidations within milliseconds of the request and confirm that the request has been processed world-wide, typically within 60 seconds. This helps to ensure that your customers have access to fresh, timely content!
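For example, an invalidation can be requested through the CloudFront API. Here is a brief boto3 sketch; the distribution ID and paths are placeholders to substitute with your own:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Request invalidation of specific paths on a distribution (placeholder ID).
response = cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/images/*"]},
        # CallerReference must be unique per request; a timestamp works for a demo.
        "CallerReference": str(time.time()),
    },
)
print(response["Invalidation"]["Id"], response["Invalidation"]["Status"])
```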

Visit our Getting Started with Amazon CloudFront page for sign-up information, tutorials, webinars, on-demand videos, office hours, and more.

Jeff;

 

Now Available – Amazon Aurora with PostgreSQL Compatibility

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-aurora-with-postgresql-compatibility/

Late last year I told you about our plans to add PostgreSQL compatibility to Amazon Aurora. We launched the private beta shortly after that announcement, and followed it up earlier this year with an open preview. We’ve received lots of great feedback during the beta and the preview and have done our best to make sure that the product meets your needs and exceeds your expectations!

Now Generally Available
I am happy to report that Amazon Aurora with PostgreSQL Compatibility is now generally available and that you can use it today in four AWS Regions, with more to follow. It is compatible with PostgreSQL 9.6.3 and scales automatically to support up to 64 TB of storage, with 6-way replication behind the scenes to improve performance and availability.

Just like Amazon Aurora with MySQL compatibility, this edition is fully managed and is very easy to set up and to use. On the performance side, you can expect up to 3x the throughput that you’d get if you ran PostgreSQL on your own (you can read Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases to learn more about how we did this).

You can launch a PostgreSQL-compatible Amazon Aurora instance from the RDS Console by selecting Amazon Aurora as the engine and PostgreSQL-compatible as the edition, and clicking on Next:

Then choose your instance class, single or Multi-AZ deployment (good for dev/test and production, respectively), set the instance name, and the administrator credentials, and click on Next:

You can choose between six instance classes (2 to 64 vCPUs and 15.25 to 488 GiB of memory):

The db.r4 instance class is a new addition to Aurora and to RDS, and gives you an additional size at the top end. The db.r4.16xlarge will give you additional write performance, and may allow you to use a single Aurora database instead of two or more sharded databases.

You can also set many advanced options on the next page, starting with network options such as the VPC and public accessibility:

You can set the cluster name and other database options. Encryption is easy to use and enabled by default; you can use the built-in default master key or choose one of your own:

You can also set failover behavior, the retention period for snapshot backups, and choose to enable collection of detailed (OS-level) metrics via Enhanced Monitoring:

After you have set it up to your liking, click on Launch DB Instance to proceed!
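If you prefer to script the launch instead of clicking through the console, the same setup can be expressed with the AWS SDK. A minimal boto3 sketch, with placeholder identifiers and credentials (note that this creates billable resources):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the Aurora PostgreSQL cluster (storage, endpoints, replication).
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-pg-cluster",   # placeholder name
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    MasterUserPassword="ChangeMe12345",           # placeholder credential
)

# Add a primary instance to the cluster, using one of the instance classes above.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-pg-instance-1",
    DBInstanceClass="db.r4.large",
    Engine="aurora-postgresql",
    DBClusterIdentifier="my-aurora-pg-cluster",
)
```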

The new instances (primary and secondary since I specified Multi-AZ) are up and running within minutes:

Each PostgreSQL-compatible instance publishes 44 metrics to CloudWatch automatically:

With enhanced monitoring enabled, each instance collects additional per-instance and per-process metrics. It can be enabled when the instance is launched, or afterward, via Modify Instance. Here are some of the metrics collected when enhanced monitoring is enabled:

Clicking on Manage Graphs lets you choose which metrics are shown:

Per-process metrics are also available:

You can scale your read capacity by creating up to 15 Aurora replicas:

The cluster provides a single reader endpoint that you can access in order to load-balance requests across the replicas:
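As an illustration, an application simply points its read-only connections at that reader endpoint. A minimal sketch assuming the psycopg2 driver, with a placeholder hostname and credentials:

```python
import psycopg2

# The cluster's reader endpoint (hostname is a placeholder); connections to it
# are load-balanced across the Aurora replicas.
conn = psycopg2.connect(
    host="my-aurora-pg-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="ChangeMe12345",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT inet_server_addr();")   # shows which replica served the query
    print(cur.fetchone())
conn.close()
```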

Performance Insights
As I noted earlier, Performance Insights is turned on automatically. This Amazon Aurora feature is wired directly into the database engine and allows you to look deep inside of each query, seeing the database resources that it uses and how they contribute to the overall response time. Here’s the initial view:

I can slice the view by SQL query in order to see how many concurrent copies of each query are running:

There are more views and options than I can fit in this post; to learn more take a look at Using Performance Insights.

Migrating to Amazon Aurora with PostgreSQL Compatibility
AWS Database Migration Service and the Schema Conversion Tool are ready to help you to move data stored in commercial and open-source databases to Amazon Aurora. The Schema Conversion Tool will perform a quick assessment of your database schemas and your code in order to help you to choose between MySQL and PostgreSQL. Our new, limited-time, Free DMS program allows you to use DMS and SCT to migrate to Aurora at no cost, with access to several types of DMS Instances for up to 6 months.

If you are already using PostgreSQL, you will be happy to hear that we support a long list of extensions including PostGIS and dblink.

Available Now
You can use Amazon Aurora with PostgreSQL Compatibility today in the US East (Northern Virginia), EU (Ireland), US West (Oregon), and US East (Ohio) Regions, with others to follow as soon as possible.

Jeff;

Apache OpenOffice 4.1.4 released

Post Syndicated from corbet original https://lwn.net/Articles/736898/rss

The OpenOffice 4.1.4 release is finally available; see this article for some background on this release. The announcement is all bright and sunny, but a look at the August 16 Apache board minutes shows concern about the state of the project. Indeed, the OpenOffice project management committee was, according to these minutes, supposed to post an announcement about the state of the project; it would appear that has not yet happened.

Anti-Piracy Group Joins Internet Organization That Controls Top-Level Domain

Post Syndicated from Andy original https://torrentfreak.com/anti-piracy-group-joins-internet-organization-that-controls-top-level-domain-171019/

All around the world, content creators and rightsholders continue to protest against the unauthorized online distribution of copyrighted content.

While pirating end-users obviously share some of the burden, the main emphasis has traditionally been placed on the shuttering of illicit sites, whether torrent, streaming, or hosting based.

Over time, however, sites have become more prevalent and increasingly resilient, leaving the music, movie and publishing industries to play a frustrating game of whac-a-mole. With this in mind, their focus has increasingly shifted towards Internet gatekeepers, including ISPs and bodies with influence over domain availability.

While most of these efforts take place via cooperation or legal action, there’s regularly conflict when Hollywood, for example, wants a particular domain rendered inaccessible or the music industry wants pirates kicked off the Internet.

As a result, there’s nearly always a disconnect, with copyright holders on one side and Internet technology companies worried about mission creep on the other. In Denmark, however, those lines have just been blurred in the most intriguing way possible after an infamous anti-piracy outfit joined an organization with significant control over the Internet in the country.

RettighedsAlliancen (or Rights Alliance as it’s more commonly known) is an anti-piracy group which counts some of the most powerful local and international movie companies among its members. It also operates on behalf of IFPI and by extension, most of the world’s major recording labels.

The group has been involved in dozens of legal processes over the years against file-sharers and file-sharing sites, most recently fighting for and winning ISP blockades against most major pirate portals including The Pirate Bay, RARBG, Torrentz, and many more.

In a somewhat surprising new announcement, the group has revealed it’s become the latest member of the Danish Internet Forum (DIFO), which “works for a secure and accessible Internet” under the top-level .DK domain. Indeed, DIFO has overall responsibility for Danish internet infrastructure.

“For DIFO it is important to have a strong link to the Danish internet community. Therefore, we are very pleased that the Alliance wishes to be part of the association,” DIFO said in a statement.

Rights Alliance will be DIFO’s third new member this year, but uniquely it will get the opportunity to represent the interests of more than 100,000 Danish and international rightsholders from inside an influential Internet-focused organization.

Looking at DIFO’s membership, Rights Alliance certainly stands out as unusual. The majority of the members are made up of IT-based organizations, such as the Internet Industry Association, The Association of Open Source Suppliers and DKRegistrar, the industry association for Danish domain registrars.

A meeting around a table with these players and their often conflicting interests is likely to be an experience for all involved. However, all parties seem more than happy with the new partnership.

“We want to help create a more secure internet for companies that invest in doing business online, and for users to be safe, so combating digital crime is a key and shared goal,” says Rights Alliance chief, Maria Fredenslund. “I am therefore looking forward to the future cooperation with DIFO.”

Only time will tell how this partnership will play out but if common ground can be found, it’s certainly possible that the anti-piracy scene in Denmark could step up a couple of gears in the future.


Amazon Redshift Dense Compute (DC2) Nodes Deliver Twice the Performance as DC1 at the Same Price

Post Syndicated from Quaseer Mujawar original https://aws.amazon.com/blogs/big-data/amazon-redshift-dense-compute-dc2-nodes-deliver-twice-the-performance-as-dc1-at-the-same-price/

Amazon Redshift makes analyzing exabyte-scale data fast, simple, and cost-effective. It delivers advanced data warehousing capabilities, including parallel execution, compressed columnar storage, and end-to-end encryption as a fully managed service, for less than $1,000/TB/year. With Amazon Redshift Spectrum, you can run SQL queries directly against exabytes of unstructured data in Amazon S3 for $5/TB scanned.

Today, we are making our Dense Compute (DC) family faster and more cost-effective with new second-generation Dense Compute (DC2) nodes at the same price as our previous generation DC1. DC2 is designed for demanding data warehousing workloads that require low latency and high throughput. DC2 features powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks.

We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.

Customer successes

Several flagship customers, ranging from fast-growing startups to large Fortune 100 companies, previewed the new DC2 node type. In their tests, DC2 provided up to twice the performance of DC1. Our preview customers saw faster ETL (extract, transform, and load) jobs, higher query throughput, better concurrency, faster reports, and shorter time to insights—all at the same cost as DC1. DC2.8xlarge customers also noted that their databases used up to 30 percent less disk space due to our optimized storage format, reducing their costs.

4Cite Marketing, one of America’s fastest growing private companies, uses Amazon Redshift to analyze customer data and determine personalized product recommendations for retailers. “Amazon Redshift’s new DC2 node is giving us a 100 percent performance increase, allowing us to provide faster insights for our retailers, more cost-effectively, to drive incremental revenue,” said Jim Finnerty, 4Cite’s senior vice president of product.

BrandVerity, a Seattle-based brand protection and compliance company, provides solutions to monitor, detect, and mitigate online brand, trademark, and compliance abuse. “We saw a 70 percent performance boost with the DC2 nodes for running Redshift Spectrum queries. As a result, we can analyze far more data for our customers and deliver results much faster,” said Hyung-Joon Kim, principal software engineer at BrandVerity.

“Amazon Redshift is at the core of our operations and our marketing automation tools,” said Jarno Kartela, head of analytics and chief data scientist at DNA Plc, one of the leading Finnish telecommunications groups and Finland’s largest cable operator and pay TV provider. “We saw a 52 percent performance gain in moving to Amazon Redshift’s DC2 nodes. We can now run queries in half the time, allowing us to provide more analytics power and reduce time-to-insight for our analytics and marketing automation users.”

You can read about their experiences on our Customer Success page.

Get started

You can try the new node type using our getting started guide. Just choose dc2.large or dc2.8xlarge in the Amazon Redshift console:

If you have a DC1.large Amazon Redshift cluster, you can restore to a new DC2.large cluster using an existing snapshot. To migrate from DS2.xlarge, DS2.8xlarge, or DC1.8xlarge Amazon Redshift clusters, you can use the resize operation to move data to your new DC2 cluster. For more information, see Clusters and Nodes in Amazon Redshift.
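As a sketch of those two migration paths using the AWS SDK (cluster and snapshot identifiers are placeholders, and the resize is expressed here as a node-type change through ModifyCluster; check the Clusters and Nodes documentation for the operation that fits your cluster):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Path 1: restore an existing DC1.large snapshot onto new DC2.large hardware.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-dc2",                 # placeholder new cluster name
    SnapshotIdentifier="analytics-dc1-final-snapshot", # placeholder snapshot
    NodeType="dc2.large",
)

# Path 2: resize a running DS2/DC1.8xlarge cluster onto DC2 nodes.
redshift.modify_cluster(
    ClusterIdentifier="analytics-prod",                # placeholder existing cluster
    ClusterType="multi-node",
    NodeType="dc2.8xlarge",
    NumberOfNodes=4,
)
```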

To get the latest Amazon Redshift feature announcements, check out our What’s New page, and subscribe to the RSS feed.

timeShift(GrafanaBuzz, 1w) Issue 16

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/06/timeshiftgrafanabuzz-1w-issue-16/

Welcome to another issue of TimeShift. In addition to the roundup of articles and plugin updates, we had a big announcement this week – Early Bird tickets to GrafanaCon EU are now available! We’re also accepting CFPs through the end of October, so if you have a topic in mind, don’t wait until the last minute; please send it our way. Speakers who are selected will receive a comped ticket to the conference.


Early Bird Tickets Now Available

We’ve released a limited number of Early Bird tickets before General Admission tickets are available. Take advantage of this discount before they’re sold out!

Get Your Early Bird Ticket Now

Interested in speaking at GrafanaCon? We’re looking for technical and non-technical talks of all sizes. Submit a CFP Now.


From the Blogosphere

Get insights into your Azure Cosmos DB: partition heatmaps, OMS, and More: Microsoft recently announced the ability to access a subset of Azure Cosmos DB metrics via Azure Monitor API. Grafana Labs built an Azure Monitor Plugin for Grafana 4.5 to visualize the data.

How to monitor Docker for Mac/Windows: Brian was tired of guessing about the performance of his development machines and test environment. Here, he shows how to monitor Docker with Prometheus to get a better understanding of a dev environment in his quest to monitor all the things.

Prometheus and Grafana to Monitor 10,000 servers: This article covers enokido’s process of choosing a monitoring platform. He identifies three possible solutions, outlines the pros and cons of each, and discusses why he chose Prometheus.

GitLab Monitoring: It’s fascinating to see Grafana dashboards with production data from companies around the world. For instance, we’ve previously highlighted the huge number of dashboards Wikimedia publicly shares. This week, we found that GitLab also has public dashboards to explore.

Monitoring a Docker Swarm Cluster with cAdvisor, InfluxDB and Grafana | The Laboratory: It’s important to know the state of your applications in a scalable environment such as Docker Swarm. This video covers an overview of Docker, VMs vs. containers, orchestration, and how to monitor Docker Swarm.

Introducing Telemetry: Actionable Time Series Data from Counters: Learn how to use counters from multiple disparate sources, devices, operating systems, and applications to generate actionable time series data.

ofp_sniffer Branch 1.2 (docker/influxdb/grafana) Upcoming Features: This video demo shows off some of the upcoming features for OFP_Sniffer, an OpenFlow sniffer to help network troubleshooting in production networks.


Grafana Plugins

Plugin authors add new features and bugfixes all the time, so it’s important to always keep your plugins up to date. To update plugins from on-prem Grafana, use the grafana-cli tool; if you are using Hosted Grafana, you can update with one click! If you have questions or need help, hit up our community site, where the Grafana team and members of the community are happy to help.

UPDATED PLUGIN

PNP for Nagios Data Source – The latest release for the PNP data source has some fixes and adds a mathematical factor option.

Update

UPDATED PLUGIN

Google Calendar Data Source – This week, there was a small bug fix for the Google Calendar annotations data source.

Update

UPDATED PLUGIN

BT Plugins – Our friends at BT have been busy. All of the BT plugins in our catalog received an update this week. The plugins are the Status Dot Panel, the Peak Report Panel, the Trend Box Panel and the Alarm Box Panel.

Changes include:

  • Custom dashboard links now work in Internet Explorer.
  • The Peak Report panel no longer supports click-to-sort.
  • The Status Dot panel tooltips now look like Grafana tooltips.


This week’s MVC (Most Valuable Contributor)

Each week we highlight some of the important contributions from our amazing open source community. This week, we’d like to recognize a contributor who did a lot of work to improve Prometheus support.

pdoan017
Thanks to Alin Sinpalean for his Prometheus PR that aligns the step and interval parameters. Alin got a lot of feedback from the Prometheus community and spent a lot of time and energy explaining, debating and iterating before the PR was ready.
Thank you!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Wow – Excited to be a part of exploring data to find out how Mexico City is evolving.

We Need Your Help!

Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

Tell Me More


What do you think?

That’s a wrap! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Iran Arrests Six Movie Pirates After Rival ‘Licensed’ Pirates Complain

Post Syndicated from Andy original https://torrentfreak.com/iran-arrests-six-movie-pirates-after-rival-licensed-pirates-complain-171003/

Article 23 of Iran’s Copyright law is quite clear. Anyone who publishes, distributes or broadcasts another person’s work without permission “shall be condemned to corrective imprisonment for a period of time not less than six months and not more than three years.”

That being said, not all content receives protection. Since there are no copyright agreements between Iran and the United States, for example, US content is pirated almost at will in the country. Even the government itself has run ‘warez’ servers in the past.

That makes the arrest late last month of six men tied to movie piracy site TinyMoviez all the more unusual. At first view (translated image below), the site looks just like any other streaming portal offering Hollywood movies.

TinyMoviez

Indeed, much of the content comes from abroad, augmented with local Farsi-language subtitles or audio voiceovers.

However, according to a source cited by the Center for Human Rights in Iran (CHRI), the site was targeted because rival pirate sites (which had been licensed to ‘pirate’ by the Iranian government) complained about its unlicensed status.

“In July and August [2017], there was a meeting between a number of Iranian start-up companies and [current Telecommunications Minister Mohammad Javad Azari] Jahromi, who was asked by film and TV series distributors as well as video game developers to help shut down and monitor unlicensed rivals,” a film distributor in Tehran told CHRI.

“The start-ups made the request because they could not compete with a site like TinyMovies,” the source added. “After that meeting, Jahromi was nicknamed the ‘Start-Up Tsar’ because of his supportive comments. They were happy that he became the minister.”

That being said, the announcement from the authorities suggested broader issues, including that the site offered movies (none are singled out) that may be unacceptable by Iranian standards.

“Tehran’s prosecutor, after referral of the case to the Cyberspace corruption and prostitution department, said that the defendants in the case, of whom six were currently detained, produced vagabond and pornographic films and sold them in cyberspace,” Tehran Prosecutor Abbas Jafari Dowlatabadi said in an announcement.

“This gang illegally operated the largest source for downloading Hollywood movies and over the past three years, has distributed 18,000 foreign films and series after dubbing, many of which were indecent and immoral, and thus facilitated by illegitimate funds.”

While the authorities say that TinyMoviez has been taken down, various URLs (including Tinyz.us, ironically) now divert to a new domain, Timoviez2.net. However, at least for the moment, download links seem to be disabled.


Linux kernel LTS releases are now good for 6 years (ars technica)

Post Syndicated from corbet original https://lwn.net/Articles/735190/rss

Ars technica reports on an announcement that the kernel’s long-term support releases will now be maintained for six years instead of two. “A six-year support window will give Google, SoC Vendors, and OEMs plenty of time to develop a device and get it to market, while still leaving about four years for end-user ownership. Google currently provides two years of major OS updates on its phones and three years of security updates, but if it wanted to extend that, an announcement like this would seem like an important first step.” The kernel.org releases page now shows 4.4 being maintained through February 2022.

HDClub, Russia’s Leading HD-Only Torrent Site, Returns as EliteHD

Post Syndicated from Ernesto original https://torrentfreak.com/hdclub-russias-leading-hd-only-torrent-site-returns-as-elitehd-170930/

With around 170,000 users, HDClub was known for high-quality releases that often leaked to public sites like The Pirate Bay.

Describing itself as “The HighDefinition BitTorrent Community”, HDClub specialized in HD productions including Blu-ray and 3D content, covering movies, TV shows, music videos, and animation.

The site was the largest of its kind in Russia and had been around for a long time. It celebrated its tenth anniversary a few months ago and during this time it amassed over 170,000 members, which is quite significant for a private community.

However, last month the fun was over. As a total surprise to most of the members, HDClub’s operators decided to shut down the site. A Russian language announcement now present on its main page explains the reasons for the site’s demise.

“Recently, we received several dozens of complaints from rightsholders weekly, and our community is subjected to attacks and espionage. In parallel, there is a tightening of Internet legislation in Russia, Ukraine and EU countries,” the announcement explained.

This grim outlook was, however, paired with a glimmer of hope. “There are talks on preserving the heritage of the club,” the site teased.

This was not a false promise, it turned out this week. The former foundation of HDClub now forms the basis of a new tracker. EliteHD takes over where HDClub left off with a working copy of the code, torrents and user database.

“Welcome to the closed tracker elitehd.org. We will try to increase the best HD collection and ensure your safety and confidentiality,” EliteHD’s operators posted in a Russian announcement earlier this week.

“The new site received a full copy of the database and the code of the closed HDClub. The user base has been thoroughly cleaned, there will be no free registration,” it adds.

EliteHD’s torrents

“Thoroughly cleaned” means that around 80,000 accounts were removed and the new maximum is currently set at 100,000 registered users. The torrent database is intact though. There are over 26,000 HD torrents in the database totaling more than 500 terabytes of data.

The site’s operators note that members can continue to seed old torrents as well. All they have to do is change the torrent’s announce URL in their client, and uploads should pick up again.

In recent weeks there have been other private trackers which tried to get former HDClub users on board, but it will be hard to compete with a site that has the real database and code.

EliteHD specifically warns people not to fall for fakes and ‘unofficial’ incarnations of its predecessor. “We strongly recommend that you beware of numerous fake projects and ‘successors’,” the site operators stress.


Of Course Atlus Hit RPCS3’s Patreon Page Over Persona 5

Post Syndicated from Andy original https://torrentfreak.com/of-course-atlus-hit-rpcs3s-patreon-page-over-persona-5-170927/

For the uninitiated, RPCS3 is an open-source Sony PlayStation 3 emulator for PC. This growing and brilliant piece of code was publicly released in 2012 and since then has been under constant development thanks to a decent-sized team of programmers and other contributors.

While all emulation has its challenges, emulating a relatively recent piece of hardware such as Playstation 3 is a massive undertaking. As a result, RPCS3 needs funding. This it achieves through its Patreon page, which currently receives support from 675 patrons to the tune of $3,000 per month.

There’s little doubt that there are plenty of people out there who want the project to succeed. Yesterday, however, things took a turn for the worse when RPCS3 attracted the negative attention of Atlus, the developer behind the utterly beautiful RPG, Persona 5.

According to the RPCS3 team, Atlus filed a DMCA takedown notice with Patreon requesting the removal of the entire RPCS3 page after the team promoted the fact that Persona 5 would be compatible with the under-development emulator.

“The PS3 emulator itself is not infringing on our copyrights and trademarks; however, no version of the P5 game should be playable on this platform; and [the RPCS3] developers are infringing on our IP by making such games playable,” Atlus told Patreon.

Fortunately for everyone involved, Patreon did not storm in and remove the entire page, not least since the page itself didn’t infringe on Atlus’ IP rights. However, Atlus was not happy with the response and attempted to negotiate with the fund-raising platform, noting that in order for Persona 5 to work, the user would have to circumvent the game’s DRM protections.

The RPCS3 team, on the other hand, believe they’re on solid ground, noting that where their main developers live, it is legal to make personal copies of legally purchased games. They concede it may not be legal for everyone, but in any event, that would be irrelevant to the DMCA notice filed against their Patreon page. Indeed, trying to take down an entire fundraiser with a DMCA notice was a significant overreach under the circumstances.

According to a statement from the team, a decision was ultimately taken to proceed with caution. In order to avoid a full takedown of their Patreon page, all mentions of Persona 5 were removed from both the fund-raiser and the main RPCS3 site yesterday.

The RPCS3 team noted that they had no idea why Atlus targeted their project, but an announcement from the developer later shed a little light on the issue.

“We believe that our fans best experience our titles (like Persona 5) on the actual platforms for which they are developed. We don’t want their first experiences to be framerate drops, or crashes, or other issues that can crop up in emulation that we have not personally overseen,” Atlus explained.

While some gamers expressed negative opinions over Atlus’ undoubtedly overbroad actions yesterday, it’s difficult to argue with the developer’s main point. Emulators can be beautiful things but there is no doubt that in many instances they don’t recreate the gaming experience perfectly. Indeed, in some cases when things don’t go to plan, the results can be pretty horrible.

That being said, for whatever reason Atlus has chosen not to release a PC version of this popular title so, as many hardcore emulator fans will tell you (this one included), that’s a bit of a red rag to a bull. The company suggests that it might remedy that situation in the future though, so maybe that’s some consolation.

In the meantime, there’s a significant backlash against Atlus and what it attempted to do to the RPCS3 project and its fund-raising efforts. Some people are threatening never to buy an Atlus game ever again, for example, and that’s their prerogative.

But really – is anyone truly surprised that Atlus reacted in the way it did?

While Persona 5 isn’t available on PC yet, this isn’t an out-of-print game from 1982 that’s about to disappear into the black hole of time because there’s no hardware to play it on. This is a game created for relatively current hardware (bang up to date if you include the PS4 version) that was released in April 2017 in the United States, just a handful of months ago.

As such, none of the usual ‘moral’ motivations for emulating games on other platforms exist for Persona 5, and for that reason alone, the decision to feature it so heavily in RPCS3 fund-raising efforts was bound to backfire. It doesn’t matter whether emulation or dumping of ROMs is legal in some regions; any company can be expected to wade in when someone threatens its business model.

The stark reality is that when they do, entire projects can be put at risk. In this case, Patreon stepped in to save the day but it could’ve been a lot worse. Martyring the whole project for one game would’ve been a disaster for the team and the public. All that being said, Atlus is unlikely to come out of this on top.

“Whatever people may wish, there’s no way to stop any playable game from being executed on the emulator,” the RPCS3 team note.

“Blacklisting the game? RPCS3 is open-source, any attempt would easily be reversed. Attempting to take down the project? At the time of this post, this and many other games were already playable to their full extent, and again, RPCS3 is and will always be an open-source project.”

The bottom line here is that Atlus’ actions may have left a bit of a bad taste in the mouths of some gamers, but even the most hardcore emulator fan shouldn’t be surprised the company went for the throat on a game so fresh. That being said, there are lessons to be learned.

Atlus could’ve spoken quietly to RPCS3 first, but chose not to. RPCS3, on the other hand, will probably be a little bit more strategic with future game compatibility announcements, given what’s just happened. In the long term, that will help them, since it will ensure longevity for the project.

RPCS3 is needed, there’s no doubt about that, but its true value will only be felt when the PS3 has been consigned to history. At that point people will understand why it was worth all the effort – and the occasional hiccup.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Cryptocurrency Miner Targeted by Anti-Virus and Adblock Tools

Post Syndicated from Ernesto original https://torrentfreak.com/cryptocurrency-miner-targeted-by-anti-virus-and-adblock-tools-170926/

Earlier this month The Pirate Bay caused some uproar by adding a Javascript-based cryptocurrency miner to its website.

The miner utilizes CPU power from visitors to generate Monero coins for the site, providing an extra revenue source.

While The Pirate Bay only tested the option briefly, it inspired many others to follow suit. Streaming-related sites such as Alluc, Vidoza, and Rapidvideo jumped on board, and the torrent site Demonoid also ran some tests.

During the weekend, Coinhive’s miner code even appeared on the official website of Showtime. The code was quickly removed and it’s still unclear how it got there, as the company refuses to comment. It’s clear, though, that miners are a hot topic thanks to The Pirate Bay.

The revenue potential is also real. TorrentFreak spoke to Vidoza, who say that with 30,000 users online throughout the day (two million unique visitors), they can make between $500 and $600, and that’s with the miner throttled at 50%. Although ads can bring in more, it’s not an insignificant sum.
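
To put that claim in perspective, here is a back-of-the-envelope sketch of the arithmetic behind browser-mining revenue. Every constant is a made-up assumption chosen purely for illustration (per-visitor hash rate, network hash rate, coin emission, exchange rate, and the mining service’s cut), and the result reads as a daily figure only if Vidoza’s numbers are daily too; the point is simply how the pieces multiply together.

```python
# Rough browser-mining revenue estimate. All constants are hypothetical
# assumptions for illustration only; real figures vary with hardware,
# the Monero network hash rate, the XMR price, and the service's cut.
CONCURRENT_USERS  = 30_000   # visitors mining at any given moment
HASHES_PER_SECOND = 40       # per visitor, per CPU (assumption)
THROTTLE          = 0.5      # miner limited to 50% CPU
NETWORK_HASHRATE  = 300e6    # total Monero network H/s (assumption)
XMR_MINED_PER_DAY = 4_600    # approximate daily coin emission (assumption)
USD_PER_XMR       = 95       # assumed exchange rate
OPERATOR_CUT      = 0.30     # share kept by the mining service (assumption)

site_hashrate = CONCURRENT_USERS * HASHES_PER_SECOND * THROTTLE
share_of_network = site_hashrate / NETWORK_HASHRATE
gross_usd_per_day = share_of_network * XMR_MINED_PER_DAY * USD_PER_XMR
net_usd_per_day = gross_usd_per_day * (1 - OPERATOR_CUT)

print(f"Estimated revenue: ~${net_usd_per_day:.0f} per day")
```

With these particular assumptions the estimate happens to land in the same ballpark as the quoted range, but small changes to any input move it considerably.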

That said, all the uproar about cryptocurrency miners and their potential for abuse has also attracted the attention of ad-blockers. Some people have coded new browser add-ons specifically to block miners, and the popular uBlock Origin has added Coinhive to its default blocklist as well. And that’s after just a few days.

Needless to say, this limits the number of miners, and thus the money that comes in. And there’s another problem with a similar effect.

In addition to ad-blockers, anti-virus tools are also flagging Coinhive. Malwarebytes is one of the companies that flags it as malicious activity, warning users about the threat.

The anti-virus angle is one of the issues that worries Demonoid’s operator. The site is used to ad-blockers, but getting flagged by anti-virus companies is of a different order.

“The problem I see there and the reason we will likely discontinue [use of the miner] is that some anti-virus programs block it, and that might get the site on their blacklists,” Deimos informs TorrentFreak.

(Image: Demonoid’s miner announcement)

Vidoza operator Eugene sees all the blocking as an unwelcome development and hopes that Coinhive will tackle it. Coinhive may want to come out publicly and start discussing the issue with ad-blockers and anti-virus companies, he says.

“They should find out under what conditions all these guys will stop blocking the script,” he notes.

The other option would be to circumvent the blocking through proxies and circumvention tools, but that might not be the best choice in the long run.

Coinhive, meanwhile, has chimed in as well. The company says that it wasn’t properly prepared for the massive attention and understands why some ad-blockers have put it on the blacklist.

“Providing a real alternative to ads and users who block them turned out to be a much harder problem. Coinhive, too, is now blocked by many ad-block browser extensions, which – we have to admit – is reasonable at this point.”

Most complaints have been targeted at sites that implemented the miner without the user’s consent. Coinhive doesn’t like this either and will take steps to prevent it in future.

“We’re a bit saddened to see that some of our customers integrate Coinhive into their pages without disclosing to their users what’s going on, let alone asking for their permission,” the Coinhive team notes.

The crypto miner provider is working on a new implementation that requires explicit consent from website visitors in order to run. This should deal with most of the negative responses.

If users start mining voluntarily, then ad-blockers and anti-virus companies should no longer have a reason to block the script. Nor will it be easy for malware peddlers to abuse it.

To be continued.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

In the Works – AWS Region in the Middle East

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-the-middle-east/

Last year we launched new AWS Regions in Canada, India, Korea, the UK, and the United States, and announced that new regions are coming to China, France, Hong Kong, Sweden, and a second GovCloud Region in the US throughout 2017 and 2018.

Middle East Region by Early 2019
Today, I am happy to announce that we will be opening an AWS Region in the Middle East by early 2019. The new Region will be based in Bahrain, will consist of three Availability Zones at launch, and will give AWS customers and partners the ability to run their workloads and store their data in the Middle East.

AWS customers are already making use of 44 Availability Zones across 16 geographic regions. Today’s announcement brings the total number of global regions (operational and in the works) up to 22.
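
For readers who want to track that growth from their own account as new Regions come online, here is a minimal sketch using boto3’s EC2 DescribeRegions call. The client region is an arbitrary choice, and newer Regions may need to be enabled (opted in) for the account before they show up.

```python
# List the AWS Regions currently visible to this account.
# Assumes boto3 is installed and credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = ec2.describe_regions()["Regions"]

for name in sorted(r["RegionName"] for r in regions):
    print(name)

print(f"{len(regions)} Regions visible to this account")
```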

UAE Edge Location in 2018
We also plan to open an edge location in the UAE in the first quarter of 2018. This will bring Amazon CloudFront, Amazon Route 53, AWS Shield, and AWS WAF to the region, adding to our existing set of 78 points of presence world-wide.

These announcements add to our continued investment in the Middle East. Earlier this year we announced the opening of AWS offices in Dubai, UAE and Manama, Bahrain. Prior to this, we supported the growth of technology education in the region with AWS Educate and, for many years, have supported new businesses through AWS Activate.

The addition of AWS infrastructure in the Middle East will help countries across the region to innovate, grow their economies, and pursue their vision plans (Saudi Vision 2030, UAE Vision 2021, Bahrain Vision 2030, and so forth).

Talk to Us
As always, we are looking forward to serving new and existing customers in the Middle East and working with partners across the region. Of course, the new Region will also be open to existing AWS customers who would like to serve users in the Middle East.

To learn more about the AWS Middle East Region, feel free to contact our team at [email protected].

If you are interested in joining the team and would like to learn more about AWS positions in the Middle East, take a look at the Amazon Jobs site.

Jeff;

Russia’s Largest Torrent Site Celebrates 13 Years Online in a Chinese Restaurant

Post Syndicated from Andy original https://torrentfreak.com/russias-largest-torrent-site-celebrates-13-years-online-in-a-chinese-restaurant-170923/

For most torrent fans around the world, The Pirate Bay is the big symbol of international defiance. Over the years the site has fought, dodged, and thumbed its nose at dozens of battles, yet it still remains online today.

But there is another site, located somewhere in the east, that has been online for nearly as long, has millions more registered members, and has proven just as defiant.

RuTracker, for those who haven’t yet found it, is a Russian-focused treasure trove of both local and international content. For many years the site was frequented only by native speakers, but with the wonders of tools like Google Translate, anyone can use it at the flick of a switch. When people are struggling to find content, it’s likely that RuTracker has it.

This position has attracted the negative attention of a wide range of copyright holders and thanks to legislation introduced during 2013, the site is now subject to complete blocking in Russia. In fact, RuTracker has proven so stubborn to copyright holder demands, it is now permanently blocked in the region by all ISPs.

Surprisingly, especially given the enthusiasm for blockades among copyright holders, this doesn’t seem to have dampened demand for the site’s services. According to SimilarWeb, against all the odds, the site is still pulling in around 90 million visitors per month. But the impressive stats don’t stop there.

(Image: Impressive stats for a permanently blocked site)

This week, RuTracker celebrates its 13th birthday, a relative lifetime for a site that has been front and center in Russia’s most significant copyright battles, trouble that doesn’t look like stopping anytime soon.

Back in 2010, for example, RU-Center, Russia’s largest domain name registrar and web-hosting provider, pulled the plug on the site’s former Torrents.ru domain. The Director of Public Relations at RU-Center said that the domain had been blocked on the orders of the Investigative Division of the regional prosecutor’s office in Moscow. The site never got its domain back but carried on regardless, despite the setbacks.

Back then the site had around 4,000,000 members but now, seven years on, its ranks have swelled to a reported 15,382,907. According to figures published by the site this week, 778,317 of those members signed up this year during a period the site was supposed to be completely inaccessible. Needless to say, its operators remain defiant.

“Today we celebrate the 13th anniversary of our tracker, which is the largest Russian-language (and not only Russian) media library on this planet. A tracker strangely banished in the country where most of its audience is located – Russia,” a site announcement reads.

“But, despite the prohibitions, with all these legislative obstacles, with all these technical difficulties, we see that our tracker still exists and is successfully developing. And we still believe that the library should be open and free for all, and not be subject to censorship or a victim of legislative and executive power lobbied by the monopolists of the media industry.”

It’s interesting to note the tone of the RuTracker announcement. On any other day it could’ve been written by the crew of The Pirate Bay who, in their prime, loved to stick a finger or two up to the copyright lobby and then rub their noses in it. For the team at RuTracker, that still appears to be one of the main goals.

Like The Pirate Bay but unlike many of the basic torrent indexers that have sprung up in recent years, RuTracker relies on users to upload its content. They certainly haven’t been sitting back. RuTracker reveals that during the past year and despite all the problems, users uploaded a total of 171,819 torrents – on average, 470 torrents per day.

Interestingly, the content most uploaded to the site also points to the growing internationalization of RuTracker. During the past year, the NBA / NCAA section proved most popular, closely followed by non-Russian rock music and NHL games. Non-Russian movies accounted for almost 2,000 fresh torrents in just 12 months.

“It is thanks to you this tracker lives!” the site’s operators informed the users.

“It is thanks to you that it was, is, and, for sure, will continue to offer the most comprehensive, diverse and, most importantly, quality content in the Russian Internet. You stayed with us when the tracker lost its original name: torrents.ru. You stayed with us when access to a new name was blocked in Russia: rutracker.org. You stayed with us when [the site’s trackers] were blocked. We will stay with you as long as you need us!”

So as RuTracker plans for another year online, all that remains is to celebrate its 13th birthday in style. That will happen tonight, when every adult member of RuTracker is invited to enjoy a Chinese meal at the Tian Jin Chinese Restaurant in St. Petersburg.

Turn up early; seating is limited.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.