Tag Archives: Technical

Don Jr.: I’ll bite

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/don-jr-ill-bite.html

So Don Jr. tweets the following, which is an excellent troll, so I thought I’d bite. The reason is that I just got through debunking Democrat claims about NetNeutrality, and it seems like a good time to balance things out and debunk Trump nonsense.

The issue here is not which side is right. The issue here is whether you stand for truth, or whether you’ll seize any factoid that appears to support your side, regardless of its truthfulness. The ACLU obviously chose falsehoods, as I documented. In the following tweet, Don Jr. does the same.

It’s a preview of the hyperpartisan debates you are likely to have across the dinner table tomorrow, with each side trying to outdo the other in the falsehoods they’ll claim.

What we see in these numbers is a steady trend since the Great Recession, with no evidence in the graphs that Trump has influenced them one way or the other.

Stock markets at all time highs

This is true, but it’s obviously not due to Trump. The stock markets have been steadily rising since the Great Recession. Trump has done nothing substantive to change the market’s trajectory, nor has he inspired the market to change its direction.
To be fair to Don Jr., we’ve all been crediting (or blaming) presidents for changes in the stock market despite the fact that they have almost no influence over it. Presidents don’t run the economy; it’s an inappropriate conceit. The most influence they’ve had is in harming it.

Lowest jobless claims since 73

Again, let’s graph this:

As we can see, jobless claims have been on a smooth downward trajectory since the Great Recession. It’s difficult to see here how President Trump has influenced these numbers.
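
(If you want to check this yourself, the graph takes only a few lines of Python to reproduce. Below is a minimal sketch, assuming the pandas_datareader and matplotlib packages are installed; ‘ICSA’ is, to the best of my knowledge, FRED’s series code for weekly initial jobless claims.)

import matplotlib.pyplot as plt
import pandas_datareader.data as web

# Pull weekly initial jobless claims from FRED, starting before the
# Great Recession so the long-term trend is visible.
claims = web.DataReader('ICSA', 'fred', start='2007-01-01')
claims.plot(legend=False)
plt.title('US weekly initial jobless claims')
plt.ylabel('Claims per week')
plt.show()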

6 Trillion added to the economy

What he’s referring to is that assets have risen in value, like the stock market, homes, gold, and even Bitcoin.
But this is a well-known fallacy called Mercantilism: the belief that the “economy” is measured by the value of its assets. This was debunked by Adam Smith in his book “The Wealth of Nations”, where he showed instead that the “economy” is measured by how much it produces (GDP, or Gross Domestic Product), not by its assets.
GDP has grown at 3.0%, which is pretty good compared to the long term trend, and is better than Europe or Japan (though not as good as China). But Trump doesn’t deserve any credit for this — today’s rise in GDP is the result of stuff that happened years ago.
Assets have risen by $6 trillion, but that’s not a good thing. After all, when you sell your home for more money, the buyer has to pay more. So one person is better off and one is worse off, so the net effect is zero.
Actually, such asset price increases are a worrisome indicator — we are entering into bubble territory. They are the result of loose monetary policy, the low interest rates and “quantitative easing” designed under the Obama administration to stimulate the economy. That’s why all assets are rising in value. Normally, a rise in one asset means a fall in another, like selling gold to pay for houses. But because of loose monetary policy, all assets are increasing in price. The amazing rise in Bitcoin over the last year is as much a result of this bubble growing in all assets as it is of an exuberant belief in Bitcoin.
When this bubble collapses, which may happen during Trump’s term, it’ll really be the Obama administration who is to blame. I mean, if Trump is willing to take credit for the asset price bubble now, I’m willing to give it to him, as long as he accepts the blame when it crashes.

1.5 million fewer people on food stamps

As you’d expect, I’m going to debunk this with a graph: the numbers have been falling since the Great Recession. Indeed, in the equivalent period under Obama, 1.9 million fewer people were on food stamps, so if anything Trump’s performance is slightly behind Obama’s rather than ahead of it. Of course, neither president is really responsible.

Consumer confidence through the roof

Again we are going to graph this number:

Again we find nothing in the graph that suggests President Trump is responsible for any change — it’s been improving steadily since the Great Recession.

One thing to note is that, technically, it’s not “through the roof” — it’s still quite a bit below the roof set during the dot-com era.

Lowest Unemployment rate in 17 years

Again, let’s simply graph it over time and look for Trump’s contribution. As we can see, there doesn’t appear to be anything special Trump has done — unemployment has been steadily improving since the Great Recession.
But here’s the thing: the “unemployment rate” only measures those looking for work, not those who have given up. The number that concerns people more is the “labor force participation rate”. The Great Recession kicked a lot of workers out of the economy.
Mostly this is because Baby Boomers are now retiring and leaving the workforce, and some have chosen to retire early rather than look for another job. But there are still other problems in our economy that contribute to this. President Trump has done nothing in particular to solve these problems.

Conclusion

As we see, Don Jr’s tweet is a troll. When we look at the graphs of these indicators going back to the Great Recession, we don’t see how President Trump has influenced anything. The improvements this year are in line with the improvements last year, which are in turn in line with the improvements of the previous year.
To be fair, all parties credit their President with improvements during their term. President Obama’s supporters did the same thing. But at least right now, with these numbers, we can see that there’s no merit to anything in Don Jr’s tweet.
The hyperpartisan rancor in this country exists because neither side cares about the facts. We should care. We should care that these claims suck, even if we are Republicans. Conversely, we should care that those NetNeutrality claims by Democrats suck, even if we are Democrats.

Game of Thrones Leaks “Carried Out By Former Iranian Military Hacker”

Post Syndicated from Andy original https://torrentfreak.com/game-of-thrones-leaks-carried-out-by-former-iranian-military-hacker-171122/

In late July it was reported that hackers had stolen proprietary information from media giant HBO.

The haul was said to include confidential details of the then-unreleased fourth episode of the latest Game of Thrones season, plus episodes of Ballers, Barry, Insecure, and Room 104.

“Hi to all mankind,” an email sent to reporters read. “The greatest leak of cyber space era is happening. What’s its name? Oh I forget to tell. Its HBO and Game of Thrones……!!!!!!”

In follow-up correspondence, the hackers claimed to have penetrated HBO’s internal network, gaining access to emails, technical platforms, and other confidential information.

Image released by the hackers

Soon after, HBO chairman and CEO Richard Plepler confirmed a breach at his company, telling employees that there had been a “cyber incident” in which information and programming had been taken.

“Any intrusion of this nature is obviously disruptive, unsettling, and disturbing for all of us. I can assure you that senior leadership and our extraordinary technology team, along with outside experts, are working round the clock to protect our collective interests,” he said.

During mid-August, problems persisted, with unreleased shows hitting the Internet. HBO appeared rattled by the ongoing incident, refusing to comment to the media on every new development. Now, however, it appears the tide is turning on HBO’s foe.

In a statement last evening, Joon H. Kim, Acting United States Attorney for the Southern District of New York, and William F. Sweeney Jr., Assistant Director-in-Charge of the New York Field Division of the FBI, announced the unsealing of an indictment charging a 29-year-old man with offenses carried out against HBO.

“Behzad Mesri, an Iranian national who had previously hacked computer systems for the Iranian military, allegedly infiltrated HBO’s systems, stole proprietary data, including scripts and plot summaries for unaired episodes of Game of Thrones, and then sought to extort HBO of $6 million in Bitcoins,” Kim said.

“Mesri now stands charged with federal crimes, and although not arrested today, he will forever have to look over his shoulder until he is made to face justice. American ingenuity and creativity is to be cultivated and celebrated — not hacked, stolen, and held for ransom. For hackers who test our resolve in protecting our intellectual property — even those hiding behind keyboards in countries far away — eventually, winter will come.”

According to the Department of Justice, Mesri honed his computer skills working for the Iranian military, conducting cyber attacks against enemy military systems, nuclear software, and Israeli infrastructure. He was also a member of the Turk Black Hat hacking team which defaced hundreds of websites with the online pseudonym “Skote Vahshat”.

The indictment states that Mesri began his campaign against HBO during May 2017, when he conducted “online reconnaissance” of HBO’s networks and employees. Between May and July, he then compromised a number of HBO employee user accounts and used them to access the company’s data and TV shows, copying them to his own machines.

After allegedly obtaining around 1.5 terabytes of HBO’s data, Mesri then began to extort HBO, warning that unless a ransom of $5.5 million was paid in Bitcoin, the leaking would begin. When the amount wasn’t paid, three days later Mesri told HBO that the amount had risen to $6m and that, as an additional punishment, data could be wiped from HBO’s servers.

Subsequently, on or around July 30 and continuing through August 2017, Mesri allegedly carried through with his threats, leaking information and TV shows online and promoting them via emails to members of the press.

As a result of the above, Mesri is charged with one count of wire fraud, which carries a maximum sentence of 20 years in prison, one count of computer hacking (five years), three counts of threatening to impair the confidentiality of information (five years each), and one count of interstate transmission of an extortionate communication (two years). No copyright infringement offenses are mentioned in the indictment.

The big question now is whether the US will ever get its hands on Mesri. The answer to that, at least through any official channels, seems to be a resounding no. There is no extradition treaty between the US and Iran, meaning that if Mesri stays put, he’s likely to remain a free man.

Wanted


On Architecture and the State of the Art

Post Syndicated from Philip Fitzsimons original https://aws.amazon.com/blogs/architecture/on-architecture-and-the-state-of-the-art/

On the AWS Solutions Architecture team we know we’re following in the footsteps of other technical experts who pulled together the best practices of their eras. Around 22 BC the Roman Architect Vitruvius Pollio wrote On architecture (published as The Ten Books on Architecture), which became a seminal work on architectural theory. Vitruvius captured the best practices of his contemporaries and those who went before him (especially the Greek architects).

Closer to our time, in 1910, another technical expert, Henry Harrison Suplee, wrote Gas Turbine: progress in the design and construction of turbines operated by gases of combustion, from which we believe the phrase “state of the art” originates:

It has therefore been thought desirable to gather under one cover the most important papers which have appeared upon the subject of the gas turbine in England, France, Germany, and Switzerland, together with some account of the work in America, and to add to this such information upon actual experimental machines as can be secured.

In the present state of the art this is all that can be done, but it is believed that this will aid materially in the conduct of subsequent work, and place in the hands of the gas-power engineer a collection of material not generally accessible or available in convenient form.  

Source

Both authors wrote books that captured the current knowledge on design principles and best practices (in architecture and engineering) to improve awareness and adoption. Like these authors, we at AWS believe that capturing and sharing best practices leads to better outcomes. This follows a pattern we established internally, in our Principal Engineering community. In 2012 we started an initiative called “Well-Architected” to help share the best practices for architecting in the cloud with our customers.

Every year AWS Solution Architects dedicate hundreds of thousands of hours to helping customers build architectures that are cloud native. Through customer feedback, and real world experience we see what strategies, patterns, and approaches work for you.

“After our well-architected review and subsequent migration to the cloud, we saw the tremendous cost-savings potential of Amazon Web Services. By using the industry-standard service, we can invest the majority of our time and energy into enhancing our solutions. Thanks to (consulting partner) 1Strategy’s deep, technical AWS expertise and flexibility during our migration, we were able to leverage the strengths of AWS quickly.”

Paul Cooley, Chief Technology Officer for Imprev

This year we have again refreshed the AWS Well-Architected Framework, with a particular focus on Operational Excellence. Last year we announced the addition of Operational Excellence as a new pillar of the AWS Well-Architected Framework. Having carried out thousands of reviews since then, we reexamined the pillar and are pleased to announce some significant changes. First, the pillar dives more deeply into people and process, because this is an area where we see the most opportunities for teams to improve. Second, we’ve pivoted heavily to focusing on whether your team and your workload are ready for runtime operations. Key to this is ensuring that in the early phases of design you think about how your architecture will be operated. Reflecting on this, we realized that Operational Excellence should be the first pillar to support the “Architect for run-time operations” approach.

We’ve also added detail on how Amazon approaches technology architecture, covering topics such as our Principal Engineering Community and two-way doors and mechanisms. We refreshed the other pillars to reflect the evolution of AWS, and the best practices we are seeing in the field. We have also added detail on the review process, in the surprisingly named “The Review Process” section.

As part of refreshing the pillars, we have also released a new Operational Excellence Pillar whitepaper, and have updated the whitepapers for all of the other pillars of the Framework. For example, we have significantly updated the Reliability Pillar whitepaper to provide guidance on application design for high availability. New sections cover techniques for high availability, both including and beyond infrastructure implications, and considerations across the application lifecycle. This updated whitepaper also provides examples that show how to achieve availability goals in single and multi-region architectures.

You can find free training and all of the “state of the art” whitepapers on the AWS Well-Architected homepage.

Philip Fitzsimons, Leader, AWS Well-Architected Team

Amazon QuickSight Update – Geospatial Visualization, Private VPC Access, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-quicksight-update-geospatial-visualization-private-vpc-access-and-more/

We don’t often recognize or celebrate anniversaries at AWS. With nearly 100 services on our list, we’d be eating cake and drinking champagne several times a week. While that might sound like fun, we’d rather spend our working hours listening to customers and innovating. With that said, Amazon QuickSight has now been generally available for a little over a year and I would like to give you a quick update!

QuickSight in Action
Today, tens of thousands of customers (from startups to enterprises, in industries as varied as transportation, legal, mining, and healthcare) are using QuickSight to analyze and report on their business data.

Here are a couple of examples:

Gemini provides legal evidence procurement for California attorneys who represent injured workers. They have gone from creating custom reports and running one-off queries to creating and sharing dynamic QuickSight dashboards with drill-downs and filtering. QuickSight is used to track sales pipeline, measure order throughput, and to locate bottlenecks in the order processing pipeline.

Jivochat provides a real-time messaging platform to connect visitors to website owners. QuickSight lets them create and share interactive dashboards while also providing access to the underlying datasets. This has allowed them to move beyond the sharing of static spreadsheets, ensuring that everyone is looking at the same data and is empowered to make timely decisions based on current data.

Transfix is a tech-powered freight marketplace that matches loads and increases visibility into logistics for Fortune 500 shippers in retail, food and beverage, manufacturing, and other industries. QuickSight has made analytics accessible to both BI engineers and non-technical business users. They scrutinize key business and operational metrics, including shipping routes, carrier efficiency, and process automation.

Looking Back / Looking Ahead
The feedback on QuickSight has been incredibly helpful. Customers tell us that their employees are using QuickSight to connect to their data, perform analytics, and make high-velocity, data-driven decisions, all without setting up or running their own BI infrastructure. We love all of the feedback that we get, and use it to drive our roadmap, leading to the introduction of over 40 new features in just a year. Here’s a summary:

Looking forward, we are watching an interesting trend develop within our customer base. As these customers take a close look at how they analyze and report on data, they are realizing that a serverless approach offers some tangible benefits. They use Amazon Simple Storage Service (S3) as a data lake and query it using a combination of QuickSight and Amazon Athena, giving them agility and flexibility without static infrastructure. They also make great use of QuickSight’s dashboards feature, monitoring business results and operational metrics, then sharing their insights with hundreds of users. You can read Building a Serverless Analytics Solution for Cleaner Cities and review Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight if you are interested in this approach.

New Features and Enhancements
We’re still doing our best to listen and to learn, and to make sure that QuickSight continues to meet your needs. I’m happy to announce that we are making seven big additions today:

Geospatial Visualization – You can now create geospatial visuals on geographical data sets.

Private VPC Access – You can now sign up to access a preview of a new feature that allows you to securely connect to data within VPCs or on-premises, without the need for public endpoints.

Flat Table Support – In addition to pivot tables, you can now use flat tables for tabular reporting. To learn more, read about Using Tabular Reports.

Calculated SPICE Fields – You can now perform run-time calculations on SPICE data as part of your analysis. Read Adding a Calculated Field to an Analysis for more information.

Wide Table Support – You can now use tables with up to 1000 columns.

Other Buckets – You can summarize the long tail of high-cardinality data into buckets, as described in Working with Visual Types in Amazon QuickSight.

HIPAA Compliance – You can now run HIPAA-compliant workloads on QuickSight.

Geospatial Visualization
Everyone seems to want this feature! You can now take data that contains a geographic identifier (country, city, state, or zip code) and create beautiful visualizations with just a few clicks. QuickSight will geocode the identifier that you supply, and can also accept lat/long map coordinates. You can use this feature to visualize sales by state, map stores to shipping destinations, and so forth. Here’s a sample visualization:

To learn more about this feature, read Using Geospatial Charts (Maps), and Adding Geospatial Data.

Private VPC Access Preview
If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources. Here’s what it looks like:

If you are ready to join the preview, you can sign up today.

Jeff;

AWS Achieves FedRAMP JAB Moderate Provisional Authorization for 20 Services in the AWS US East/West Region

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-moderate-authorization-for-20-services-in-us-eastwest/

The AWS US East/West Region has received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) at the Federal Risk and Authorization Management Program (FedRAMP) Moderate baseline.

Though AWS has maintained an AWS US East/West Region Agency-ATO since early 2013, this announcement represents AWS’s carefully deliberated move to the JAB for the centralized maintenance of our P-ATO for 10 services already authorized. This also includes the addition of 10 new services to our FedRAMP program (see the complete list of services below). This doubles the number of FedRAMP Moderate services available to our customers to enable increased use of the cloud and support modernized IT missions. Our public sector customers now can leverage this FedRAMP P-ATO as a baseline for their own authorizations and look to the JAB for centralized Continuous Monitoring reporting and updates. In a significant enhancement for our partners that build their solutions on the AWS US East/West Region, they can now achieve FedRAMP JAB P-ATOs of their own for their Platform as a Service (PaaS) and Software as a Service (SaaS) offerings.

In line with FedRAMP security requirements, our independent FedRAMP assessment was completed in partnership with a FedRAMP accredited Third Party Assessment Organization (3PAO) on our technical, management, and operational security controls to validate that they meet or exceed FedRAMP’s Moderate baseline requirements. Effective immediately, you can begin leveraging this P-ATO for the following 20 services in the AWS US East/West Region:

  • Amazon Aurora (MySQL)*
  • Amazon CloudWatch Logs*
  • Amazon DynamoDB
  • Amazon Elastic Block Store
  • Amazon Elastic Compute Cloud
  • Amazon EMR*
  • Amazon Glacier*
  • Amazon Kinesis Streams*
  • Amazon RDS (MySQL, Oracle, Postgres*)
  • Amazon Redshift
  • Amazon Simple Notification Service*
  • Amazon Simple Queue Service*
  • Amazon Simple Storage Service
  • Amazon Simple Workflow Service*
  • Amazon Virtual Private Cloud
  • AWS CloudFormation*
  • AWS CloudTrail*
  • AWS Identity and Access Management
  • AWS Key Management Service
  • Elastic Load Balancing

* Services with first-time FedRAMP Moderate authorizations

We continue to work with the FedRAMP Project Management Office (PMO), other regulatory and compliance bodies, and our customers and partners to ensure that we are raising the bar on our customers’ security and compliance needs.

To learn more about how AWS helps customers meet their security and compliance requirements, see the AWS Compliance website. To learn about what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. To review the public posting of our FedRAMP authorizations, see the FedRAMP Marketplace.

– Chris Gile, Senior Manager, AWS Public Sector Risk and Compliance

The Truth Behind the “Kodi Boxes Can Kill Their Owners” Headlines

Post Syndicated from Andy original https://torrentfreak.com/the-truth-behind-the-kodi-boxes-can-kill-their-owners-headlines-171118/

Another week, another batch of ‘Kodi Box Armageddon’ stories. This time it hasn’t been directly about the content they can provide but the physical risks they pose to their owners.

After being primed in advance, the usual British tabloids jumped into action early Thursday, noting that following tests carried out on “illicit streaming devices” (aka Android set-top devices), 100% of them failed to meet UK national electrical safety regulations.

The tests were carried out by Electrical Safety First, a charity which was prompted into action by anti-piracy outfit Federation Against Copyright Theft.

“A series of product safety tests on popular illicit streaming devices entering the UK have found that 100% fail to meet national electrical safety regulations,” a FACT statement reads.

“The news is all the more significant as the Intellectual Property Office (IPO) estimates that more than one million of these illegal devices have been sold in the UK in the last two years, representing a significant risk to the general public.”

Please excuse us for groaning after reading the many sensational headlines stating that “Kodi Boxes Might Kill Their Owners”. This story has absolutely nothing – NOTHING – to do with Kodi or any other piece of software. Quite obviously, software doesn’t catch fire.

So, suspecting that there might be more to this than meets the eye, we decided to look beyond the press releases into the actual Electrical Safety First (ESF) report. While we have no doubt that ESF is extremely competent in its field (it is, no question), the front page of its report is disappointing.

Despite the items sent for testing being straightforward Android-based media players, the ESF report clearly describes itself as examining “illicit streaming devices”. It’s terminology that doesn’t describe the subject matter from an electrical, safety or technical perspective but is pretty convenient for FACT clients Sky and the Premier League.

Nevertheless, the full picture reveals rather more than most of the headlines suggest.

First of all, it’s important to know that ESF tested just nine devices out of the million or so allegedly sold in the UK during the past two years. Even more importantly, every single one of those devices was supplied to ESF by FACT.

Now, we’re not suggesting they were hand-picked to fail but it’s clear that the samples weren’t provided from a neutral source. Also, as we’ll learn shortly, it’s possible to determine in advance if an item will fail to meet UK standards simply by looking at its packaging and casing.

But perhaps even more intriguing is that the electrical testing carried out by ESF related primarily not to the set-top boxes themselves, but to their power supplies. ESF say so themselves.

“The product review relates primarily to the switched mode power supply units for the connection to the mains supply, which were supplied with the devices, to identify any potential risks to consumers such as electric shocks, heating and resistance to fire,” ESF reports.

The set-top boxes themselves were only assessed “in terms of any faults in the marking, warnings and instructions,” the group adds.

So, what we’re really talking about here isn’t dangerous “illicit streaming device” set-top boxes, but the power supply units that come with them. It might seem like a small detail but we’ll come to the vast importance of this later on.

Firstly, however, we should note that none of the equipment supplied by FACT complied with Schedule 1 of the Electrical Equipment (Safety) Regulations 1994. This means that they failed to carry the “Conformité Européenne” or CE logo. That’s unacceptable.

In addition, none of them lived up to the requirements of Schedule 3 of the Electrical Equipment (Safety) Regulations 1994 either, which in part requires the manufacturer’s brand name or trademark to be “clearly printed on the electrical equipment or, where that is not possible, on the packaging.” (That’s how you can tell they’ll definitely fail UK standards, before sending them for testing.)

Also, none of the samples were supplied with “sufficient safety or warning information to ensure the safe and correct use, assembly, installation or maintenance of the equipment.” This represents ‘a technical breach’ of the regulations, ESF reports.

Finally, several of the samples were considered to be a potential risk to their users, either via electric shock and/or fire. That’s an important finding and people who suspect they have such devices at home should definitely take note.

However, the really important point isn’t mentioned in the tabloids, probably since it distracts from the “Kodi Armageddon” narrative which underlies the whole study and subsequent reports.

ESF says that one of the key issues is that the set-top boxes come unbranded, something which breaches safety regulations while making it difficult for consumers to assess whether they’re buying a quality product. Crucially, this is not exclusively a set-top box problem, it is much, MUCH bigger.

“Issues with power supply units or unbranded and counterfeit chargers go beyond illicit streaming devices. In the last year, issues have been reported with other consumer electrical devices, such as laptop chargers and counterfeit phone chargers,” the same ESF report reveals.

“The total annual online sales of mains plug-in chargers is estimated to be in the region of 1.8 million and according to Electrical Safety First, it is likely that most of these sales involve cheap, unbranded chargers.”

So, we looked into this issue of problem power supplies and chargers generally, to see where this report fits into the bigger picture. It transpires it’s a massive problem, all over the UK, across a wide range of products. In fact, Trading Standards reports that 99% of non-genuine Apple chargers bought online “fail a basic safety test”.

But buying from reputable High Street retailers doesn’t help either.

During the past year, Poundworld was fined for selling – wait for it – 72,000 dangerous chargers. Home Bargains was also fined for selling “thousands” of power adaptors that fail to meet UK standards.

“All samples provided failed to comply with Electrical Equipment Safety Regulations and were not marked with the manufacturer’s name,” Trading Standards reports.

That sounds familiar.

So, there you have it. Far from this being an isolated “Kodi Box Crisis” as some have proclaimed, this is a broad issue affecting imported electrical items in general. On this basis, one can’t help but think the tabloids missed a trick here. Think of the power of this headline:

ALL UNBRANDED ELECTRICAL EQUIPMENT CAN KILL, DISCONNECT EVERYTHING

or, alternatively:

PIRATES URGED TO SWITCH TO BRANDED AMAZON FIRESTICKS, SAFER FOR KODI

Perhaps not….

The ESF report can be found here (pdf)


The Pirate Bay & 1337x Must Be Blocked, Austrian Supreme Court Rules

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-1337x-must-be-blocked-austrian-supreme-court-rules-171014/

Following a long-running case, in 2015 Austrian ISPs were ordered by the Commercial Court to block The Pirate Bay and other “structurally-infringing” sites including 1337x.to, isohunt.to, and h33t.to.

The decision was welcomed by the music industry, which looked forward to having more sites blocked in due course.

Soon after, local music rights group LSG sent its lawyers after several other large ISPs urging them to follow suit, or else. However, the ISPs dug in and a year later, in May 2016, things began to unravel. The Vienna Higher Regional Court overruled the earlier decision of the Commercial Court, meaning that local ISPs were free to unblock the previously blocked sites.

The Court concluded that ISP blocks are only warranted if copyright holders have exhausted all their options to take action against those actually carrying out the infringement. This decision was welcomed by the Internet Service Providers Austria (ISPA), which described the decision as an important milestone.

The ISPs argued that only torrent files, not the content itself, were available on the portals. They also had a problem with the restriction of access to legitimate content.

“A problem in this context is that the offending pages also have legal content and it is no longer possible to access that if barriers are put in place,” said ISPA Secretary General Maximilian Schubert.

Taking the case to its ultimate conclusion, the music companies appealed to the Supreme Court. Another year on and its decision has just been published and for the rightsholders, who represent 3,000 artists including The Beatles, Justin Bieber, Eric Clapton, Coldplay, David Guetta, Iggy Azalea, Michael Jackson, Lady Gaga, Metallica, George Michael, One Direction, Katy Perry, and Queen, to name a few, it was worth the effort.

The Court looked at whether “the provision and operation of a BitTorrent platform with the purpose of online file sharing [of non-public domain works]” represents a “communication to the public” under the EU Copyright Directive. Citing the now-familiar BREIN v Filmspeler and BREIN v Ziggo and XS4All cases that both received European Court of Justice rulings earlier this year, the Supreme Court concluded it was.

Citing another Dutch case, in which Playboy publisher Sanoma took on the blog GeenStijl.nl, the Court noted that linking to copyrighted content hosted elsewhere also amounted to a “communication to the public”, a situation mirrored on torrent sites like The Pirate Bay.

“The similarity of the technical procedure in this case when compared to BitTorrent platforms lies in the fact that in both cases the operators of the website did not provide any copyrighted works themselves, but merely provided further information on sites where the protected works were available,” the Court notes in its ruling.

In respect of the potential for blocking legitimate content as well as that infringing copyright, the Court turned the ISPs’ own arguments against them somewhat.

The ISPs had previously argued that blocking The Pirate Bay and other sites was pointless since the torrents they host would still be available elsewhere. The Court noted that point and also found that people can easily upload their torrents to sites that aren’t blocked, since there’s plenty of choice.

The ISPA criticized the Supreme Court’s ruling, noting that in future ISPs will still find themselves being held responsible for decisions concerning blocking.

“We do not support illegal content on the Internet in any way, but consider it extremely questionable that the decision on what is illegal and what is not falls to ISPs, instead of a court,” said ISPA Secretary General Maximilian Schubert.

“Although we find it positive that a court of last resort has taken the decision, the assessment of the website in the first instance continues to be left to the Internet provider. The Supreme Court’s expansion of the circle of sites that can potentially be blocked further complicates this task for the operator and furthers the privatization of law enforcement.

“It is extremely unpleasant that even after more than 10 years of fierce discussion, there is still no compelling legal basis for a court decision on Internet blocking, which puts providers in the role of both judge and hangman.”

Also of interest is ISPA’s stance on how blocking of content fails to solve the underlying issue. When content is blocked, rather than removed, it simply displaces the problem, leaving others to pick up the pieces, the Internet body argues.

“Illegal content is permanently removed from the network by deletion. Everything else is a placebo with extremely dangerous side effects, which can easily be bypassed by both providers and consumers. The only thing that remains is a blocking infrastructure that can be misused for many purposes and, unfortunately, will be used in many places,” Schubert says.

“The current situation, where providers have to block at the rightsholders’ request quasi on the spot if they do not want to engage in time-consuming and cost-intensive litigation, is really not sustainable, so we issue a call to action to the legislature.”

The domains that were listed in the case, many of which are already defunct, are: thepiratebay.se, thepiratebay.gd, thepiratebay.la, thepiratebay.mn, thepiratebay.mu, thepiratebay.sh, thepiratebay.tw, thepiratebay.fm, thepiratebay.ms, thepiratebay.vg, isohunt.to, 1337x.to and h33t.to.

The only domain currently used by The Pirate Bay (thepiratebay.org) is not included in the list; whether it will be added later is unclear.


Hollywood Studios Force ISPs to Block Popcorn Time & Subtitle Sites

Post Syndicated from Andy original https://torrentfreak.com/court-orders-isps-to-block-popcorn-time-subtitle-websites-171113/

In early 2014, a new craze was sweeping the piracy world. Instead of relatively cumbersome text-heavy torrent sites, people were turning to a brand new application called Popcorn Time.

Dubbed the Netflix for Pirates due to its beautiful interface, Popcorn Time was soon a smash hit all over the planet. But with that fame came trouble, with anti-piracy outfits all over the world seeking to shut it down or at least pour cold water on its popularity.

In the meantime, however, the popularity of Kodi skyrocketed, something which pushed Popcorn Time out of the spotlight for a while. Nevertheless, the application in several different forms never went away and it still enjoys an impressive following today. This means that despite earlier action in several jurisdictions, Hollywood still has it on the radar.

The latest development comes out of Norway, where Disney Entertainment, Paramount Pictures Corporation, Columbia Pictures, Twentieth Century Fox Film Corporation, Universal City Studios and Warner Bros. have just taken 14 local Internet service providers to court.

The studios claimed that the ISPs (including Telenor, Nextgentel, Get, Altibox, Telia, Homenet, Ice Norge, Eidsiva Bredbånd and Lynet Internet) should undertake broad blocking action to ensure that three of the most popular Popcorn Time forks (located at popcorn-time.to, popcorntime.sh and popcorn-time.is) can no longer function in the region.

Since site-blocking orders are aimed at websites, there appears to have been much discussion over whether a software application can be considered a website. However, the court ultimately found that this wasn’t really an issue, since each application requires websites to operate.

“Each of the three [Popcorn Time variants] must be considered a ‘site’, even though users access Popcorn Time in a way that is technically different from the way other pirate sites provide users with access to content, and although different components of the Popcorn Time service are retrieved from different domains,” the Oslo District Court’s ruling reads.

In respect of all three releases of Popcorn Time, the Court weighed the pros and cons of blocking, including whether blocking was needed at all. However, it ultimately decided that alternative methods for dealing with the sites do not exist since the rightsholders tried and ultimately failed to get cooperation from the sites’ operators.

“All sites have as their main purpose the facilitation of infringement of protected works by giving the public unauthorized access to movies and TV shows. This happens without regard to the rights of others and imposes major losses on the licensees and the cultural industry in general,” the Court writes.

The Court also supported compelling ISPs to introduce the blocks, noting that they are “an appropriate and proportionate measure” that does not interfere with the Internet service providers’ freedom to operate nor anyone’s else’s right to freedom of expression.

But while the websites in question are located in three places (popcorn-time.to, popcorntime.sh and popcorn-time.is) the Court’s blocking order goes much further. Not only does it cover these key domains but also other third-party sites that Popcorn Time utilizes, such as platforms offering subtitles.

Popcorn-time.to related domains to be blocked: popcorn-time.to, popcorn-time.xyz, popcorn-time.se, iosinstaller.com, video4time.info, thepopcorntime.net, timepopcorn.info, time-popcorn.com, the-pop-corn-time.net, timepopcorn.net, time4videostream.com, ukfrnlge.xyz, opensubtitles.org, onlinesubtitles.com, popcorntime-update.xyz, plus subdomains.

Popcorntime.sh related domains to be blocked: Popcorntime.sh, api-fetch.website, yts.ag, opensubtitles.org, plus subdomains.

Popcorn-time.is related domains to be blocked: popcorn-time.is, yts.ag, yify.is, yts.ph, api-fetch.website, eztvapi.ml and opensubtitles.org, plus subdomains.

Separately, the Court ordered the ISPs to block torrent site YTS.ag and onlinesubtitles.com, opensubtitles.org, plus their subdomains.

Since no one appeared to represent the sites and the ISPs can’t be held responsible if they cooperate, the Court found that the studios had succeeded in their action and are entitled to compensation.

“The Court’s conclusions mean that the plaintiffs have won the case and, in principle, are entitled to compensation for their legal costs from the operators of the sites,” the Court notes. “This means that the operators of sites are ordered to pay the plaintiffs’ costs.”

Those costs amount to 570,000 kr (around US$70,000), an amount which the Court chose to split equally between the three Popcorn Time forks ($23,359 each). It seems unlikely the amounts will ever be recovered although there is still an opportunity for the parties to appeal.

In the meantime the ISPs have just days left to block the sites listed above. Once they’ve been put in place, the blocks will remain in place for five years.


Visualize AWS CloudTrail Logs using AWS Glue and Amazon QuickSight

Post Syndicated from Luis Caro Perez original https://aws.amazon.com/blogs/big-data/streamline-aws-cloudtrail-log-visualization-using-aws-glue-and-amazon-quicksight/

Being able to easily visualize AWS CloudTrail logs gives you a better understanding of how your AWS infrastructure is being used. It can also help you audit and review AWS API calls and detect security anomalies inside your AWS account. To do this, you must be able to perform analytics based on your CloudTrail logs.

In this post, I walk through using AWS Glue and AWS Lambda to convert AWS CloudTrail logs from JSON to a query-optimized format dataset in Amazon S3. I then use Amazon Athena and Amazon QuickSight to query and visualize the data.

Solution overview

To process CloudTrail logs, you must implement the following architecture:

CloudTrail delivers log files to a folder in an Amazon S3 bucket. To correctly crawl these logs, you modify the file contents and folder structure using an Amazon S3-triggered Lambda function that stores the transformed files in a single folder of an S3 bucket. When the files are in a single folder, AWS Glue scans the data, converts it into Apache Parquet format, and catalogs it to allow for querying and visualization using Amazon Athena and Amazon QuickSight.

Walkthrough

Let’s look at the steps that are required to build the solution.

Set up CloudTrail logs

First, you need to set up a trail that delivers log files to an S3 bucket. To create a trail in CloudTrail, follow the instructions in Creating a Trail.

When you finish, the trail settings page should look like the following screenshot:

In this example, I set up log files to be delivered to the cloudtraillfcaro bucket.

Consolidate CloudTrail reports into a single folder using Lambda

AWS CloudTrail delivers log files using the following folder structure inside the configured Amazon S3 bucket:

AWSLogs/ACCOUNTID/CloudTrail/REGION/YEAR/MONTH/DAY/filename.json.gz

Additionally, log files have the following structure:

{
    "Records": [{
        "eventVersion": "1.01",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDAJDPLRKLG7UEXAMPLE",
            "arn": "arn:aws:iam::123456789012:user/Alice",
            "accountId": "123456789012",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "Alice",
            "sessionContext": {
                "attributes": {
                    "mfaAuthenticated": "false",
                    "creationDate": "2014-03-18T14:29:23Z"
                }
            }
        },
        "eventTime": "2014-03-18T14:30:07Z",
        "eventSource": "cloudtrail.amazonaws.com",
        "eventName": "StartLogging",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "72.21.198.64",
        "userAgent": "signin.amazonaws.com",
        "requestParameters": {
            "name": "Default"
        },
        "responseElements": null,
        "requestID": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
        "eventID": "3074414d-c626-42aa-984b-68ff152d6ab7"
    },
    ... additional entries ...
    ]
}

If AWS Glue crawlers are used to catalog these files as they are written, the following obstacles arise:

  1. AWS Glue identifies a different table per folder, because the folders don’t follow a traditional partition format.
  2. Based on the structure of the file content, AWS Glue identifies the tables as having a single column of type array.
  3. CloudTrail logs have JSON attributes that use uppercase letters. According to the Best Practices When Using Athena with AWS Glue, it is recommended that you convert these to lowercase.

To have AWS Glue catalog all log files in a single table with all the columns describing each event, implement the following Lambda function:

from __future__ import print_function
import json
import urllib
import boto3
import gzip

# Note: this handler targets the Python 2 Lambda runtime
# (urllib.unquote_plus does not exist under Python 3).
s3 = boto3.resource('s3')
client = boto3.client('s3')

def convert_columns_to_lowercase(obj):
    # Rename every key of a decoded JSON object to lowercase, following
    # the Athena/Glue best practice mentioned above.
    for key in list(obj.keys()):
        new_key = key.lower()
        if new_key != key:
            obj[new_key] = obj[key]
            del obj[key]
    return obj


def lambda_handler(event, context):
    # The S3 event carries the bucket and key of the newly delivered log file.
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    print(bucket)
    print(key)
    try:
        # Flatten the CloudTrail folder hierarchy into a single 'flatfiles' prefix.
        newKey = 'flatfiles/' + key.replace("/", "")
        client.download_file(bucket, key, '/tmp/file.json.gz')
        with gzip.open('/tmp/out.json.gz', 'w') as output, gzip.open('/tmp/file.json.gz', 'rb') as file:
            i = 0
            for line in file:
                # object_hook lowercases all keys, so 'Records' becomes 'records'.
                # Each element of the array is written out as its own line.
                for record in json.loads(line, object_hook=convert_columns_to_lowercase)['records']:
                    if i != 0:
                        output.write("\n")
                    output.write(json.dumps(record))
                    i += 1
        client.upload_file('/tmp/out.json.gz', bucket, newKey)
        return "success"
    except Exception as e:
        print(e)
        print('Error processing object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

The function goes over each element of the records array, changes uppercase letters to lowercase in column names, and inserts each element of the array as a single line of a new file. The new file is saved inside a flatfiles folder created by the function without any subfolders in the S3 bucket.
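
If you want to sanity-check the function locally before wiring up the S3 trigger, you can hand it a synthetic event of the shape S3 sends. Here is a minimal sketch, assuming it runs in the same module as the handler above; the object key is a placeholder for a real gzipped CloudTrail log already in your bucket:

# Hypothetical local test: build the event shape S3 sends to Lambda and
# invoke the handler directly. Replace the key with a real log file.
test_event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'cloudtraillfcaro'},
            'object': {'key': 'AWSLogs/123456789012/CloudTrail/us-east-1/2017/10/23/example.json.gz'}
        }
    }]
}
print(lambda_handler(test_event, None))  # prints "success" and writes to flatfiles/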

The function should have a role containing a policy with at least the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::cloudtraillfcaro/*",
                "arn:aws:s3:::cloudtraillfcaro"
            ],
            "Effect": "Allow"
        }
    ]
}

In this example, CloudTrail delivers logs to the cloudtraillfcaro bucket. Make sure that you replace this name with your bucket name in the policy. For more information about how to work with inline policies, see Working with Inline Policies.

After the Lambda function is created, you can set up the following trigger using the Triggers tab on the AWS Lambda console.

Choose Add trigger, and choose S3 as a source of the trigger.

After choosing the source, configure the following settings:

In the trigger, any file that is written to the path for the log files—which in this case is AWSLogs/119582755581/CloudTrail/—is processed. Make sure that the Enable trigger check box is selected and that the bucket and prefix parameters match your use case.
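
The same trigger can also be set up programmatically. Here is a hedged boto3 sketch of an equivalent configuration; the function ARN is a placeholder, and the Lambda function must separately grant S3 permission to invoke it (the console does this for you):

import boto3

s3 = boto3.client('s3')
# The prefix restricts the trigger to CloudTrail's delivery path, so the
# files written to flatfiles/ don't re-trigger the function.
s3.put_bucket_notification_configuration(
    Bucket='cloudtraillfcaro',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:119582755581:function:cloudtrail-flattener',  # hypothetical ARN
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'AWSLogs/119582755581/CloudTrail/'},
                {'Name': 'suffix', 'Value': '.json.gz'}
            ]}}
        }]
    }
)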

After you set up the function and receive log files, the bucket (in this case cloudtraillfcaro) should contain the processed files inside the flatfiles folder.

Catalog source data

Once the files are processed by the Lambda function, set up a crawler named cloudtrail to catalog them.

The crawler must point to the flatfiles folder.

All the crawlers and AWS Glue jobs created for this solution must have a role with the AWSGlueServiceRole managed policy and an inline policy with permissions to modify the S3 buckets used by the Lambda function. For more information, see Working with Managed Policies.

The role should look like the following:

In this example, the inline policy named s3perms contains the permissions to modify the S3 buckets.

After you choose the role, you can schedule the crawler to run on demand.

A new database is created, and the crawler is set to use it. In this case, the cloudtrail database is used for all the tables.
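
If you prefer to script the console steps, a boto3 sketch along the following lines should be equivalent; the role name is a placeholder and needs the permissions described above:

import boto3

glue = boto3.client('glue')
# Create a crawler that catalogs the flattened logs into the 'cloudtrail' database.
glue.create_crawler(
    Name='cloudtrail',
    Role='GlueCloudTrailRole',  # hypothetical role name
    DatabaseName='cloudtrail',
    Targets={'S3Targets': [{'Path': 's3://cloudtraillfcaro/flatfiles/'}]}
)
glue.start_crawler(Name='cloudtrail')  # run it on demand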

After the crawler runs, a single table should be created in the catalog with the following structure:

The table should contain the following columns:

Create and run the AWS Glue job

To convert all the CloudTrail logs to a columnar store in Parquet, set up an AWS Glue job by following these steps.

Upload the following script into a bucket in Amazon S3:

import sys
from awsglue.transforms import ResolveChoice
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME, TempDir]
# TempDir is resolved here because relationalize() below needs an S3 scratch location.
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'TempDir'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the flattened CloudTrail records cataloged by the 'cloudtrail' crawler.
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "cloudtrail", table_name = "flatfiles", transformation_ctx = "datasource0")
# Resolve ambiguous column types into structs, then flatten the nested
# structs into top-level columns with relationalize.
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "make_struct", transformation_ctx = "resolvechoice1")
relationalized1 = resolvechoice1.relationalize("trail", args["TempDir"]).select("trail")
# Write the result to S3 as Parquet.
datasink = glueContext.write_dynamic_frame.from_options(frame = relationalized1, connection_type = "s3", connection_options = {"path": "s3://cloudtraillfcaro/parquettrails"}, format = "parquet", transformation_ctx = "datasink")
job.commit()

In the example, you load the script as a file named cloudtrailtoparquet.py. Make sure that you modify the script and update {"path": "s3://cloudtraillfcaro/parquettrails"} with the destination in which you want to store your results.

After uploading the script, add a new AWS Glue job. Choose a name and role for the job, and choose the option of running the job from An existing script that you provide.

To avoid processing the same data twice, enable the Job bookmark setting in the Advanced properties section of the job properties.

Choose Next twice, and then choose Finish.

If logs are already in the flatfiles folder, you can run the job on demand to generate the first set of results.
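
If you’d rather define and launch the job from code, a rough boto3 equivalent of the console steps above might look like this; the job name, role, and script path are assumptions, not values prescribed by this walkthrough:

import boto3

glue = boto3.client('glue')
glue.create_job(
    Name='cloudtrail-to-parquet',  # hypothetical job name
    Role='GlueCloudTrailRole',     # same role as the crawlers
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://cloudtraillfcaro/scripts/cloudtrailtoparquet.py'
    },
    DefaultArguments={
        '--TempDir': 's3://cloudtraillfcaro/temp/',
        # The argument behind the console's "Job bookmark" setting:
        '--job-bookmark-option': 'job-bookmark-enable'
    }
)
glue.start_job_run(JobName='cloudtrail-to-parquet')  # run on demand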

Once the job starts running, wait for it to complete.

When the job is finished, its Run status should be Succeeded. After that, you can verify that the Parquet files are written to the Amazon S3 location.

Catalog results

To be able to process results from Athena, you can use an AWS Glue crawler to catalog the results of the AWS Glue job.

In this example, the crawler is set to use the same database as the source named cloudtrail.

You can run the crawler using the console. When the crawler finishes running and has processed the Parquet results, a new table should be created in the AWS Glue Data Catalog. In this example, it’s named parquettrails.

The table should have the classification set to parquet.

It should have the same columns as the flatfiles table, with the exception of the struct type columns, which should be relationalized into several columns:

In this example, notice how the requestparameters column, which was a struct in the original table (flatfiles), was transformed to several columns—one for each key value inside it. This is done using a transformation native to AWS Glue called relationalize.

Query results with Athena

After crawling the results, you can query them using Athena. For example, to query what events took place in the time frame between 2017-10-23T12:00:00 and 2017-10-23T13:00:00, use the following select statement:

select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc;

Be sure to replace cloudtrail.parquettrails with the names of your database and table that references the Parquet results. Replace the datetimes with an hour when your account had activity and was processed by the AWS Glue job.
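
The same query can also be submitted programmatically. A minimal boto3 sketch follows; the output location is an assumption, as Athena requires an S3 path for query results:

import boto3

athena = boto3.client('athena')
response = athena.start_query_execution(
    QueryString="""
        select * from cloudtrail.parquettrails
        where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
        order by eventtime asc
    """,
    QueryExecutionContext={'Database': 'cloudtrail'},
    ResultConfiguration={'OutputLocation': 's3://cloudtraillfcaro/athena-results/'}  # hypothetical results location
)
print(response['QueryExecutionId'])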

Visualize results using Amazon QuickSight

Once you can query the data using Athena, you can visualize it using Amazon QuickSight. Before connecting Amazon QuickSight to Athena, be sure to grant QuickSight access to Athena and the associated S3 buckets in your account. For more information, see Managing Amazon QuickSight Permissions to AWS Resources. You can then create a new data set in Amazon QuickSight based on the Athena table that you created.

After setting up permissions, you can create a new analysis in Amazon QuickSight by choosing New analysis.

Then add a new data set.

Choose Athena as the source.

Give the data source a name (in this case, I named it cloudtrail).

Choose the name of the database and the table referencing the Parquet results.

Then choose Visualize.

After that, you should see the following screen:

Now you can create some visualizations. First, search for the sourceipaddress column, and drag it to the AutoGraph section.

You can see a list of the IP addresses that you have used to interact with AWS. To review whether these IP addresses have been used from IAM users, internal AWS services, or roles, use the type value that is inside the useridentity field of the original log files. Thanks to the relationalize transformation, this value is available as the useridentity.type column. After the column is added into the Group/Color box, the visualization should look like the following:

You can now see and distinguish the most used IPs and whether they are used from roles, AWS services, or IAM users.

After following all these steps, you can use Amazon QuickSight to add different columns from CloudTrail and perform different types of visualizations. You can build operational dashboards that continuously monitor AWS infrastructure usage and access. You can share those dashboards with others in your organization who might need to see this data.

Summary

In this post, you saw how you can use a simple Lambda function and an AWS Glue script to convert text files into Parquet to improve Athena query performance and data compression. The post also demonstrated how to use AWS Lambda to preprocess files in Amazon S3 and transform them into a format that is recognizable by AWS Glue crawlers.

This example used AWS CloudTrail logs, but you can apply the proposed solution to any set of files that, after preprocessing, can be cataloged by AWS Glue.


Additional Reading

Learn how to Harmonize, Query, and Visualize Data from Various Providers using AWS Glue, Amazon Athena, and Amazon QuickSight.


About the Authors

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

Pirate Bay Suffers Downtime, Tor and Proxies are Up

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-down-for-24-hours-tor-and-proxies-are-up-171109/

The Pirate Bay has been unreachable for roughly a day now.

The site currently displays a CloudFlare error message across the entire site, with the CDN provider referring to an “unknown error.”

No further details are available to us and there is no known ETA for the site’s return. However, judging from past experience, it’s likely a small technical issue that needs fixing.

Pirate Bay downtime

The Pirate Bay has had quite a few stints of downtime in recent months. The popular torrent site usually returns after several hours, but an outage of more than 24 hours has happened before as well.

TorrentFreak reached out to the TPB team but we have yet to hear more about the issue.

Amid the downtime, there’s still some good news for those who desperately need to access the notorious torrent site. TPB is still available via its .onion address on the Tor network, accessible using the popular Tor Browser, for example. The Tor traffic goes through a separate server and works just fine.

The same is true for The Pirate Bay’s proxy sites, most of which are still working just fine.

The main .org domain will probably be back in action soon enough, but seasoned TPB users will probably know the drill by now…

The Pirate Bay is not the only torrent site facing problems at the moment. 1337x.to is also suffering downtime. A week ago the site’s operator said that the site was under attack, which may still be ongoing. Meanwhile, 1337x’s official proxy is still online.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Top 10 Torrent Site TorrentDownloads Blocked By Chrome and Firefox

Post Syndicated from Andy original https://torrentfreak.com/top-10-torrent-site-torrentdownloads-blocked-by-chrome-and-firefox-171107/

While torrent sites aren’t as popular as they once were, tens of millions of people still use them on a daily basis.

Content availability is rich and the majority of the main movie, TV show, game and software releases appear on them within minutes, offering speedy and convenient downloads. Nevertheless, things don’t always go as smoothly as people might like.

Over the past couple of days that became evident to visitors of TorrentDownloads, one of the Internet’s most popular torrent sites.

TorrentDownloads – usually a reliable and tidy platform

Instead of viewing the rather comprehensive torrent index that made the Top 10 Most Popular Torrent Site lists in 2016 and 2017, visitors receive a warning.

“Attackers on torrentdownloads.me may trick you into doing something dangerous like installing software or revealing your personal information (for example, passwords, phone numbers or credit cards),” Chrome users are warned.

“Google Safe Browsing recently detected phishing on torrentdownloads.me. Phishing sites pretend to be other websites to trick you.”

Chrome warning

People using Firefox also receive a similar warning.

“This web page at torrentdownloads.me has been reported as a deceptive site and has been blocked based on your security preferences,” the browser warns.

“Deceptive sites are designed to trick you into doing something dangerous, like installing software, or revealing your personal information, like passwords, phone numbers or credit cards.”

A deeper check on Google’s malware advisory service echoes the same information, noting that the site contains “harmful content” that may “trick visitors into sharing personal info or downloading software.” Checks carried out with MalwareBytes reveal that it blocks the domain too.

TorrentFreak spoke with the operator of TorrentDownloads who told us that the warnings had been triggered by a rogue advertiser which was immediately removed from the site.

“We have already requested a review with Google Webmaster after we removed an old affiliates advertiser and changed the links on the site,” he explained.

“In Google Webmaster they state that the request will be processed within 72 Hours, so I think it will be reviewed today when 72 hours are completed.”

This statement suggests that the site itself wasn’t the direct culprit, but ads hosted elsewhere. That being said, these kinds of warnings look very scary to visitors and sites have to take responsibility, so completely expelling the bad player from the platform was the correct choice. Nevertheless, people shouldn’t be too surprised at the appearance of suspect ads.

Many top torrent sites have suffered from similar warnings, including The Pirate Bay and KickassTorrents, which are often a product of anti-piracy efforts from the entertainment industries.

In the past, torrent and streaming sites could display ads from top-tier providers with few problems. However, in recent years, the so-called “follow the money” anti-piracy tactic has forced the majority away from pirate sites, meaning they now have to do business with ad networks that may not always be as tidy as one might hope.

While these warnings are the very last thing the sites in question want (they’re hardly good for increasing visitor numbers), they’re a gift to entertainment industry groups.

At the same time as the industries are forcing decent ads away, these alerts provide a great opportunity to warn users about the potential problems left behind as a result. A loose analogy might be deliberately cutting off the beer supply to an unlicensed bar, then warning people not to go there because the homebrew sucks. In some cases that can be true, but it’s a problem only being exacerbated by industry tactics.

It’s worth noting that no warnings are received by visitors to TorrentDownloads using Android devices, meaning that desktop users were probably the only people at risk. In any event, it’s expected that the warnings will disappear during the next day, so the immediate problems will be over. As far as TF is informed, the offending ads were removed days ago.

That appears to be backed up by checks carried out on a number of other malware scanning services. Norton, Opera, SiteAdvisor, Spamhaus, Yandex and ESET all declare the site to be clean.

Technical Chrome and Firefox users who are familiar with these types of warnings can take steps (Chrome, FF) to bypass the blocks, if they really must.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Now Available – Compute-Intensive C5 Instances for Amazon EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-compute-intensive-c5-instances-for-amazon-ec2/

I’m thrilled to announce that the new compute-intensive C5 instances are available today in six sizes for launch in three AWS regions!

These instances are designed for compute-heavy applications like batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. The new instances offer a 25% price/performance improvement over the C4 instances, rising to over 50% for some workloads. They also have additional memory per vCPU and, for code that can make use of the new AVX-512 instructions, twice the performance for vector and floating point workloads.

Over the years we have been working non-stop to provide our customers with the best possible networking, storage, and compute performance, with a long-term focus on offloading many types of work to dedicated hardware designed and built by AWS. The C5 instance type incorporates the latest generation of our hardware offloads, and also takes another big step forward with the addition of a new hypervisor that runs hand-in-glove with our hardware. The new hypervisor allows us to give you access to all of the processing power provided by the host hardware, while also making performance even more consistent and further raising the bar on security. We’ll be sharing many technical details about it at AWS re:Invent.

The New Instances
The C5 instances are available in six sizes:

Instance Name  vCPUs  RAM      EBS Bandwidth    Network Bandwidth
c5.large       2      4 GiB    Up to 2.25 Gbps  Up to 10 Gbps
c5.xlarge      4      8 GiB    Up to 2.25 Gbps  Up to 10 Gbps
c5.2xlarge     8      16 GiB   Up to 2.25 Gbps  Up to 10 Gbps
c5.4xlarge     16     32 GiB   2.25 Gbps        Up to 10 Gbps
c5.9xlarge     36     72 GiB   4.5 Gbps         10 Gbps
c5.18xlarge    72     144 GiB  9 Gbps           25 Gbps

Each vCPU is a hardware hyperthread on a 3.0 GHz Intel Xeon Platinum 8000-series processor. This custom processor, optimized for EC2, gives you full control over the C-states on the two largest sizes, allowing you to run a single core at up to 3.5 GHz using Intel Turbo Boost Technology.

As you can see from the table, the four smallest instance sizes offer substantially more EBS and network bandwidth than the previous generation of compute-intensive instances.

Because all networking and storage functionality is implemented in hardware, C5 instances require HVM AMIs that include drivers for the Elastic Network Adapter (ENA) and NVMe. The latest Amazon Linux, Microsoft Windows, Ubuntu, RHEL, CentOS, SLES, Debian, and FreeBSD AMIs all support C5 instances. If you are doing machine learning inferencing, or other compute-intensive work, be sure to check out the most recent version of the Intel Math Kernel Library. It has been optimized for the Intel® Xeon® Platinum processor and has the potential to greatly accelerate your work.
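As a quick sanity check before launching (a sketch, not from the post; the AMI ID is a placeholder), you can ask the EC2 API whether an AMI advertises ENA support:

import boto3

ec2 = boto3.client("ec2")

# Describe the candidate AMI and inspect its EnaSupport flag
image = ec2.describe_images(
    ImageIds=["ami-0123456789abcdef0"])["Images"][0]
print(image.get("EnaSupport", False))  # must be True for C5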

In order to remain compatible with instances that use the Xen hypervisor, the device names for EBS volumes will continue to use the existing /dev/sd and /dev/xvd prefixes. The device name that you provide when you attach a volume to an instance is not used because the NVMe driver assigns its own device name (read Amazon EBS and NVMe to learn more).

The nvme command displays additional information about each volume (install it using sudo yum -y install nvme-cli if necessary).

The SN field in the output can be mapped to an EBS volume ID by inserting a “-” after the “vol” prefix (sadly, the NVMe SN field is not long enough to store the entire ID). Here’s a simple script that uses this information to create an EBS snapshot of each attached volume:

$ # turn each NVMe serial number ("vol0123...") into an EBS volume ID
$ # ("vol-0123...") and snapshot every attached volume
$ sudo nvme list | \
  awk '/dev/ {print(gensub("vol", "vol-", 1, $2))}' | \
  xargs -n 1 aws ec2 create-snapshot --volume-id

With a little more work (and a lot of testing), you could create a script that expands EBS volumes that are getting full.
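One hedged sketch of that idea uses the Elastic Volumes ModifyVolume API; the volume ID, mount point, and thresholds below are placeholders, and growing the partition and filesystem afterwards (for example, with growpart and resize2fs) is still up to you:

import shutil

import boto3

ec2 = boto3.client("ec2")

def expand_if_full(volume_id, mount_point, threshold=0.8, factor=1.5):
    # Check filesystem usage at the mount point
    usage = shutil.disk_usage(mount_point)
    if usage.used / usage.total < threshold:
        return  # still enough headroom

    size_gib = ec2.describe_volumes(
        VolumeIds=[volume_id])["Volumes"][0]["Size"]

    # Elastic Volumes resizes in place, no detach required
    ec2.modify_volume(VolumeId=volume_id, Size=int(size_gib * factor))

expand_if_full("vol-0123456789abcdef0", "/")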

Getting to C5
As I mentioned earlier, our effort to offload work to hardware accelerators has been underway for quite some time. Here’s a recap:

CC1 – Launched in 2010, the CC1 was designed to support scale-out HPC applications. It was the first EC2 instance to support 10 Gbps networking and one of the first to support HVM virtualization. The network fabric that we designed for the CC1 (based on our own switch hardware) has become the standard for all AWS data centers.

C3 – Launched in 2013, the C3 introduced Enhanced Networking and uses dedicated hardware accelerators to support the software defined network inside of each Virtual Private Cloud (VPC). Hardware virtualization removes the I/O stack from the hypervisor in favor of direct access by the guest OS, resulting in higher performance and reduced variability.

C4 – Launched in 2015, the C4 instances are EBS Optimized by default via a dedicated network connection, and also offload EBS processing (including CPU-intensive crypto operations for encrypted EBS volumes) to a hardware accelerator.

C5 – Launched today, the C5 instances are powered by a hypervisor that allows practically all of the resources of the host CPU to be devoted to customer instances. The ENA networking and the NVMe interface to EBS are both powered by hardware accelerators. The instances do not require (or support) the Xen paravirtual networking or block device drivers, both of which have been removed to increase efficiency.

Going forward, we’ll use this hypervisor to power other instance types and plan to share additional technical details in a set of AWS re:Invent sessions.

Launch a C5 Today
You can launch C5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.

One quick note before I go: The current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of C5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.

Jeff;

[$] A report from the Realtime Summit

Post Syndicated from jake original https://lwn.net/Articles/738001/rss

The 2017 Realtime Summit (RT-Summit) was hosted by the Czech Technical University on Saturday, October 21 in Prague, just before the Embedded Linux Conference. It was attended by more than 50 individuals with backgrounds ranging from academic to industrial, and some local students daring enough to spend a day with that group. Guest author Mathieu Poirier provides summaries of some of the talks from the summit.

Join Us for AWS Security Week November 6–9 in New York City

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/join-us-for-aws-security-week-november-6-9-in-new-york-city/

Want to learn how to securely deploy applications and services in the AWS Cloud? Join us in New York City at the AWS Pop-up Loft for AWS Security Week, November 6–9. At this free technical event, you will learn security concepts and strategies from AWS security professionals in sessions, demos, and labs.

Here is a sampling of the security offerings during the week:

  • Become a Cloud Security Ninja
  • Data Protection in Transit and at Rest
  • Soup to Nuts: Identity Federation for AWS
  • Brewing an Effective Cloud Security Strategy

Learn more about the available sessions and register!

– Craig

Trolls Want to Seize Alleged Movie Pirates’ Computers

Post Syndicated from Andy original https://torrentfreak.com/trolls-want-to-seize-alleged-movie-pirates-computers-171101/

Five years ago, a massive controversy swept Finland. Local anti-piracy group CIAPC (known locally as TTVK) sent a letter to a man they accused of illegal file-sharing.

The documents advised the man to pay a settlement of 600 euros and sign a non-disclosure document, to make a threatened file-sharing lawsuit disappear. He made the decision not to cave in.

Then, in November 2012, there was an 8am call at the man’s door. Police, armed with a search warrant, said they were there to find evidence of illicit file-sharing. Eventually the culprit was found. It was the man’s 9-year-old daughter who had downloaded an album by local multi-platinum-selling songstress Chisu from The Pirate Bay, a whole year earlier.

Police went on to seize the child’s Winnie the Pooh-branded laptop and Chisu was horrified, posting public apologies on the Internet to her young fans. Five years on, it seems that pro-copyright forces in Finland are treading the same path.

Turre Legal, a law firm involved in defending file-sharing matters, has issued a warning that copyright trolls have filed eight new cases at the Market Court, the venue for previous copyright battles in the country.

“According to information provided by the Market Court, Crystalis Entertainment, previously active in such cases, filed three new copyright cases and initiated five pre-trial applications in October 2017,” says lawyer Herkko Hietanen.

The involvement of Crystalis Entertainment adds further controversy into the mix. The company isn’t an official movie distributor; instead, it obtained the rights to distribute content on BitTorrent networks. It doesn’t actually do so, preferring to bring prosecutions against file-sharers.

Like the earlier ‘Chisu’ case, the trolls’ law firms have moved extremely slowly. Hietanen reports that some of the new cases reference alleged file-sharing that took place two years ago in 2015.

“It would seem that right-holders want to show that even old cases may have to face justice,” says Hietanen.

“However, applications for enforceability may be a pre-requisite for computer confiscation by a bailiff for independent investigations. It is possible that seizures of the teddy bears of the past years will make a comeback,” he added, referencing the ‘Chisu’ case.

Part of the reason behind the seizure requests is that some people defending against copyright trolls have been obtaining reports from technical experts who have verified that no file-sharing software is present on their machines. The trolls say that this is a somewhat futile exercise since any ‘clean’ machine can be presented for inspection. On this basis, seizure on site is a better option.

While the moves for seizure are somewhat aggressive, things haven’t been getting easier for copyright trolls in Finland recently.

In February 2017, an alleged file-sharer won his case when a court ruled that copyright holders lacked sufficient evidence to show that the person in question downloaded the files, in part because his Wi-Fi network was open to the public.

Then, in the summer of 2017, the Market Court tightened the parameters under which Internet service providers are compelled to hand over the identities of suspected file-sharers to copyright owners.

The Court determined that this could only happen in serious cases of unlawful distribution. This, Hietanen believes, is partially the reason that the groups behind the latest cases are digging up old infringements.

“After the verdict of the summer, I assumed that rightsholders would have to operate with old information, at least for a while,” he says. “Rightsholders want to show that litigation is still possible.”

The big question, of course, is what people should do if they receive a settlement letter. In some jurisdictions, the advice is to ignore, until proper legal documentation arrives.

Hietanen says the matter in Finland is serious and should be treated as such. There’s always a possibility that after failing to receive a response, a copyright holder could go to court to obtain a default judgment, meaning the alleged file-sharer is immediately found guilty.

In the current cases, the Market Court will now have to decide whether unannounced seizures are required to preserve evidence. For cases already dating back two years, there will be plenty of discussions to be had, for and against. But in the meantime, Hedman Partners, the company representing the copyright trolls, warn that more cases are on the way.

“We have put in place new requests for information after the summer. We have a large number of complaints in preparation. More are coming,” lawyer Joni Hatanmaa says.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Online Tech Talks – November 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-november-2017/

Leaves are crunching under my boots, Halloween is tomorrow, and pumpkin is having its annual moment in the sun – it’s fall everybody! And just in time to celebrate, we have whipped up a fresh batch of pumpkin spice Tech Talks. Grab your planner (Outlook calendar) and pencil these puppies in. This month we are covering re:Invent, serverless, and everything in between.

November 2017 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of November. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, November 6

Compute

9:00 – 9:40 AM PDT: Set it and Forget it: Auto Scaling Target Tracking Policies

Tuesday, November 7

Big Data

9:00 – 9:40 AM PDT: Real-time Application Monitoring with Amazon Kinesis and Amazon CloudWatch

Compute

10:30 – 11:10 AM PDT: Simplify Microsoft Windows Server Management with Amazon Lightsail

Mobile

12:00 – 12:40 PM PDT: Deep Dive on Amazon SES What’s New

Wednesday, November 8

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Compute

12:00 – 12:40 PM PDT: Run Your CI/CD Pipeline at Scale for a Fraction of the Cost

Thursday, November 9

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Containers

9:00 – 9:40 AM PDT: Managing Container Images with Amazon ECR

Big Data

12:00 – 12:40 PM PDT: Amazon Elasticsearch Service Security Deep Dive

Monday, November 13

re:Invent

10:30 – 11:10 AM PDT: AWS re:Invent 2017: Know Before You Go

5:00 – 5:40 PM PDT: AWS re:Invent 2017: Know Before You Go

Tuesday, November 14

AI

9:00 – 9:40 AM PDT: Sentiment Analysis Using Apache MXNet and Gluon

10:30 – 11:10 AM PDT: Bringing Characters to Life with Amazon Polly Text-to-Speech

IoT

12:00 – 12:40 PM PDT: Essential Capabilities of an IoT Cloud Platform

Enterprise

2:00 – 2:40 PM PDT: Everything you wanted to know about licensing Windows workloads on AWS, but were afraid to ask

Wednesday, November 15

Security & Identity

9:00 – 9:40 AM PDT: How to Integrate AWS Directory Service with Office365

Storage

10:30 – 11:10 AM PDT: Disaster Recovery Options with AWS

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Windows Workloads

Thursday, November 16

Serverless

9:00 – 9:40 AM PDT: Building Serverless Websites with Lambda@Edge

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Deploy .NET Code to AWS from Visual Studio

– Sara

Assassins Creed Origin DRM Hammers Gamers’ CPUs

Post Syndicated from Andy original https://torrentfreak.com/assassins-creed-origin-drm-hammers-gamers-cpus-171030/

There’s a war taking place on the Internet. On one side: gaming companies, publishers, and anti-piracy outfits. On the other: people who, for varying reasons, want to play and/or test games for free.

While these groups are free to battle it out in a manner of their choosing, innocent victims are getting caught up in the crossfire. People who pay for their games without question should be considered part of the solution, not the problem, but whether they like it or not, they’re becoming collateral damage in an increasingly desperate conflict.

For the past several days, some players of the recently-released Assassin’s Creed Origins have emerged as what appear to be examples of this phenomenon.

“What is the normal CPU usage for this game?” a user asked on Steam forums. “I randomly get between 60% to 90% and I’m wondering if this is too high or not.”

The individual reported running an i7 processor, which is no slouch. However, for those running a CPU with less oomph, matters are even worse. Another gamer, running an i5, reported a 100% load on all four cores of his processor, even when lower graphics settings were selected in an effort to free up resources.

“It really doesn’t seem to matter what kind of GPU you are using,” another complained. “The performance issues most people here are complaining about are tied to CPU getting maxed out 100 percent at all times. This results in FPS [frames per second] drops and stutter. As far as I know there is no workaround.”

So what could be causing these problems? Badly configured machines? Terrible coding on the part of the game maker?

According to Voksi, whose ‘Revolt’ team cracked Wolfenstein II: The New Colossus before its commercial release last week, it’s none of these. The entire problem is directly connected to desperate anti-piracy measures.

As widely reported (1,2), the infamous Denuvo anti-piracy technology has been taking a beating lately. Cracking groups are dismantling it in a matter of days, sometimes just hours, making the protection almost pointless. For Assassin’s Creed Origins, however, Ubisoft decided to double up, Voksi says.

“Basically, Ubisoft have implemented VMProtect on top of Denuvo, tanking the game’s performance by 30-40%, demanding that people have a more expensive CPU to play the game properly, only because of the DRM. It’s anti-consumer and a disgusting move,” he told TorrentFreak.

Voksi says he knows all of this because he got an opportunity to review the code after obtaining the binaries for the game. Here’s how it works.

While Denuvo sits underneath doing its thing, it’s clearly vulnerable to piracy, given recent advances in anti-anti-piracy technology. So, in a belt-and-braces approach, Ubisoft opted to deploy another technology – VMProtect – on top.

VMProtect is software that protects other software against reverse engineering and cracking. Although the technicalities are different, its aims appear to be somewhat similar to Denuvo, in that both seek to protect underlying systems from being subverted.

“VMProtect protects code by executing it on a virtual machine with non-standard architecture that makes it extremely difficult to analyze and crack the software. Besides that, VMProtect generates and verifies serial numbers, limits free upgrades and much more,” the company’s marketing reads.

VMProtect and Denuvo didn’t appear to be getting on all that well earlier this year but they later settled their differences. Now their systems are working together, to try and solve the anti-piracy puzzle.

“It seems that Ubisoft decided that Denuvo is not enough to stop pirates in the crucial first days [after release] anymore, so they have implemented an iteration of VMProtect over it,” Voksi explains.

“This is great if you are looking to save your game from those pirates, because this layer of VMProtect will make Denuvo a lot more harder to trace and keygen than without it. But if you are a legit customer, well, it’s not that great for you since this combo could tank your performance by a lot, especially if you are using a low-mid range CPU. That’s why we are seeing 100% CPU usage on 4 core CPUs right now for example.”

The situation is reportedly so bad that some users are getting the dreaded BSOD (blue screen of death) due to their machines overheating after just an hour or two’s play. It remains unclear whether these crashes are indeed due to the VMProtect/Denuvo combination but the perception is that these anti-piracy measures are at the root of users’ CPU utilization problems.

While gaming companies can’t be blamed for wanting to protect their products, there’s no sense in punishing legitimate consumers with an inferior experience. The great irony, of course, is that when Assassin’s Creed gets cracked (if that indeed happens anytime soon), pirates will be the only ones playing it without the hindrance of two lots of anti-piracy tech battling over resources.

The big question now, however, is whether the anti-piracy wall will stand firm. If it does, it raises the bizarre proposition that future gamers might need to buy better hardware in order to accommodate anti-piracy technology.

And people worry about bitcoin mining……?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] The state of the realtime union

Post Syndicated from jake original https://lwn.net/Articles/737367/rss

The 2017 Realtime Summit was held October 21 at Czech Technical University in Prague to discuss all manner of topics related to realtime Linux. Nearly two years ago, a collaborative project was formed with the goal of mainlining the realtime patch set. At the summit, project lead Thomas Gleixner reported on the progress that has been made and the plans for the future.